Women in Product 2019

Women in Product 2019 Conference Notes – bit rough but referenceable knowledge
Peggy Alfred
  • Finding purpose
  • what you’re doing is aligned with what you want to be doing
  • be intentional, sometimes it’s not about work and linearity
  • focusing outside of work sometimes
  • paths to passion are different
DVF
  • you have a vision and you create a product, and the product takes over
  • “people say I made the wrap dress, but the wrap dress made me”
  • “every successful person feels like a loser at least twice a week. only losers don’t feel like losers.”
  • Only by owning vulnerabilities can they become your strengths
  • The only thing you have 100% is your character
    best relationship to cultivate and own is the one with yourself
  • “First two emails of my day don’t benefit me”
  • Connect +
  • Expand
  • Inspire
  • Talk about challenges and vulnerability and advocate
  • Imposters and sales people
  • If I doubt my power, I give power to my doubt
Jimena
  • Categorize goals
  • Timeline
  • Rate impact and difficulty as high, medium, or low
  • Leaps of faith + reasons to believe
  • Life roadmaps don’t follow linear development
  • Your True North is what matters
  • It’s ok to launch, fail, learn, and pivot


A Career of your own, alternative routes to the top
  • The Confidence Code
Modern Fertility
  • PM to founder is natural
  • Be the person people want to do favors for
  • Obsessing over early adopters
  • Long-term not short-term
    • so many ways to make money in the short-term it is crazy
    • Figure out the hard boundaries (e.g., not selling data or recommending procedures)


Growing Teams: Success Strategies that Scale (Cassie)
  • Reid Hoffman – masters of scale podcast
  • swing for the fences – level up ops and the team and invest; you need a strong platform for big and bold views
  • Growing a Team
    • High performing teams are table stakes
      • hire groups with a track record – acqui-hires and functional teams from your history
      • Leverage 3rd parties strategically to scale and keep the integrity of tech stack
      • Grow people across your org
    • Make Hiring Someone’s Job – someone outside of recruiting
      • As a leader, one of the most important responsibilities we have is hiring
      • Has to be a goal and therefore a priority. Teams should focus on it
      • Partner with recruiting, don’t just rely on them
      • Set as a quarterly goal for a product leader in org
      • Productize your hiring approach
    • Align team structure with goals
      • Telling people what to achieve (not how to achieve it) only works if they have the resources and tools to do their best work
        • Empowered teams can come in different shapes
        • Test with different org structures
      • Focus & Re-Focus
        • Start-ups are dynamic and new discoveries can lead to great opportunity
        • Shiny objects – either roadmap killers or start-up makers, don’t be too quick to judge
    • Operationalize Culture
      • Culture isn’t a checkbox and isn’t measured in perks. Authentic culture is now table stakes to hire and retain the best talent
      • Develop a common language and integrate
      • Encourage people to seek coaching
      • Lead with inclusion – diverse teams drive more revenue from innovation
      • Connect to care on individual team level
      • Foster team awareness and inclusive team practices
      • Develop products with inclusion
  • Launching Fast
  • Cultivating Culture
Child (Early Stage) -> Teen (rapid growth) 60% fail here -> Adult (repeatable business)
  • Tribal knowledge dominates in early stages, with people wearing many hats
  • What makes companies successful early on becomes an obstacle in a mature company
Teenage Stage Transition
  1. People to Process
    1. Most have a core team that has to be involved in everything; when that core team fails to scale, they become a bottleneck
    2. Culture becomes paramount to maintain. Start-ups have strong cultures because of their founders -> can it be preserved? A deliberate effort has to be made to communicate values
  2. From Relationships to Brand
    1. Scaling sales and marketing
  3. From Early Product to Scalable Product
    1. Early Stage
      1. Move fast and iterate (shortcuts and customer facing features and functionality)
      2. Features, features, and features
      3. Secure early adopters
    2. Cost of Customer-Specific functionality
      1. Marquee customers are critical, but they may take over your roadmap and cause you to miss the market. Pushing back here is critical but hard
    3. Reining in Customer Requests – communicate proactively
      1. Tell them what is coming and why – so customers realize you can’t do their entire wishlists. Be prepared to say no.
    4. Educate executive team on cost to roadmap if they try to accommodate customers
    5. Dealing with Technical Debt
      1. Partner with Engineering
      2. Prioritize it on the roadmap and communicate it as an investment
  4. Moving from Ownership to Partnership
    1. In early stage you touch everything
    2. You may want go to market partners, tech partners, etc.
    3. You have to make a decision to build or to partner
  4. From Opportunistic to Strategic
    1. Focus to Expansion – strategic expansion and chill out on side products
    2. A single product alone cannot take the company to scale


Emerging Tech at Scale
  • Alexa Natural Voice experience
  • Build a culture of experimenting with products and expanding individual ones – e.g., Alexa Guard
  • As your company grows, everything needs to scale, including the size of your failed experiments


Jennifer Tejada, CEO of PagerDuty
  • Run your product org like a P&L leader
  • The emotional energy it takes to be one person at work and another outside of it is enormous, and it eats you up when people are looking to you to lead
  • Product is pivotal – she became CEO without coming up through product
  • Get in front of what customers need and where they’re going; otherwise, what you’re building is what they needed two years ago
  • If you get it right for customers and employees, things will be right. Focusing just on the board will not yield that result
  • Create access (e.g., open offices in other places), don’t just try to build a pipeline
  • Find companies where there are people you resonate with
Designing Product Teams with Intention
  • Common purpose
  • Mutual accountability
  • Setting team and design norms helps reduce conflict
    • Set norms, don’t default to them.
  • Norms we wish to see
    • Assume good intentions
    • Asking to understand
      • Not jumping to solve
      • Not responding to solve
    • Integrity
    • Voices are heard
    • “Can you tell me what you heard?” is always a good clarifying question
  • Instead of saying no, ask why
  • Try with “we” first before you
  • Companies have cultures the way countries and groups do: some are more high-context vs. low-context, task-based vs. relationship-based, concept-first vs. application-first. Even teams have cultures


  • Book: The Fearless Organization
  • Our framing is not the truth, but one part of a subjective map
  • Cadence for Leading Teams:
    • Set, Check, Correct


  • Set informal roles – rotate them
    • Facilitator
    • Spokesperson
    • Office housekeepers
    • Tie-breaker
    • Schedule keeper
    • Operations
  • Roles: Goal, Reality, Opportunities, What Next?
  • Consent solves a lot of life’s problems; ask before giving advice or deciding what to solve
  • Good interview questions for evaluating:
    • Something you want to teach v something you want to learn?
    • What questions should I ask you?
  • Sometimes you have to fire the double downers
  • Take advantage of temporal landmarks
    • Quarterly resets
    • Make sure remote teams meet once a quarter
  • Story of Golden Apple & Things That Really Matter
    • Strategy is knowing what to say no to
    • When you are in a deadlock, use the hypothetical “Let’s pretend we already decided”
  • OKR
    • O: Dream – qualitative
    • KR: Success Criteria – Commit to outcomes, not tasks
    • “You want flexibility in the how, not the what”
  • 1 objective + 3 results
  • Commitments are for Monday
  • Overmeasuring can become a prison
  • Teams can’t live by numbers alone
  • Numbers are information, not a roadmap
  • Fridays are for wins
  • Everyone needs bragging time for motivation, progress, and belonging
  • Focus on learning, not blaming
  • Weekly check-ins, friday celebrations
  • Mission: Objective for five years
  • Objective: Mission for three months
  • Good KPIs (aka KPIs with a soul)
    • Brainstorm every OKR
    • Results with tasks
    • Solutions thinking versus Tasks thinking
    • Choose what to measure first
    • Some people are not motivated by numbers or even the company; keep this in mind
  • Evaluate KPIs
    • Can you get this data?
    • Set a baseline?
    • Do you trust it?
    • Is it meaningful?
    • Do you want to gather it?
  • How much?
    • Change tasks until you see motion
  • How to Radical Focus: Push v Protect
    • OKR Confidence
    • Org Health – GBY – can you call a code red
    • Do less, better
  • If people can’t prioritize, have a stack rank
  • Planning with OKRs
  • 4 Objectives a quarter, each with KRs; doing this reduces loss aversion
  • Pipeline > Roadmaps
  • Free list projects silently with post-it notes
  • Test Cards
    • We believe that ____ will result in ____ we know we will have succeeded when _______.
  • OKR status e-mails
    • Last week
    • This week
    • P1s
    • Notes
    • Why something is not done
  • Restate goals with wins
  • Timeline EoQ
    • Review OKRs and call them
    • Write out accomplishments
    • Reviews and Bonuses
  • Timeline BoQ
    • Gather company input
    • Execs set company goals
    • Teams set goals
    • Individuals set goals
  • Feedback
    • Team
      • Weekly (40-45 min?)
      • Quarterly
    • Individual
      • Quarterly convo
      • Smarter cycles
        • actionable
        • memorialize
        • iterative
  • Reflected best self
    • Carbon five dartboard
    • Spotify Health Check Model
  • Roll Your Own
    • List norms
    • Rate performance
    • Discuss
    • Decide on changes


Learning June-Dec 2019

Books Read

Crossing the Chasm

The Memo: What Women of Color Need to Know to Secure a Seat at the Table

 

Articles Read

The Empathy Delusion

  1. “People in the advertising and marketing industry and the modern mainstream have different ‘moral foundations’ and (unconscious) intuitions about what is right and wrong”
  2. “As a leading social psychologist, Haidt has been at the forefront of popularising the idea of WEIRD (Western, Educated, Industrialised, Rich and Democratic) morality and psychology. Haidt identifies five moral foundations and shows that, although WEIRD morality is dominant in political, cultural, media and professional elites in the United States, WEIRD people are actually statistical outliers whose moral foundations are unrepresentative of the general population.”
  3. The need is to combine empathy with efficiency in advertising

 

Why We shouldn’t trust our gut instinct

  1. Ad agency employees are “Anywheres,” with mobility and exposure to other cultures. This is a fundamental difference in values from the roughly 50% of the UK who are “Somewheres,” whose identities take firm local root.
  2. Tldr: Agency employees and MBA employees do not think like mainstream they sell to
  3. “There is no universal ‘one size fits all’ model of perception and reasoning that goes across cultural differences”

 

WTF IS OPERATIONS? #SERVERLESS

  1. “Operations is the sum of all of the skills, knowledge and values that your company has built up around the practice of shipping and maintaining quality systems and software.  It’s your implicit values as well as your explicit values, habits, tribal knowledge, reward systems.  Everybody from tech support to product people to CEO participates in your operational outcomes, even though some roles are obviously more specialized than others.”
  2. A critical path when considering trade-offs in going serverless is resiliency from a user’s perspective and preserving that. Figure out what your core differentiators are, and own those.
  3. You still need to understand your storage systems – there’s still a server underneath so many abstractions.

 

Orchestration vs. Choreography

  1. Orchestration is a central process that controls different web services and coordinates the execution of different operations on the Web services involved in the operation
  2. Choreography is when each web service knows exactly when to execute its operations and with whom to interact.
  3. Orchestration is the more flexible paradigm (a minimal sketch of both styles follows)
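A minimal Python sketch of the distinction, with made-up service names, just to make the two shapes concrete (an orchestrator that calls each service in order vs. services reacting to events on a shared bus):

# Hypothetical sketch; the service names and event names are made up.

# Orchestration: a central coordinator calls each service in sequence.
def place_order_orchestrated(order, inventory_svc, payment_svc, shipping_svc):
    inventory_svc.reserve(order)   # the orchestrator decides the order of operations
    payment_svc.charge(order)
    shipping_svc.schedule(order)

# Choreography: each service reacts to events on a shared bus; no central controller.
class EventBus:
    def __init__(self):
        self.handlers = {}
    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)
    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)

bus = EventBus()
bus.subscribe("order_placed", lambda o: bus.publish("inventory_reserved", o))
bus.subscribe("inventory_reserved", lambda o: bus.publish("payment_charged", o))
bus.subscribe("payment_charged", lambda o: print("shipping scheduled for", o))
bus.publish("order_placed", {"id": 42})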

 

Scheduling a Meeting the Right Way

  1. Know the hierarchy, ask what works best for them first or do the team approach first
  2. Put fences around time – don’t ask blindly for times that work
  3. Keep paper trail

 

Serverless Architectures

  1. Serverless architectures are application designs that make use of third-party Backend-as-a-Service offerings or run custom code in managed, ephemeral containers (Functions-as-a-Service). Serverless removes the need for a traditional always-on server component, which may reduce operational cost, complexity, and engineering hours, with the trade-off of relying more on vendors and possibly immature supporting services (a minimal FaaS-style handler is sketched after this list).
  2. Benefits:
    1. More flexible to change; adding features requires fewer changes in architecture
    2. Fundamentally, you can run back-end code without managing your own server systems or applications. The vendor handles resource provisioning and allocation
    3. FaaS does not require coding to a specific framework or library
    4. You can bring entire applications up and down in response to events
  3. Drawbacks:
    1. Requires better distributed monitoring capabilities. There are more moving pieces to manage, and many are run by external parties
    2. Vendor management becomes a much more important function in a serverless org
    3. Multitenancy problems creep in
    4. Security concerns and configuration become much more important
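A minimal FaaS-style handler sketch, assuming the AWS Lambda handler(event, context) convention; the event shape and response format here are hypothetical:

import json

# Minimal FaaS-style handler sketch; the event/body shape is a made-up example.
def handler(event, context):
    body = json.loads(event.get("body", "{}"))
    user_id = body.get("user_id")
    if user_id is None:
        return {"statusCode": 400, "body": json.dumps({"error": "user_id required"})}
    # A real function would call a managed service (e.g. a BaaS datastore) here;
    # the point is that no always-on server of ours is involved.
    return {"statusCode": 200, "body": json.dumps({"user_id": user_id, "status": "ok"})}

# Local smoke test
if __name__ == "__main__":
    print(handler({"body": json.dumps({"user_id": "abc"})}, None))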

 

Managing communications effectively and efficiently

  1. Check-in with stakeholders about understanding project
    1. What is working in how we communicate with you about the project?
    2. What is not working or is not effective in our communications?
    3. Where can we improve our communications with you?
  2. How to figure out fine balance between too much and too little communications
    1. Who needs to know what information?
    2. How often must that information be communicated/shared?
    3. By what means will information be communicated/shared?
  3. Figure out stakeholders and their responsibilities on each project to tailor communications

 

 

All the best engineering advice I stole from non-technical people

  1. Most things get broken around the seams. Use the 100:10:1 approach
    1. Brainstorm 100 things that could go wrong
    2. Pick 10 on that list that feel the most likely and investigate them
    3. Find the critical problem you’re going to focus on.
  2. Understand why they hired you – always ask yourself, “What am I being asked to be the expert on here?” (In my case: translating, ruthlessly prioritizing, and trading off features so that a finished product that makes money gets into end users’ hands in a form they can use for the org)
  3. Figure out a form of observability that works for your team, not just facetime, and don’t get into a cycle of over-optimization and trust degradation. “Replacing trust with process is called bureaucracy.”

 

Why Companies Do “Innovation Theater” Instead of Actual Innovation

  1. “People who manage processes are not the same people as those who create product. Product people are often messy, hate paperwork, and prefer to spend their time creating stuff rather than documenting it. Over time as organizations grow, they become risk averse. The process people dominate management, and the product people end up reporting to them.”
  2. “In sum, large organizations lack shared beliefs, validated principles, tactics, techniques, procedures, organization, budget, etc. to explain how and where innovation will be applied and its relationship to the rapid delivery of new product.”
  3. “Process is great when you live in a world where both the problem and solution are known. Process helps ensure that you can deliver solutions that scale without breaking other parts of the organization… These processes reduce risk to an overall organization, but each layer of process reduces the ability to be agile and lean and — most importantly — responsive to new opportunities and threats.”

 

How Does The Machine Learning Library TensorFlow Work?

  1. Tensorflow allows for deep neural network models
  2. Tensorflow lets you display your computation as a data flow graph and visualize it using the in-built tensorboard
  3. You build a graph by defining constants, variables, and operations, and then executing it. Nodes represent operations and edges are the carriers of data structures (tensors), where the output of one operation (from one node) becomes the input for another operation (a tiny sketch follows).
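A tiny sketch of the constants-and-operations idea; TensorFlow 2.x evaluates this eagerly, while in the older 1.x graph style you would run it inside a session and inspect the named nodes in TensorBoard:

import tensorflow as tf

# Nodes are operations; edges carry tensors from one operation to the next.
a = tf.constant(2.0, name="a")
b = tf.constant(3.0, name="b")
total = tf.add(a, b, name="total")            # consumes the outputs of "a" and "b"
squared = tf.multiply(total, total, name="squared")
print(squared.numpy())                        # 25.0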

 

Whatever Happened To The Denominator? Why We Need To Normalize Social Media

  1. The denominator, especially in social media and the data science around it, is almost nonexistent, so there’s no sense of how big a dataset is or whether it’s growing or shrinking. We can’t normalize it and therefore can’t really understand it.
  2. Analyzing Twitter data is a frequent example of this misuse, e.g. interpreting retweets as a behavioral proxy for engagement with breaking news, which is flawed when many retweets are just forwards
  3. “We tout the accuracy and precision of our algorithms without acknowledging that all of that accuracy is for naught when we can’t distinguish what is real and what is an artifact of our opaque source data.”

 

Stop the Meeting Madness

  1. Prework, Clearly defined goals, and meeting time managed against agenda
  2. Debrief once in awhile about meetings
  3. Institute no tech meetings when needed

 

How to choose the right UX metrics for your product

  1. Quality of user experience: Good old HEART framework
    1. Happiness: measures of user attitudes, often collected via survey. For example: satisfaction, perceived ease of use, and net-promoter score.
    2. Engagement: level of user involvement, typically measured via behavioral proxies such as frequency, intensity, or depth of interaction over some time period. Examples might include the number of visits per user per week or the number of photos uploaded per user per day.
    3. Adoption: new users of a product or feature. For example: the number of accounts created in the last seven days or the percentage of Gmail users who use labels.
    4. Retention: the rate at which existing users are returning. For example: how many of the active users from a given time period are still present in some later time period? You may be more interested in failure to retain, commonly known as “churn.”
    5. Task success: this includes traditional behavioral metrics of user experience, such as efficiency (e.g. time to complete a task), effectiveness (e.g. percent of tasks completed), and error rate. This category is most applicable to areas of your product that are very task-focused, such as search or an upload flow.
  2. Don’t necessarily need a metric in every one of HEART category, but it’s a useful framework to apply to your particular product
  3. Goals-Signals-Metrics – match it to your user experience and build a grid against the HEART framework (a hypothetical example grid is sketched below)
    1. Goals
    2. Signals
    3. Metrics
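As a hypothetical illustration, here is what a small Goals-Signals-Metrics grid might look like for a made-up photo-upload feature, expressed as plain Python data:

# Hypothetical Goals -> Signals -> Metrics grid for a photo-upload feature.
heart_grid = {
    "Engagement": {
        "goal":    "Users share photos regularly",
        "signals": ["photo uploads", "return visits to the upload flow"],
        "metrics": ["uploads per user per week", "% of weekly actives who upload"],
    },
    "Task success": {
        "goal":    "Uploading a photo is fast and error-free",
        "signals": ["completed uploads", "upload errors"],
        "metrics": ["median time to upload", "upload error rate"],
    },
}

for category, row in heart_grid.items():
    print(category, "->", ", ".join(row["metrics"]))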

 

The secrets to running project status meetings that work!

  1. Agenda need to be defined, team members need to be prepared, time management, handling input and topic control
  2. Focus on both the what (the content) and how (the meeting is run)
  3. Look back and forward on the meeting in a two week or appropriate interval to make sure it’s not a rehash and things are moving forward

 

The First Thing Great Decision Makers Do

  1. Commit to your default decision upfront as a habit, meaning frame the context before you seek data. You need a decision-criteria set that is informed by background knowledge and a hypothesis.
    1. A simple example is the maximum price you’re willing to pay, set before you see the price
    2. acknowledging sunk costs upfront
  2. The pitfall otherwise is “data-inspired decision making” that can be riddled with confirmation bias or misinterpretations
  3. A question to ask: if there is no data, what will my decision be and what is that based on? If there is data, what magnitude of evidence would sway me from my default decision?

 

The ultimate guide to remote meetings 2019

  1. Build a virtual water cooler to encourage rapport and build relationships, eg. Slack channels. Make some time for small talk in beginning
  2. Agenda setting
    1. Key talking points
    2. Meeting structure (for example, when and for how long you plan to discuss each talking point)
    3. Team members/teams that will be in attendance
    4. What each team member/team is responsible for bringing to the meeting
    5. Any relevant documents, files, or research
    6. Actions for next meeting
      1. Deliverables and next steps
      2. Who’s responsible for following up on each item or task
      3. When those deliverables are due
      4. When the next meeting or check-in will be
  3. Make sure everyone has a job and include the introverts

 

What SaaS Product Managers Need to Know About Customer Onboarding in 2019

  1. Every successful user journey consists of four main parts:
    1. Converting your website visitors
    2. Unleashing the “aha moment”.
    3. User activation – when user feels value
    4. Customer adoption – contact with secondary features and start using product
  2. Personalization is key, along with using the right tools to streamline the process and strong customer support/portals/knowledge bases
  3. Take advantage of the Zeigarnik effect – people tend to remember uncompleted tasks, e.g. checklists and breadcrumbs of remaining tasks

 

How to ask for a raise

  1. Key question: “What do I need to do to make a bigger difference to the company?”
  2. “If your manager visibly doesn’t believe in your capacity to get to the next level regardless of what you do, find a new manager; your career is at a dead end where you are.”
  3. The only piece of leverage that really matters is a counteroffer

 

Hooking

  1. A range of techniques for altering or augmenting the behavior of an OS, apps, or other software components by intercepting function calls, messages, or events passed between components. Code that handles such intercepted function calls, events, or messages is called a hook.
  2. Methods:
    1. Source modification – modifying source of executable or library before app is running
      1. You can also use a wrapper library and make your own version of a library that an application loads
    2. Runtime modification: inserting hooks at runtime, e.g. modifying system events or app events for dialogs (a minimal sketch follows)
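A minimal runtime-modification sketch in Python: wrapping an existing function so every call is intercepted. The function names here are made up; this only illustrates the hook idea, not any particular hooking framework:

import time

# Runtime hook sketch: wrap an existing function so every call is intercepted.
def with_timing_hook(func):
    def hooked(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)           # call the original
        print(f"{func.__name__} took {time.time() - start:.4f}s")
        return result
    return hooked

def fetch_report(name):                          # stand-in for a function we don't own
    time.sleep(0.1)
    return f"report: {name}"

fetch_report = with_timing_hook(fetch_report)    # install the hook at runtime
print(fetch_report("monthly"))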

 

Webhook vs API: What’s the Difference?

  1. “Both webhooks and APIs facilitate syncing and relaying data between two applications. However, both have different means of doing so, and thus serve slightly different purposes.”
  2. Webhook: Doesn’t need a request; data is sent whenever new data is available
    1. Usually performs smaller tasks, eg. new blog posts for CMS
  3. API: Only does stuff when you ask it to.
    1. Tends to be entire frameworks, e.g. the Google Maps API that powers other apps (a minimal sketch of both follows)
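A rough sketch of the difference, assuming Flask for the webhook receiver and requests for the API polling call; the URLs and payload fields are placeholders:

import requests
from flask import Flask, request

app = Flask(__name__)

# Webhook: the provider pushes data to us whenever something new happens.
@app.route("/webhooks/new-post", methods=["POST"])
def new_post_webhook():
    payload = request.get_json()                 # data arrives without us asking
    print("new post published:", payload.get("title"))
    return "", 204

# API: nothing happens until we ask.
def poll_posts():
    resp = requests.get("https://example.com/api/posts")   # placeholder URL
    return resp.json()

if __name__ == "__main__":
    app.run(port=5000)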

 

 

Treehouse

 

SQL Functions

  • The concatenation operator || joins two pieces of text
  • Single quotes should be used for String literals (e.g. ‘lbs’), and double quotes should be used for identifiers like column aliases (e.g. “Max Weight”)
  • Example
    • SELECT first_name || ' ' || last_name || ' ' || '<' || email || '>' AS to_field FROM patrons;
  • Examples of using functions:
    • -- SELECT LENGTH(<column>) AS <alias> FROM <table>;
      • SELECT username, LENGTH(username) AS length FROM customers;
    • -- LOWER(<value or column>)
      • SELECT LOWER(title) AS lowercase_title, UPPER(author) AS uppercase_author FROM books;
    • -- SUBSTR(<value or column>, <start>, <length>)
      • SELECT name, SUBSTR(description, 1, 35) || '...' AS short_description, price FROM products;
    • -- REPLACE(state, <target>, <replacement>)
      • SELECT street, city, REPLACE(state, 'California', 'CA'), zip FROM addresses WHERE REPLACE(state, 'California', 'CA') = 'CA';

SQL Counting Results

  • -- COUNT(<column>)
    • SELECT COUNT(DISTINCT category) FROM products;
  • -- SELECT <column> FROM <table> GROUP BY <column>;
    • SELECT category, COUNT(category) AS product_count FROM products GROUP BY category;
  • -- SUM(<column>)
    • -- the HAVING keyword works on aggregates after GROUP BY and before ORDER BY
    • SELECT SUM(cost) AS total_spend, user_id FROM orders GROUP BY user_id ORDER BY total_spend DESC;
    • SELECT SUM(cost) AS total_spend, user_id FROM orders GROUP BY user_id HAVING total_spend > 250 ORDER BY total_spend DESC;
  • -- AVG(<column>)
    • SELECT user_id, AVG(cost) AS average_orders FROM orders GROUP BY user_id;
  • -- MAX(<numeric column>) MIN(<numeric column>)
    • SELECT AVG(cost) AS average, MAX(cost) AS maximum, MIN(cost) AS minimum, user_id FROM orders GROUP BY user_id;
    • SELECT name, ROUND(price*1.06,2) AS "Price in Florida" FROM products;
  • -- DATE('now') in SQLite
    • SELECT * FROM orders WHERE status = 'placed' AND ordered_on = DATE('now');

NYC ProductCon 2019 Round-Up Notes

 

My personal notes from Product School’s NYC ProductCon. Overall some really good speakers, but not very organized. If I hadn’t been able to get a free ticket, I would have been displeased. Product School owes it to both the participants and speakers to be more polished, also what conference in 2019 doesn’t have Wifi and insanely oversells like this?

 

Nir Eyal’s Talk: Indistractable: How to Control Your Attention and Choose Your Life

  • Escaping distraction is a superpower
  • Psychological escape from discomfort – why people go to apps
  • “Time management is pain management”
  • Indistractable is his latest book
  • A way to manage is to note the sensation you feel and write it down
    • That way you avoid fake time-wasting, e.g. checking e-mail on your phone but not actually doing anything with it
  • Feel curious instead of contempt about your uncomfortable feelings
  • Surf the urge – urges crest and subside. Use the ten-minute rule: wait ten minutes before giving in, and either sit with the sensation or get curious and get into the task at hand
  • Know your Intent for work: You can’t call something a distraction unless you know what it distracted you from
  • Schedule your days with timeboxes or someone else will
  • Turn values into time and make time, not just to do lists
  • Get out of low value work, use tech or delegate
  • Less time communicating and more time concentrating
  • Drug round vest as example of an innovation to hack distraction for nurses and avoid medication errors
  • When you see alerts and such, ask yourself, “Is the trigger serving me, or am I serving it?”
  • Cleanup desktop, change notification settings, put sign on your desk, etc
  • Forest app and Self control app and other digital tools can help
  • Self-compassion is key above all else – how would you talk to a friend?

Data Analytics for Better Product Decision Making

  • The Mixpanel talk was a sales deck and a missed opportunity
  • “Intuition is key. As PMs we never have the full info, we have to make a judgement call based on data we’re getting.”
  • Key summary points
    • Collect accurate data
    • Identify trends
    • Understand the why
    • Set goals and create hypothesis
    • Engage

Morgan Brown: Product Brief – The Primary Artefact

  • What’s a Product Brief:
    • PRD
    • Spec
    • Product Proposal
  • Product briefs are among the most inconsistent experiences you’ll have as a PM
  • “An artefact actors can use to identify a clear business goal, the actors involved in achieving it, and the deliverables to achieve that goal.”
  • Challenges
    • Who is it for?
    • Format – slides, confluence, docs, etc.
    • When?
  • Impact Mapping Framework is a recommended book
  • First step
    • Company goals – what are you meant to impact
    • Spotify examples: MAU growth, sub rev, creator livelihood
  • Actors to consider – people who influence goal
    • Teams
    • Entities
      • Artists
      • Local Govts
    • Departments
  • Impact assessment of actors
    • Example: Artist control of catalog – ask yourself what can they do?
  • Prioritization Framework for Stakeholders
    • Reach
    • Impact
    • Cost
    • Effort
    • Social Responsibility
    • Eco Sustainability
  • OKR planning is helped with this framework and socializing your stakeholders
  • Impact mapping workshop first month of quarter
  • Monthly OKR checking
  • Need an agenda on calendar for concrete feedback loop

 


Jason Nichols: Product Management and AI art

  • The key is asking the kind of question you want answered

 


  • Loss function key to AI problems
    • Stick to a specific variable per model
  • Chains of models propagate errors, and those errors compound exponentially
  • Model output will be subject to business rules; these need to be codified very stringently in post-processing, e.g. if it’s alcohol, don’t sell to a minor
  • Supervised learning needs good labelled data sets
  • Supervised v unsupervised depending on the problem
  • How will your machine learn?
    • Do you have downtime to learn? How do you handle retraining when the system runs 24 hours a day? You can’t do this in the background from your prod stream if you don’t factor it in
    • How do you build CI/CD and KPIs that block release to prod
  • How are you measuring KPIs, and what are you trying to solve, before building the model?
  • There is no such thing as ground truth
  • Precision, Accuracy, Recall – these are misused all the time. Accuracy is the most misused, and people usually mean Recall (see the sketch after this list)
    • WalMart running out of chickens problem
  • Recall typically has business cost -> people leave the store if there are no rotisserie chickens
  • If you’re doing anomaly correction and you don’t know the incidence rate, you can’t build a test environment. You need user research and base sampling
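A quick sketch of the three measures from confusion-matrix counts, using made-up numbers for a rare “out of stock” class, to show why accuracy can look great while recall (the number that carries the business cost) is poor:

# Precision, recall, and accuracy from confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)            # of the items we flagged, how many were right?
    recall    = tp / (tp + fn)            # of the real positives, how many did we catch?
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical "out-of-stock" detector: rare positives make accuracy look great
# even when recall is poor.
print(metrics(tp=5, fp=2, fn=20, tn=973))   # precision ~0.71, recall 0.20, accuracy ~0.98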

Nate Franklin: Powering the Next Generation of Products

  • ETE experiences – Nike as flagship
  • Industries need to optimize for LTV
    • Peloton has 96% retention rate
    • Idea and experience so people come back over and over again
  • Marketing will focus on growth mindset and not only good brand awareness but entire customer experience
  • 3 challenges
    • Key challenge is integrating entire data ecosystem and have it be high quality, no point in just hoarding arbitrary data
    • Systematic growth and experimentation – not just optimizing button color for funnels – find the best ideas that create great experiences
    • “Most ideas fail to show value”
    • Contextual awareness is lacking in systems, e.g. a Facebook friendship celebration featuring a house burning down, LinkedIn suggesting your last job. Good: Metromile giving street-sweeping alerts


Andrea Chesleigh: Lessons in Product Leadership

  • Know who you are, what you’ll be flexible about, what you’ll cave into
  • Know enough details to add value but don’t micromanage

Build v Buy Panel

  • Build vs. buy is based on culture – e.g. Zynga: if it touches my core product, we build it
  • Buy mentality: Any product is an iceberg
  • Vendors: ancillary needs like fraud protection and chatbots
  • Validate a hypothesis for a feature by using a vendor who has the use case for an initial beta, then scale or build your own experience using the same concept
  • Big question to ask is does building give competitive advantage and is that your core competency versus time to market
  • A vendor always has upside and downside, e.g. is it going to break my app, is the start-up reliable?
  • The idea of keyword- vs. non-keyword-led companies/models: something broad and searchable versus something so specific and obscure that it’s difficult to have a framework to evaluate it – that’s something you need to evaluate with the vendor
  • Does a feature request become a service business for a specific customer or does it become a part of product that adds more value?
  • HackerOne program at Priceline: get external contractors to try to break the site and then form a backlog out of it

Vivek Bedi: Disrupting 160-Year-Old Company

  • Relationship-based business (visiting ranches in Texas) -> Digital experience with low bounce rate from 3% of users three years ago
  • Small start-up + core values of 160 year old company
  • Pizza pie teams (two-pizza rule)
  • Small teams that match the company’s operating model
  • Everything goes to changing the way people work to transform companies
  • Clear roles and responsibilities and bridges across teams
  • Structuring teams in a way a customer would understand
  • Research-Obsessed Mindset – Journey-Based Org – Pizza pod team
  • Deep ethnographic research, shadow sessions
  • 50-65 is the age of users adopting mobile which upends conventional thinking
  • Understanding generational differences and how they all matter
  • Every company has two types of competitors
    • Incumbents
    • Disrupters
  • How can you show, not tell, stakeholders – be able to virtually walk them through a customer experience
  • Third culture of combining old company values with start-up culture


 Abigail Hart Gray: A User Guide to Product Design

  • It started with the clear Apple iMacs
  • Design
    • UX/Experience Designer/Info Architect
    • Visual Design: branded experience – make the experience distinctive vs. websites that all look the same except for the logo
    • Content:
    • Research
  • Design Maturity Report from InVision
    • 41% of companies are at the bottom of design maturity – “pretty it up”
    • 21% of companies are second from the bottom – “Design as facilitators” – they do participatory design exercises, etc., but they don’t or can’t push back
    • 62% are at the bottom 2-box
  • What does great look like?
    • Why should you care? Revenue chart
  • When you begin to measure things you can design towards that and you can tell good stories
  • Being design driven pays (across six dimensions as a result of HBR study)
  • Analytics really is What people are doing. The Why is different and qualitative
    • Get to the Will – surveys and concept testing at scale to predict what people will do
  • Start with something small to add value so you don’t threaten the status quo immediately – create value where there wasn’t much

 

March-June 2019 Learning

Articles Read 

Engineering Management Philosophies and Why They Matter Even if You are Not a Manager

  1. Internal Team Success, External Team Collaboration, Company-wide Responsibilities & Culture, and Strategic Direction and Impact are key buckets of focus
  2. Do not take things at face value and learn to over-communicate
  3. Being an effective leader is helping the team make decisions rather than making decisions for them

How to choose the right UX metrics for your product

  1. Two-prong approach of the quality of user experience (using HEART framework) and goals of product/project (using Goals-Signals-Metrics)
  2. HEART Framework
    1. Happiness
    2. Engagement
    3. Adoption
    4. Retention
    5. Task Success
  3. Goals-Signals-Metrics layered on top

What is Data Egress? Managing Data Egress to Prevent Sensitive Data Loss

  1. Data egress refers to data leaving a network in transit to an external location. Examples include outbound e-mail messages, cloud uploads, files moved to external storage, copying to a USB drive, FTP/HTTP transfers, etc. Data ingress refers to data from outside the network traveling into the network.
  2. Egress filtering involves monitoring egress traffic to monitor for signs of malicious activity.
  3. Data exfiltration refers to techniques that can result in the loss, theft, or exposure of sensitive data, e.g. stealing USB drives, encrypting or modifying data prior to exfiltration, or using services to mask location or traffic.

WTF is deal ID?

  1. A deal identifier is the unique number of an automated ad buy, used to match buyers and sellers individually. It implies a previously agreed-upon set of parameters – narrower criteria applied to programmatic and private marketplaces.
  2. Deal ID allows publishers to specify terms and kind of inventory available to different types of advertisers.
  3. Deal ID can be thought of as an automated insertion order, offering better flexibility while still controlling the parameters of an ad deal.

Responses to Negative Data: Four Senior Leadership Archetypes.

  1. Most senior leaders in org came up when data wasn’t so accurate and available
  2. You have bubble kings who ignore the data and Attackers on the other end
    1. Deal with Bubbles: form relationships and justify decisions.
    2. Deal with Attackers: get out or provide solutions and not just data
  3. Rationalizers who sow doubt and the Curious who ask why
    1. Deal with Rationalizers: need to bring overwhelming analytical competence
    2. Deal with Curious: be joyous and work hard

The Engineering Manager: Working with Product Marketing

  1. Great product marketing makes the code you’re writing something people need to have
    1. “A world-class engineer, designer, product manager and product marketer can really change the world.”
  2. Teach them your features and work with your team and let them practice their narrative to your team
  3. Build in feature toggling – batching sets of features as a campaign. Can do targeted or percentage-based rollouts for select feedback as well.

Building Customer Churn Models for Business

  1. “In its simplest form, churn rate is calculated by dividing the number of customer cancellations within a time period by the number of active customers at the start of that period. Very valuable insights can be gathered from this simple analysis — for example, the overall churn rate can provide a benchmark against which to measure the impact of a model. And knowing how churn rate varies by time of the week or month, product line, or customer cohort can help inform simple customer segments for targeting as well.”
  2. Churn can be characterized as
    1. Contractual
      1. Customers buy at intervals or churn is otherwise observable, e.g. subscriptions
    2. Non Contractual
      1. Customers are free to buy or not at any time; churn is not explicit, e.g. e-commerce
    3. Voluntary
      1. Customers choose to leave the service
    4. Involuntary
      1. Customers are forced to discontinue, e.g. failed payments
  3. Good churn models should factor in things like different risk scores, predicting probabilities of churn for different use cases, and metrics that stakeholders will understand and respond to (a simple churn-rate sketch follows)
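A minimal sketch of the simple churn calculation from the article, broken out by month to hint at seasonality; the numbers are made up:

# Simple churn rate: customers lost in a period / active customers at its start.
def churn_rate(active_at_start, cancellations):
    return cancellations / active_at_start

# Per-month view, e.g. to spot seasonality (numbers are illustrative).
months = {"Jan": (1000, 50), "Feb": (980, 70), "Mar": (960, 40)}
for month, (active, lost) in months.items():
    print(month, f"{churn_rate(active, lost):.1%}")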

Getting started with AI? Start here!

  1. Write down the labels you’ll accept, how you’d know if the answer is right for one of them, and what mistakes might look like. It will save you trouble downstream and put you in the right paradigm.
  2. Remember: the goal of analytics is to generate inspiration/inform the decision maker.
  3. ML/AI is for projects where the goal is to use data to automate thing-labeling.
    1. Data mining is about maximizing the speed of discovery while ML/AI is about performance in automation.

Once You’re in the Cloud, How Expensive Is It to Get Out?

  1. Negotiate a good egress rate or account for it just in case
  2. Ingress of course is usually free

How to Build an Amazing Relationship Between Product Management and Marketing

  1. Figure out how to align Product’s metrics/goals with Marketing
    1. User feedback v lead gen
  2. Start early on products/planning across divisions
  3. Have transparent strategic goals and defined roles off the bat

How Product Marketers Want to Work With Product Managers

  1. Show Product Marketers the full plan from the start
  2. Work with the Product Marketer to connect your product, or part of it, to the full system experience
  3. Share data and customer stories, as the perspective from either side is different but important for collaboration and positioning

Second-Order Thinking: What Smart People Use to Outperform

  1. Don’t seize the first available option, no matter how good it seems, before you’ve asked questions and explored. It’s asking and then what?
  2. Think about how others in ecosystem will respond to business decisions, your suppliers, regulators, etc.
  3. Think in terms of ten minutes, ten weeks, ten months, ten years, etc.

Why a High Performing Product Marketing Team Is the Key to Growth

  1. Product marketers should champion the voice of the customer, more akin to a sociologist or psychologist than a product manager or a technologist
  2. Product marketers can perform win/loss analysis, customer profiles, segmentation, buyer personas, etc.
  3. Knows how to bundle features, understands market and category insights vis-à-vis the competitive environment

Why Modeling Churn is Difficult

  1. Churn = CustomersLostDuringPeriod/CustomersAtBeginningOfPeriod
  2. The difficulty is getting at the true rate of churn while accounting for differences from period to period, seasonality, etc.
  3. A stochastic model is one way to approach this problem as it allows for random variation of the inputs on a time basis.

Content Targeting Driving Brand Growth Without Collecting User Data

  1. There are still avenues to create meaningful content-targeting strategies using channels and demographics, without relying on user data
  2. Integrated content targeting still improves overall media experiences
  3. This also increases trust and the length of time engaged

LetsLearnAI: What Is Feature Engineering for Machine Learning?

  1. “Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. If feature engineering is done correctly, it increases the predictive power of machine learning algorithms by creating features from raw data that help facilitate the machine learning process. Feature Engineering is an art.”
  2. Combining two columns like lat and long into one feature is known as a crossed column and can help the model learn better.
  3. Bucketized columns are sometimes useful, e.g. pooling age ranges (25-35, etc.). Both are sketched below.
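A small pandas sketch of the two transformations, with hypothetical columns; bucketizing pools a numeric column into ranges, and a crossed column combines two coarsened columns into one categorical feature:

import pandas as pd

df = pd.DataFrame({
    "age":  [23, 31, 47, 66],
    "lat":  [40.7, 40.7, 34.0, 34.0],
    "long": [-74.0, -73.9, -118.2, -118.3],
})

# Bucketized column: pool ages into ranges so the model can learn per-bucket effects.
df["age_bucket"] = pd.cut(df["age"], bins=[0, 25, 35, 55, 120],
                          labels=["<25", "25-35", "35-55", "55+"])

# Crossed column: combine coarsened lat and long into one categorical feature,
# so location is learned jointly rather than as two independent signals.
df["lat_long_cross"] = df["lat"].round(0).astype(str) + "_" + df["long"].round(0).astype(str)
print(df[["age_bucket", "lat_long_cross"]])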

What Is Regularization In Machine Learning?

  1. Regularization is used to solve the problem of overfitting in machine learning models – that is, when a model learns so much from the noise in the training data that it negatively impacts its performance on new data.
  2. There are two types of Regularization
    1. L1: Lasso regularization: Adds a penalty to the error function. The penalty is the sum of the absolute values of weights.
    2. L2: Ridge regularization: Adds penalty using the sum of squared values of the weights
  3. Generally, good models do not give outsized weight to a particular feature – regularization keeps the weights evenly distributed to solve for overfitting (a small sketch of both penalties follows).
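A small sketch of how the two penalties attach to a base loss; the weights, lambda, and base loss here are made-up numbers:

import numpy as np

def regularized_loss(base_loss, weights, lam, kind="l2"):
    # L1 (lasso): penalty is the sum of absolute weights.
    # L2 (ridge): penalty is the sum of squared weights.
    w = np.asarray(weights)
    penalty = np.sum(np.abs(w)) if kind == "l1" else np.sum(w ** 2)
    return base_loss + lam * penalty

print(regularized_loss(0.40, [0.5, -1.2, 3.0], lam=0.01, kind="l1"))
print(regularized_loss(0.40, [0.5, -1.2, 3.0], lam=0.01, kind="l2"))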

What Are L1 and L2 Loss Functions? L1 vs L2 Loss Function

  1. The L1 Loss Function minimizes the error in ML models by using the sum of all absolute differences between the true and predicted values. Called LAD or Least Absolute Deviations.
  2. L2 Loss Function Least Square Errors or LS minimizes the error using the sum of all squared differences between predicted and true value
  3. L2 is generally preferred but does not work well if the data set has outliers, because the squared differences lead to a much larger error (see the sketch below).
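A quick numpy sketch of the two loss functions; note how the outlier in the last data point inflates the L2 loss far more than the L1 loss:

import numpy as np

def l1_loss(y_true, y_pred):            # LAD: sum of absolute differences
    return np.sum(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def l2_loss(y_true, y_pred):            # LS: sum of squared differences
    return np.sum((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

y_true = [3.0, 5.0, 2.5, 100.0]          # last point is an outlier
y_pred = [2.8, 5.2, 2.4, 10.0]
print(l1_loss(y_true, y_pred), l2_loss(y_true, y_pred))   # L2 blows up on the outlier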

 

Learning to explain Gains/Lifts better as an outcome of models from Machine Learning

Cumulative Gains and Lift Charts

  1. A Cumulative Gains chart shows the percentage of the overall number of cases in a given category “gained” by targeting a percentage of the total number of cases.
    1. For each point on the curve, the x-axis is the percentage of total cases targeted and the y-axis is the percentage of the category “gained”
    2. The diagonal line is the baseline, e.g. if you select 20% of cases from the scored dataset at random, you would expect to gain 20% of the cases in the category
    3. What counts as a desirable gain depends on the cost of errors, e.g. Type I and Type II errors, as you move up the curve
  2. Lift Chart is derived from cumulative gains chart
    1. Values on the y-axis correspond to the ratio of the cumulative gain for the curve to the baseline
    2. It’s another way of looking at the gains chart (a small computation sketch follows)
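A small numpy sketch that computes cumulative gain and lift by decile from model scores and true labels; the scores and labels here are synthetic:

import numpy as np

def gains_and_lift(scores, labels, deciles=10):
    scores, labels = np.asarray(scores), np.asarray(labels)
    order = np.argsort(-scores)                       # highest scores first
    labels = labels[order]
    total_positives = labels.sum()
    results = []
    for d in range(1, deciles + 1):
        cutoff = int(len(labels) * d / deciles)       # target top d/10 of cases
        gained = labels[:cutoff].sum() / total_positives   # % of category captured
        baseline = d / deciles                        # random targeting captures d/10
        results.append((d * 100 // deciles, gained, gained / baseline))
    return results

rng = np.random.default_rng(0)
labels = (rng.random(1000) < 0.1).astype(int)          # ~10% positive class
scores = labels * 0.5 + rng.random(1000) * 0.5         # scores loosely track the label
for pct, gain, lift in gains_and_lift(scores, labels):
    print(f"top {pct:>3}%: gain {gain:.0%}, lift {lift:.2f}x")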

 

Treehouse

Introduction to Algorithms

  • O(1): Constant: takes constant time regardless of n, doesn’t change. Ideal because input time doesn’t matter
  • O(log n): Logarithmic (sometimes called sublinear) runtime; as n grows large, the number of operations grows slowly and flattens out
  • O(n) Linear Time: e.g. reading every item in a list
  • O(n^2) Quadratic Time: for any given value of n, we carry out n^2 operations
  • Cubic Runtimes: n^3 operations
  • Quasilinear Runtimes: O(n log n)
    • For every value of n, we are going to execute log n operations: n times log n
    • Lies between a linear runtime and quadratic runtime
    • Sorting algorithm is where you see this
    • Merge sort is an example that runs in quasilinear time
  • Polynomial runtime O(n^k): for a given value of n, the runtime is n raised to the power k
    • Anything bounded by this is considered to have a polynomial runtime or be efficient
  • Exponential runtime O(x^n): these algorithms are too expensive to be used, e.g. brute-force algorithms, analogous to manually testing each combination on a lock to break it – a three-digit lock has 1,000 possible values and a four-digit lock has 10,000
    • Traveling Salesman analogy (e.g. multiple routes) or factorial O(n!)
    • Knowing off the bat that a problem is somewhat unsolvable in a realistic time means you can focus your efforts on other aspects of the problem.
  • Worst Case Complexity
    • When evaluating the run time for an algorithm, we say that the algorithm has, as its upper bound, the same run time as its least efficient step.
    • The run time of the algorithm in the worst case is O(log n), or big O of log n – logarithmic.

linear_search.py

def linear_search(list, target):
    """
    Returns the index position of the target if found, else returns None.
    """
    for i in range(0, len(list)):
        if list[i] == target:
            return i
    return None


def verify(index):
    if index is not None:
        print("Target found at index: ", index)
    else:
        print("Target not found in list")


numbers = [1, 2, 3, 4, 5, 6, 7, 8]  # example list to search
result = linear_search(numbers, 12)
verify(result)


“Target not found in list”

 

In the worst-case scenario, the loop has to go through the entire range of values and read every element in the list. This gives a Big O value of O(n), i.e. the search runs in linear time.

Binary Search

def binary_search(list, target):

  first = 0
  last = len(list) - 1

  while first <= last:
    midpoint = (first + last)//2

    if list[midpoint] == target:
      return midpoint
    elif list[midpoint] < target:
      first = midpoint + 1  # target is after the midpoint
    else:
      last = midpoint - 1   # target is before the midpoint

  return None

recusive_binary_search.py

def recursive_binary_search(list, target):
    if len(list) == 0:
      return False
    else:
      midpoint = (len(list))//2

      if list[midpoint] == target:
        return True
      else:
        if list[midpoint] < target:
          return recursive_binary_search(list[midpoint+1:], target)#new list using slice operation
        else:
          return recursive_binary_search(list[:midpoint], target)

def verify(result):
  print("Target found: ", result)

numbers = [1, 2, 3, 4, 5, 6, 7, 8]
result = recursive_binary_search(numbers, 12)
verify(result)

result = recursive_binary_search(numbers,6)
verify(result)

 

Recursive Functions

  • A recursive function is one that calls itself
  • When writing a recursive function, you always need a stopping condition, often called the base case
    • Eg the empty list in example of above or finding the midpoint
  • The number of times a recursive function calls itself is called Recursive Depth
  • An iterative solution is generally implemented using a loop of some kind, whereas a recursive solution involves a set of stopping conditions and a function that calls itself
  • In functional languages, we avoid changing data that is given to a function
    • Python, on the flip side, prefers iterative solutions and has a maximum recursion depth (how many times a function can call itself)
  • Space Complexity
    • Space Complexity is a measure of how much more working storage or extra storage is needed as an algorithm grows
    • For example, recursive binary search runs in O(log n) in Python
    • Tail call optimization in some programming languages, such as Swift, reduces the space and computing overhead of recursive functions when the recursive call is the last line of code in the function. Python does not implement tail call optimization, so the iterative version is generally the safer, more efficient choice

 

Reporting with SQL (Review time!)

  • Ordering
    • SELECT <columns> FROM <table> ORDER BY <column>;

SELECT * FROM customers ORDER BY last_name ASC, first_name ASC;

  • Limiting
    • SELECT * FROM <table> LIMIT <# of rows>;
    • SELECT * FROM campaigns ORDER BY sales DESC LIMIT 3;
  • Offset
    • The offset keyword is used with SELECT and ORDER BY to provide a range to select records
    • — SELECT * FROM <table> LIMIT <# of rows> OFFSET <skipped rows>;
    • SELECT * FROM orders LIMIT 50 OFFSET 100;
  • Manipulating Text
  • Aggregation
  • Date times

 

Jan-Feb Learning 2019

What’s This?

I’m trying to give myself at least half an hour during the workday (or at least blocking two hours or so a week) to learn something new – namely taking classes/reviewing what I know on Treehouse, reading job-related articles, and reading career-related books. Tracking notables here on a monthly basis as a self-commitment, to retain things in memory, and as a reference. I fell off posting this for the last six months as work and life have been insanely busy and my notes inconsistent across proprietary work versus my own, but it’s worth a round-up here. Posting with good intentions for next year. Reminding myself that if I don’t capture every bit I did, it’s alright. Just keep yourself accountable.

Books Read

Inspired: How To Create Products Customers Love

Some favorite quotes from Kindle highlights:

  • This means constantly creating new value for their customers and for their business. Not just tweaking and optimizing existing products (referred to as value capture) but, rather, developing each product to reach its full potential. Yet, many large, enterprise companies have already embarked on a slow death spiral. They become all about leveraging the value and the brand that was created many years or even decades earlier. The death of an enterprise company rarely happens overnight, and a large company can stay afloat for many years. But, make no mistake about it, the organization is sinking, and the end state is all but certain.
  • The little secret in product is that engineers are typically the best single source of innovation; yet, they are not even invited to the party in this process.
  • To summarize, these are the four critical contributions you need to bring to your team: deep knowledge (1) of your customer, (2) of the data, (3) of your business and its stakeholders, and (4) of your market and industry.
  • In the products that succeed, there is always someone like Jane, behind the scenes, working to get over each and every one of the objections, whether they’re technical, business, or anything else. Jane led the product discovery work and wrote the first spec for AdWords. Then she worked side by side with the engineers to build and launch the product, which was hugely successful.
  • Four key competencies: (1) team development, (2) product vision, (3) execution, and (4) product culture.
  • It’s usually easy to see when a company has not paid attention to the architecture when they assemble their teams—it shows up a few different ways. First, the teams feel like they are constantly fighting the architecture. Second, interdependencies between teams seem disproportionate. Third, and really because of the first two, things move slowly, and teams don’t feel very empowered.
  • I strongly prefer to provide the product team with a set of business objectives—with measurable goals—and then the team makes the calls as to what are the best ways to achieve those goals. It’s part of the larger trend in product to focus on outcome and not output.

In my experience working with companies, only a few companies are strong at both innovation and execution. Many are good at execution but weak at innovation; some are strong at innovation and just okay at execution; and a depressing number of companies are poor at both innovation and execution (usually older companies that lost their product mojo a long time ago, but still have a strong brand and customer base to lean on).

 

Articles Read

Machine learning – is the emperor wearing clothes

  1. “The purpose of a machine learning algorithm is to pick the most sensible place to put a fence in your data.”
  2. Different algorithms, eg. vector classifier, decision tree, neural network, use different kinds of fences
  3. Neural networks give you a very flexible boundary which is why they’re so hot now

Some Key Machine Learning Definitions

  1. “A model is overfitting if it fits the training data too well and there is a poor generalization of new data.”
  2. Regularization is used to estimate a preferred complexity of a machine learning model so that the model generalizes to avoid overfitting and underfitting by adding a penalty on different parameters of the model – but this reduces the freedom of the model
  3. “Hyperparameters cannot be estimated from the training data. Hyperparameters of a model are set and tuned depending on a combination of some heuristics and the experience and domain knowledge of the data scientist.”

Audiences-Based Planning versus Index-Based Planning

  • Index is the relative composition of a target audience of a specific program or network as compared to the average size audience in TV universe to give marketers/agencies a gauge of the value of a program or network relative to others using the relative concentrations of a specific target audience
  • Audience-based buying does not account for the relative composition of an audience or the context within which the audience is likely to be found, but rather values the raw number of individuals in a target audience who watch a given program, their likelihood of being exposed to an ad, and the cost of reaching them with a particular spot. Really it’s buying audiences versus buying a particular program
  • Index-based campaigns follow TV planning model: maximum number of impressions of a given audience at minimum price -> buy high-indexing media against a traditional age/demo: note this doesn’t include precision index targeting
  • A huge issue is that TV is insanely fragmented, so even if campaigns are hitting GRP targets, they’re doing so by increasing frequency rather than total reach
  • Note: GRP is a measure of the size of an ad campaign by medium or schedule – not the size of the audience reached. GRPs quantify impressions as a percentage of the target population, and this percentage may be greater than 100. It measures impressions in relation to the number of people and is the metric used to compare the strength of components in a media plan. There are several ways to calculate GRPs, e.g. GRP % = 100 * Reach % * Avg Freq, or simply summed ratings: a spot with a TV rating of 4 placed on 5 episodes = 20 GRPs (see the sketch after this list)
  • Index-based planning is about impressions delivered over balancing of reach and frequency. Audience-based is about reaching likely customers for results
  • DSPs, etc. should be about used optimized algorithms to assign users probability of being exposed to a spot to maximize probabilities of a specific target-audience reach
  • Audience-based planning is about maximizing reach in most efficient way possible whereas index-based buying values audience composition ratios
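A quick sketch of the two GRP calculations mentioned above, with illustrative numbers:

# Two ways of computing GRPs from the note (numbers are illustrative).
def grp_from_reach_and_freq(reach_pct, avg_frequency):
    return reach_pct * avg_frequency            # e.g. 25% reach * 4 exposures = 100 GRPs

def grp_from_ratings(rating, placements):
    return rating * placements                  # e.g. rating of 4 on 5 episodes = 20 GRPs

print(grp_from_reach_and_freq(25, 4))   # 100
print(grp_from_ratings(4, 5))           # 20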

Finding the metrics that matter for your product

  1. “Where most startups trip up is they don’t know how to ask the right questions before they start measuring.”
  2. Heart Framework Questions:
    • If we imagine an ideal customer who is getting value from our product, what actions are they taking?
    • What are the individual steps a user needs to take in our product in order to achieve a goal?
    • Is this feature designed to solve a problem that all our users have, or just a subset of our users?
  3. Key points in a customer journey:
    1. Intent to use: The action or actions customers take that tell us definitively they intend to use the product or feature.
    2. Activation: The point at which a customer first derives real value from the product or feature.
    3. Engagement: The extent to which a customer continues to gain value from the product or feature (how much, how often, over how long a period of time etc).

A Beginners Guide to Finding the Product Metrics That Matter

  1. It’s actually hard to find the metrics that matter, and there’s a trap of picking too many indicators
  2. Understand where your metrics fall, eg. under the HEART framework: Happiness, Engagement, Adoption, Retention, Task Success
  3. Don’t measure everything you can and don’t fall into the vanity metrics trap; instead, examples of good customer-oriented metrics:
    • Customer retention
    • Net promoter score
    • Churn rate
    • Conversions
    • Product usage
    • Key user actions per session
    • Feature usage
    • Customer Acquisition costs
    • Monthly Recurring Revenue
    • Customer Lifetime Value

Algorithms and Data Structures

Intro to Algorithms

  • An algorithm is the set of steps a program takes to complete a task – the key skill is being able to identify which algorithm or data structure is best for the task at hand
  • Algorithm:
    • Clearly defined problem statement, input, and output
    • Distinct steps performed in a specific order
    • Should produce a consistent result
    • Should finish in finite amount of time
  • Evaluating Linear and Binary Search Example
  • Correctness
    • 1) in every run against all possible values in input data, we always get output we expect
    • 2) algorithm should always terminate
  • Efficiency:
  • Time Complexity: how long it takes
  • Space Complexity: amount of memory taken on computer
  • Best case, Average case, Worst case

Efficiency of an Algorithm

  • Worst case scenario/Order of Growth used to evaluate
  • Big O: theoretical definition of the complexity of an algorithm as a function of input size n, eg. O(n) – the order of magnitude of complexity
  • Logarithmic pattern: in general, for an input of size n, the worst-case number of tries is log2(n) + 1, i.e. O(log n) – see the binary search sketch below
  • Logarithmic or sublinear runtimes are preferred to linear because they are more efficient
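
A minimal sketch of the linear vs. binary search comparison mentioned above (my own code): linear search is O(n) in the worst case, while binary search on sorted data halves the range each step, so it takes at most roughly log2(n) + 1 comparisons, i.e. O(log n):

# Linear search: worst case checks every element -> O(n)
def linear_search(values, target):
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

# Binary search: requires sorted input, halves the search range each step -> O(log n)
def binary_search(sorted_values, target):
    low, high = 0, len(sorted_values) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

data = list(range(1_000_000))
print(linear_search(data, 999_999))  # up to 1,000,000 comparisons
print(binary_search(data, 999_999))  # at most ~20 comparisons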

Google Machine Learning Crash Course

Reducing Loss

  • Update model parameters by computing the gradient – the negative gradient tells us how to adjust model parameters to reduce loss
  • Gradient: derivative of loss with respect to weights and biases
  • Taking small steps in the direction of the negative gradient to reduce loss is known as gradient descent
  • Neural nets: strong dependency on initial values
  • Stochastic Gradient Descent: one example at a time
  • Mini-Batch Gradient Descent: batches of 10-1000 – losses and gradients averaged over the batch
  • A machine learning model gets trained with an initial guess for weights and bias, iteratively adjusting those guesses until the weights and bias have the lowest possible loss
  • Convergence refers to a state reached during training in which training loss and validation loss change very little or not at all with each iteration – additional training on the current data set will not improve the model at this point
  • For regression problems, the resulting plot of loss vs. w1 will always be convex
  • Because calculating loss for every conceivable value of w1 over an entire data set would be an inefficient way of finding the convergence point – gradient descent allows us to calculate loss convergence iteratively
  • The first step is to pick a starting value for w1. The starting point doesn’t matter so many algorithms just use 0 or a random value.
  • The gradient descent algorithm calculates the gradient of the loss curve at the starting point as the vector of partial derivatives with respect to weights. Note that a gradient is a vector so it has both a direction and magnitude
  • The gradient always points in the direction of the steepest increase in the loss function and the gradient descent algorithm takes a step in the direction of the negative gradient in order to reduce loss as quickly as possible
  • The gradient descent algorithm adds some fraction of the gradient’s magnitude to the starting point and repeats this process to step closer to the minimum
  • Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (or step size)
    • eg. If the gradient magnitude is 2.5 and the learning rate is .01, the gradient descent algorithm will pick the next point .025 away from the previous point
  • Hyperparameters are the knobs that programmers tweak in machine learning algorithms. You want to pick a goldilocks learning rate: too small and training will take too long; too large and the next point will bounce haphazardly and could overshoot the minimum
  • Batch is the total number of examples you use to calculate the gradient in a single iteration of gradient descent
  • Redundancy becomes more likely as the batch size grows, and there are diminishing returns after a while in smoothing out noisy gradients
  • Stochastic gradient descent uses a batch size of 1 – a single example. With enough iterations this can work but it is super noisy
  • Mini-batch stochastic gradient descent: a compromise between full-batch and SGD, usually between 10 and 1000 examples chosen at random; it reduces the noise more than SGD while being more efficient than full batch
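
A bare-bones sketch of these ideas (my own code, not from the course): gradient descent on a single weight w1 for a squared-loss regression, with the learning rate scaling each step and a mini-batch drawn at random each iteration:

# Mini-batch gradient descent for y ~ w1 * x with squared loss (illustrative only).
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(1000)
y = 3.0 * x + rng.randn(1000) * 0.1   # true weight is 3.0

w1 = 0.0            # starting point (could also be random)
learning_rate = 0.1
batch_size = 32

for step in range(200):
    idx = rng.choice(len(x), batch_size, replace=False)  # draw a mini-batch
    xb, yb = x[idx], y[idx]
    pred = w1 * xb
    # dLoss/dw1 for L = mean((y - w1*x)^2), averaged over the batch
    grad = (-2 * xb * (yb - pred)).mean()
    w1 -= learning_rate * grad    # step in the direction of the negative gradient

print(round(w1, 3))  # converges near 3.0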

First Steps with Tensorflow

  • Tensorflow is a framework for building ML models. TF provides toolkits that let you construct models at your preferred level of abstraction.
  • The Estimator class encapsulates logic that builds a TF graph and runs a TF session. A graph in TF is a computation specification – nodes in the graph represent operations; edges are directed and represent passing the result of an operation (a Tensor) as an operand to another operation


Jul-Dec Learning

What’s This?

I’m trying to give myself at least half an hour during workdays (or at least two hours or so a week) to learn something new – namely taking classes/reviewing what I know on Treehouse, reading job-related articles, and reading career-related books. Tracking notables here on a monthly basis as a self-commitment, to retain in memory, and as reference. I fell off posting this the last six months as work and life have been insanely busy and my notes inconsistent across proprietary work versus my own, but it is worth a round-up here. Posting with good intentions for next year. Reminding myself that if I don’t capture every bit I did, it’s alright. Just keep yourself accountable.

Books Read:

So Good They Can’t Ignore You

Key Points:

  • It’s not about passion – it’s about gaining career capital so you have more agency over a career you want.
  • Control traps 1) you don’t have enough career capital to do what you want 2) employers don’t want you to change/advance/slowdown because you have skills valuable to them
  • Good jobs have autonomy, financial viability, and mission – you can’t get there on passion alone.
  • Figure out if the market you wish to succeed in is winner-take-all (one killer skill, eg. screenwriting is all about just getting a script read) or auction-based (a diverse collection of skills, eg. running a complex business).
  • Make many little bets and try different things that give instant feedback to see what is working or not and show what you’re doing in a venue that will get you noticed.
  • On Learning
    • Research bible-routine – summarize what you might work on – description of result and strategies used to do it.
    • Hour-tally and strain – just work on for an hour and keep track of it
    • Theory-Notebook – brainstorm notebook that you deliberately keep track of info in
    • Carve out time to research and independent projects
  • “Working right trumps finding the right work” p228
  • Good visual summary

The Manager’s Path

Key Points:

  • “Your manager should be the person who shows you the larger picture of how your work fits into the team’s goals, and helps you feel a sense of purpose in the day-to-day work”
  • “Developing a sense of ownership and authority for your work and not relying for manager to set the tone”
  • “Especially as you become more senior, remember that your manager expects you to bring solutions, not problems”
  • “Strong engineering managers can identify the shortest path through the systems to implement new features”
  • Dedicate 20% of time in planning meetings to sustainability work
  • “Be careful that locally negative people don’t stay in that mindset on your team for long. The kind of toxic drama that is created by these energy vampires is hard for even the best manager to combat. The best defense is a good offense in this case”
    • You are not their parent – treat them as adults and don’t get emotionally invested in every disagreement they have with you personally.

Articles:

What is a predicate pushdown? In mapreduce

  1. The concept: if you run a whole query in one place, you pull all the data to it and spawn a lot of network traffic, making that query slow and costly. However, if you push down parts of the query to where the data is stored and thus filter out most of the data, you reduce network traffic.
  2. You filter conditions as True or False – predicates, and pushdown query to where the data resides
  3. For example, you don’t need to pass every single column through every map reduce job in the pipeline, so you filter early and avoid reading the columns you don’t need

What is a predicate pushdown?

  1. The basic idea is to push certain parts of SQL queries (the predicates) to where the data lives to optimize the query by filtering out data earlier rather than later so it skips reading entire files or chunks of files to reduce network traffic/processing time
  2. This is usually done with a function that returns a boolean in the WHERE clause to filter out data
  3. Eg. the example below, with the where clause “WHERE a.country = ‘Argentina’”
SELECT
  a.*
FROM
  table1 a
JOIN 
  table2 b ON a.id = b.id
WHERE
  a.country = 'Argentina';

The Leaders Calendar

  1. 6 hours a day of non-work time, half with family and some downtime with hobbies
  2. Setting norms and expectations around e-mail is essential. For example, the CEO sending e-mails late at night sets a wrong example for the company, or the CEO’s time is spent cc’d on endless irrelevant items.
  3. Be agenda-driven to optimize limited time and to avoid letting only the loudest voices stand out, so that the important work can get done – not just the work that appears most urgent – and there is room to work on strategy.
    1. A key way to do this is to limit routine activities that can be given to a direct report

What People Don’t Tell You About Product Management

  1. “Product Management is a great job if you like being involved in all aspects of a product — but it’s not a great job if you want to feel like a CEO.”
    1. You don’t necessarily make the strategy, have resources, or have the ability to fire people. Your job is to get it done by being resourceful and convincing.
  2. Product Managers should channel the needs of the customer and follow a product from conception, dev, launch, and beyond. Be a cross-functional leader coordinating between R&D, Sales, Marketing, Manufacturing, and Operations. Leadership and coordination are key. Your job is to make strategy happen by convincing the people you work with.
  3. “For me, product management is exciting and stressful for the same reason: there’s unpredictability, there’s opportunity to create something new (which also means it may be unproven), and you’re usually operating with less data than you’d like, and everything is always a little bit broken.”

Web Architecture 101

  1. In web dev you almost always want to scale horizontally, meaning you add more machines into your pool of resources, versus vertically, meaning scaling by adding more power (eg. CPU, RAM) to an existing machine. This redundancy gives you a fallback so your applications keep running if a server goes down and makes your app more fault tolerant. You can also minimally couple different parts of the app backend to run on different servers.
  2. Job queues store lists of jobs that need to be run asynchronously – eg Google does not search the entire internet every time you do a search, it crawls the web asynchronously and updates search indexes along the way
  3. Typical data pipeline: firehose that provides a streaming interface to ingest and process data (eg. Kinesis and Kafka) -> raw data as well as final transformed/augmented data saved to cloud storage (eg. S3) -> data loaded into a data warehouse for analysis (eg. Redshift)

Running in Circles – Why Agile Isn’t Working and What We Do Differently

  1. “People in our industry think they stopped doing waterfall and switched to agile. In reality they just switched to high-frequency waterfall.”
  2. “Only management can protect attention. Telling the team to focus only works if the business is backing them up.”
  3. Think of software development as going uphill when you’re finding out the complexity/uncertainty and then downhill when you have certainty.

Product Managers – You Are Not the CEO of Anything

  1. Too many product managers think their role is that of an authoritarian CEO (with no power), which is often disastrous because they think they have all the answers.
  2. You gain credibility through your actions and leadership skills.
  3. “Product management is a team sport after all, and the best teams don’t have bosses – they have coaches who ensure all the skills and experiences needed are present on the team, that everyone is in the right place, knows where the goal is, and then gets out of the way and lets the team do what they do best in order to reach that goal.”

Product Prioritization: How Do You Decide What Belongs in Your Product?

  1. Radical vision with this mad libs template: Today, when [customer segment] want to [desirable activity/outcome], they have to [current solution]. This is unacceptable, because [shortcomings of current solutions]. We envision a world where [shortcomings resolved]. We are bringing this world about through [basic technology/approach].
  2. Four components to good product strategy
    1. Real Pain Points means “Who is it for?” and “What is their pain point?”
    2. Design refers to “What key features would you design into the product?” and “How would you describe your brand and voice?”
    3. Capabilities tackles the “Why should we be the ones doing it?” and “What is our unique capability?”
    4. Logistics is the economics and channels, like “What’s our pricing strategy?” and “What’s the medium through which we deliver this?”
  3. Then prioritize between what is sustainable and what is a good fit

To Drive Business Success Implement a Data Catalog and Data Inventory

  • Companies have a huge gap between simply knowing where the data is located and knowing what to do with it
  • Three types of metadata
    • Business Metadata: Give us the meaning of data you have in a particular set
    • Technical Metadata: Provide information on the format and structure of data – databases, programming envs, data modeling tools natively available
    • Operational Metadata: Audit trail of information of where the data came from, who created it, etc.
  • “Unfortunately, according to Reeve, new open source technologies, most importantly Hadoop, Hive, and other open source technologies do not have inherent capabilities to handle, Business, Technical AND Operational Metadata requirements. Firms cannot afford this lack as they confront a variety of technologies for Big Data storage, noted Reeve. It makes it difficult for Data Managers to know where the data lives.” http://www.dataversity.net/drive-business-success-implement-data-catalog-data-inventory/

Why You Can’t be Data Driven Without a Data Catalog

  1. A lot of data availability in organizations is “tribal knowledge,” which severely limits the impact data has in an organization. Data catalogs should capture tribal knowledge
  2. Data catalogs need to work to have common definitions of important concepts like customer, product, and revenue, especially since different divisions actually will think of those concepts differently.
  3. A solution one company used was a Looker-powered integrated model with a GitBook data dictionary.

What is a data catalog?

  1. At its core, a data catalog centralizes metadata. “The difference between a data catalog and a data inventory is that a data catalog curates the metadata based on usage.”
  2. Different types of data catalog users fall into three buckets
    1. Data Consumers – data and business analysts
    2. Data Creators – data architects and database engineers
    3. Data Curators – data stewards and data governors
  3. A good data catalog must
    1. Centralize all info on data in one place – structure, quality, definitions, and usages
    2. Allow users to self-service
    3. Auto-populate consistently and accurately

Why You Need a Data Catalogue and How to Select One

  1. “A good data catalog serves as a searchable business glossary of data sources and common data definitions gathered from automated data discovery, classification, and cross-data source entity mapping. Automated data catalog population is done via analyzing data values and using complex algorithms to automatically tag data, or by scanning jobs or APIs that collect metadata from tables, views, and stored procedures.”
  2. Should foster search and reuse of existing data in BI tools
  3. Should almost be an open platform where many people can use to see what they want to do with that

10 Tips to Build a Successful Data Catalog

  1. Who – understand the owner or trusted steward for asset
  2. What – aim for a basic description of an asset as a minimum: business terminology, report functionality, and basic purpose of a dataset
  3. Where – where the underlying assets are

The Data Catalog – A Critical Component to Big Data Success

  1. Most data lakes do not have effective metadata management capabilities, which makes using them inefficient
    1. Need data access security solutions (role and asset), audit trails of update and access, and inventory of assets (technical and business metadata)
  2. First step is to inventory existing data and make it usable at a data store level – table, file, database, schema, server, or directory
  3. Figure out how to ingest new data in a structured manner, eg. a data scientist wants to incorporate new data in modeling

Data Catalog for the Enterprise – What, Why, & What to look for?

  1. With the growth of enormous data lakes, data sets need to be discovered, tagged, and annotated
  2. Data catalogs can also eliminate database duplication
  3. Challenges of implementing data catalogs include educating the org on the value of a single source of data and dealing with tribalism

Bridging the gap: How and why product management differs from company to company

  1. NYC vs SF product management disciplines are different due to key ecosystem factors: NYC is driven by tech enhancing existing industries and is thus sales driven, while the Bay Area creates entire new categories and is vision and collaboration driven. NYC has more stable exits but fewer huge ones
  2. This dichotomy in product management approaches is due to how to bring value to different markets
  3. Successful product managers need six key ingredients
    1. Strategic Thinking
    2. Technical proficiency
    3. Collaboration
    4. Communication
    5. Detail Orientation
    6. User Science

Treehouse Learning

Rest API Basics

  • REST (Representational State Transfer) is really just another layer of meaning on top of HTTP
  • An API provides a programmatic interface, a code UI basically, to that same logic and data model. Communication is done through HTTP, and the burden of creating an interface is on the users of the API, not the creator
  • Easy way to say it – APIs provide code that makes things outside of our application easier to interact with inside our application
  • Resources: usually a model in your application -> these are retrieved, created, or modified via API endpoints representing collections of records
    • api/v1/players/567
  • Client request types to API:
    • GET is used for fetching either a collection of resources or a single resource.
    • POST is used to add a new resource to a collection, eg. POST to /games to create a new game
    • PUT is HTTP method we use when we want to update a record
    • DELETE is used by sending a DELETE request to a detail record (a URL for a single record) to delete that record
  • Requests
    • We can use different aspects of the requests to change the format of our response, the version of the API, and more.
    • Headers make transactions more clear and explicit, eg. Accept (specifies file format requester wants), Accept-Language, Cache-Control
  • Responses
    • Content-Type: text/javascript – > header to specify what content we’re sending
    • Other headers include Last-Modified, Expired, and Status (eg. 200, 500, 404)
    • 200-299 content is good and everything is ok
    • 300-399 request was understood but the requested resource is now located somewhere else. Use these status codes to perform redirects to URLs most of the time
    • 400-499 client error codes, eg. a wrongly constructed request or 404 when a resource no longer exists
    • 500-599 Server End errors
  • Security
    • Cache: usually a service running in memory that holds recently needed results such as a newly created record or a large data set. This helps prevent direct database calls or costly calculations on your data.
    • Rate Limiting: allowing each user only a certain number of requests to our API in a given period to prevent too many requests or DDOS attacks
    • A common authentication method is the use of API tokens – you give your users a token and secret key as a pair and they use those when they make requests to your server so you know they are who they say they are.
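
A small sketch of these request types (my own example using Python’s requests library; the /api/v1/... endpoint paths and the token header name are hypothetical, not from the course):

# Illustrative only: base URL, endpoints, and auth header are made up.
import requests

BASE = "https://example.com/api/v1"
headers = {"Accept": "application/json", "Authorization": "Token my-api-token"}

r = requests.get(f"{BASE}/players/567", headers=headers)      # fetch one resource
print(r.status_code)                                           # 2xx = OK, 4xx = client error

r = requests.post(f"{BASE}/games", json={"name": "chess"}, headers=headers)         # create
r = requests.put(f"{BASE}/games/1", json={"name": "speed chess"}, headers=headers)  # update
r = requests.delete(f"{BASE}/games/1", headers=headers)        # delete a detail record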

Planning and Managing the Database

  • Data Definition Language – language that’s used to define the structure of a database

When Object Literals Are Not Enough

  • Use classes instead of object literals to not repeat so much code over and over again
  • Class is a specification, a blueprint for an object that you provide with a base set of properties and methods
  • In a constructor method, this refers to the object that is being created, which is why it’s the keyword here.

Google Machine Learning Course (30% through highlights)

ML – reduces time programming

  • Scales making sense of data
  • Makes projects customizable much more easily
  • Lets you solve programming problems that humans can’t do but algos do well
  • Use stats and not logic to solve problems, flips the programming paradigm a bit

Label is the thing we’re predicting, eg. the y in linear regression
Features are Xs or way we represent our data, an input variable
– eg. header, words in e-mail, to and from addresses, routing info, time of day
Example: particular instance of data, x, eg. an email

Labeled example has { features, label}: (x, y) – used to train model ( email, spam or not spam)
Unlabeled examples has {features, ?}: (x, ?) – used for making predictions on new data (email, ?)
Model: thing that does predicting. Model maps examples to predicted labels: y’ – defined by internal parameters, which are learned

Framing: What is Supervised Machine Learning? Systems learn to combine inputs to produce useful predictions on never before seen data
* Training means creating or learning the model.
* Inference means applying the trained model to unlabeled examples to make useful predictions (y’)
* Regression models predict continuous values: eg. value of a house, probability a user will click on an ad
* Classification model: predicts discrete values, eg. is the given e-mail message spam or not spam? Is this an image of a dog, cat, or hamster?

Descending into ML
y = wx + b

w refers to the weight vector, which gives the slope

b gives bias

Loss: loss means how well the line is predicting an example, eg. distance from the line
* loss is on a 0 through positive scale
* Convenient way to define loss for linear regression
* L2 Loss, also known as squared error = the square of the difference between prediction and label: (observation – prediction)² = (y – y’)²
* We care about minimizing loss all across datasets
* Measure of how far a model’s predictions are from its label – a measure of how bad the model is

Feature is an input variable of x value – something we know

Bias: b, an intercept or offset from an origin. Bias (also known as the bias term) is referred to as b or w0 in machine learning models.

Inference: process of making predictions by applying trained models to unlabeled examples. In statistics, inference refers to the process of fitting the parameters of a distribution conditioned on some observed data

Unlabeled example
An example that contains features but no label. Unlabeled examples are the input to inference. In semi-supervised and unsupervised learning, unlabeled examples are used during training.

Logistic regression
Model that generates probability for each possible discrete label value in classification problems by applying a sigmoid function to a linear prediction. Can be used for binary or multi-class classifications

Sigmoid function
function that maps logistic or multinomial regression output (log odds) to probabilities, returning a value between 0 and 1. The sigmoid function converts the model’s raw output into a probability

K-means: clustering algorithm from signal analysis

Random Forest
Ensemble approach to finding the decision tree that best fits the training data by creating many decision trees and then determining the average – the random part of the term refers to building each of the decision trees from a random selection of features; the forest refers to the set of decision trees

Weight
Coefficient for a feature in a linear model or edge in a deep network. Goal of training a linear model is to determine the ideal weight for each feature. If a weight is 0, then its corresponding feature does not contribute to the model

Mean squared error (MSE): average squared loss per example across a data set -> sum the squared losses for the individual examples and divide by the number of examples

Although MSE is commonly-used in machine learning, it is neither the only practical loss function nor the best loss function for all circumstances.
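
A quick numeric sketch of L2 loss and MSE (my own toy numbers, purely illustrative):

# MSE: sum the squared losses over the examples, divide by the number of examples.
observations = [3.0, -0.5, 2.0, 7.0]   # y  (labels)
predictions  = [2.5,  0.0, 2.0, 8.0]   # y' (model outputs)

squared_losses = [(y - y_hat) ** 2 for y, y_hat in zip(observations, predictions)]
mse = sum(squared_losses) / len(squared_losses)
print(squared_losses)  # [0.25, 0.25, 0.0, 1.0]
print(mse)             # 0.375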

empirical risk minimization (ERM): Choosing the function that minimizes loss on the training set.

sigmoid function: A function that maps logistic or multinomial regression output (log odds) to probabilities, returning a value between 0 and 1. In other words, the sigmoid function converts the raw output from logistic regression into a probability between 0 and 1.

binary classification: classification task that outputs one of two mutually exclusive classes, eg. hot dog not a hot dog

Logistic Regression
* Prediction method that gives us probability estimates that are calibrated
* Sigmoid: something that gives a bounded value between 0 and 1
* useful for classification tasks
* regularization important as model will try to drive losses to 0 and weights may go crazy
* Linear Logistic Regression is fast, efficient to train, and efficient to make predictions and scales to massive data and good for low latency data
* A model that generates a probability for each possible discrete label value in classification problems by applying a sigmoid function to a linear prediction. Although logistic regression is often used in binary classification problems, it can also be used in multi-class classification problems (where it becomes called multi-class logistic regression or multinomial regression).

Many problems require a probability estimate as output. Logistic regression is an extremely efficient mechanism for calculating probabilities. Practically speaking, you can use the returned probability in either of the following two ways:
* “As is”
* Converted to a binary category.

Suppose we create a logistic regression model to predict the probability that a dog will bark during the middle of the night. We’ll call that probability:
* p(bark | night)
* If the logistic regression model predicts a p(bark | night) of 0.05, then over a year, the dog’s owners should be startled awake approximately 18 times:
* startled = p(bark | night) * nights
* 18 ~= 0.05 * 365

In many cases, you’ll map the logistic regression output into the solution to a binary classification problem, in which the goal is to correctly predict one of two possible labels (e.g., “spam” or “not spam”).
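
A small sketch tying these pieces together (my own code; the linear score of -2.94 is made up so the probability lands near 0.05): the sigmoid squashes a linear score into a probability, which can be used “as is” (e.g. the expected number of barking nights) or thresholded into a binary label:

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))   # maps any real number into (0, 1)

p_bark_given_night = sigmoid(-2.94)      # ~0.05 for this made-up linear score
print(round(p_bark_given_night, 2))      # 0.05 -> "as is" use:
print(round(p_bark_given_night * 365))   # ~18 startled awakenings per year

threshold = 0.5                          # classification (decision) threshold
print("bark" if p_bark_given_night >= threshold else "no bark")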

Early Stopping
* Regularization method that ends model training before training loss finishes decreasing. You stop when loss on the validation dataset starts to increase

Key takeaways:
* Logistic regression models generate probabilities. In order to map a logistic regression value to a binary category, you must define a classification threshold (also called decision threshold), eg the value where you can categorize something as hotdog not a hotdog. (Note: Tuning a threshold for logistic regression is different from tuning hyperparameters such as learning rate)
* Log Loss is the loss function for logistic regression.
* Logistic regression is widely used by many practitioners.

Classification
* We can use logistic regression for classification by using fixed thresholds for probability outputs, eg, it’s spam if it exceeds .8.

You can evaluate classification performance by
* Accuracy: fraction of predictions we got right, but it has key flaws, eg. if there are class imbalances, such as when positives and negatives are extremely rare (for example predicting CTRs). You can have a model with no features but a bias term that causes it to always predict false; it would be highly accurate but have no value

Better is to look at True Positives and False Positives
* True Positives: Correctly Called
* False Positives: Called but not true
* False Negatives: Not predicted and it happened
* True Negatives: Not called and did not happen
* A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class.
* A false positive is an outcome where the model incorrectly predicts the positive class. And a false negative is an outcome where the model incorrectly predicts the negative class.

Precision: True positive/all positive predictions, how precisely was positive class right
Recall: True Positives/All Actual Positives: out of all the possible positives, how many did the model correctly identify
* If you raise the classification threshold, you reduce false positives and raise precision
* We might not know in advance what best classification threshold is – so we evaluate across many possible classification thresholds – this is ROC curve

Prediction Bias
* Compare the average of everything we predict to the average of everything we observe
* ideally – average of predictions == average of observations
* Logistic predictions should be unbiased
* Bias is a canary, zero bias does not mean all is good but it’s a sanity check. Look for bias in slices of data to guide improvements and debug model

Watch out for class imbalanced sets, where there a significant disparity between the number of positive and negative labels. Eg. 91% accurate predictions but only 1 TP and 8 FN, eg. 8 out of 9 malignant tumors end up undiagnosed.

Calibration Plots Showed Bucketed Bias
* Mean observation versus mean prediction

Precision = TP/(TP + FP): the fraction of positive predictions that were correctly classified
Recall = TP/(TP + FN) = how many actual positives were identified correctly, attempts to answer the question, how many of the actual positives was identified correctly?
* To evaluate the effectiveness of models, you must examine both precision and recall which are often in tension because improving precision typically reduces recall and vice versa.
* When you increase the classification threshold, the number of false positives decreases but false negatives increase, so precision increases while recall decreases.
* When you decrease the classification threshold, false positives increase and false negatives decrease, so recall increases while precision decreases.
* eg. If you have a model with 1 TP and 1 FP = 1/(1+1) = precision is 50% and when it predicts a tumor is malignant, it is correct 50% of the time

Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over the total amount of relevant instance
* Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program’s precision is 5/8 while its recall is 5/12. When a search engine returns 30 pages only 20 of which were relevant while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. So, in this case, precision is “how useful the search results are”, and recall is “how complete the results are”.
* In an information retrieval scenario, the instances are documents and the task is to return a set of relevant documents given a search term; or equivalently, to assign each document to one of two categories, “relevant” and “not relevant”. In this case, the “relevant” documents are simply those that belong to the “relevant” category. Recall is defined as the number of relevant documents retrieved by a search divided by the total number of existing relevant documents, while precision is defined as the number of relevant documents retrieved by a search divided by the total number of documents retrieved by that search.
* In information retrieval, a perfect precision score of 1.0 means that every result retrieved by a search was relevant (but says nothing about whether all relevant documents were retrieved) whereas a perfect recall score of 1.0 means that all relevant documents were retrieved by the search (but says nothing about how many irrelevant documents were also retrieved).
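
The dog-photo example above as a quick calculation (my own sketch of the arithmetic):

# 8 predicted dogs, 5 of which really are dogs, in a photo with 12 dogs total.
tp, fp, fn = 5, 3, 7          # fn = 12 actual dogs - 5 found
precision = tp / (tp + fp)    # 5/8  = how useful the results are
recall    = tp / (tp + fn)    # 5/12 = how complete the results are
print(round(precision, 3), round(recall, 3))  # 0.625 0.417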

ROC
* Receiver Operating Characteristics Curve
* Evaluate every possible classification threshold and look at true positive and false positive rates
* Area under that curve has probabilistic interpretation
* If we pick a random positive and random negative, what’s the probability my model ranks them in the correct order – that’s equal to area under ROC curve

Gives aggregate measure of performance aggregated across all possible classification thresholds
FP Rate on the X-axis, TP Rate on the Y-axis
AUC = area under curve
* Probability that the model ranks a random positive example more highly than a random negative example:
* One way of interpreting AUC is as the probability that the model ranks a random positive example more highly than a random negative example. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0.

Characteristics of AUC to note:
* AUC is scale-invariant. It measures how well predictions are ranked, rather than their absolute values. Note: this is not always desirable: sometimes we really do need well calibrated probability outputs, and AUC does not provide that
* AUC is classification-threshold-invariant. It measures the quality of the model’s predictions irrespective of what classification threshold is chosen.
* Classification-threshold invariance is not always desirable. In cases where there are wide disparities in the cost of false negatives vs. false positives, it may be critical to minimize one type of classification error. For example, when doing email spam detection, you likely want to prioritize minimizing false positives (even if that results in a significant increase of false negatives). AUC isn’t a useful metric for this type of optimization.
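
A hedged sketch of computing ROC AUC (my own example, assuming scikit-learn; the labels and scores are made up):

# AUC sweeps every possible classification threshold for us.
from sklearn.metrics import roc_auc_score

labels = [0, 0, 1, 1, 1, 0, 1, 0]                    # true classes
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.3]   # model probabilities
# Probability a random positive outranks a random negative:
print(roc_auc_score(labels, scores))
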
Logistic regression predictions should be unbiased.
* That is: “average of predictions” should ≈ “average of observations”. Good models should have near-zero bias.
* Prediction bias is a quantity that measures how far apart those two averages are. That is:
* prediction bias = average of predictions – average of labels in the data set
* Note: “Prediction bias” is a different quantity than bias (the b in wx + b)

A significant nonzero prediction bias tells you there is a bug somewhere in your model, as it indicates that the model is wrong about how frequently positive labels occur.
* For example, let’s say we know that on average, 1% of all emails are spam. If we don’t know anything at all about a given email, we should predict that it’s 1% likely to be spam. Similarly, a good spam model should predict on average that emails are 1% likely to be spam. (In other words, if we average the predicted likelihoods of each individual email being spam, the result should be 1%.) If instead, the model’s average prediction is 20% likelihood of being spam, we can conclude that it exhibits prediction bias.
* Causes are: incomplete feature set, noisy data set, buggy pipeline, biased training sample, overly strong regularization
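
Prediction bias as a two-line check (my own sketch, toy numbers):

predictions = [0.9, 0.8, 0.7, 0.9]   # model's predicted probabilities
labels      = [1, 1, 0, 0]           # observed outcomes
prediction_bias = sum(predictions) / len(predictions) - sum(labels) / len(labels)
print(prediction_bias)  # 0.325 -> significantly nonzero, worth investigating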

You might be tempted to correct prediction bias by post-processing the learned model—that is, by adding a calibration layer that adjusts your model’s output to reduce the prediction bias. For example, if your model has +3% bias, you could add a calibration layer that lowers the mean prediction by 3%. However, adding a calibration layer is a bad idea for the following reasons:
* You’re fixing the symptom rather than the cause.
* You’ve built a more brittle system that you must now keep up to date.
* If possible, avoid calibration layers. Projects that use calibration layers tend to become reliant on them—using calibration layers to fix all their model’s sins. Ultimately, maintaining the calibration layers can become a nightmare.

Apr-June 2018 Learning

Books Read (related to work/professional development/betterment):

Articles:

Giving meaning to 100 billion analytics events a day

  1. Tracking events are sent by the browser over HTTP to a dedicated component that enqueues them in a Kafka topic. You can build a Kafka equivalent in BigQuery to use as a Data Warehouse system
  2. ‘When dealing with tracking events, the first problem you face is the fact that you have to process them unordered, with unknown delays.
    1. The difference between the time the event actually occurred (event time) and the time the event is observed by the system (processing time) ranges from the millisecond up to several hours.’
  3. The key for them was finding ideal batch duration

What is a Predicate Pushdown?

  1. Basic idea is certain parts of SQL queries (the predicates) can be “pushed” to where the data lives and reduces query/processing time by filtering out data earlier rather than later. This allows you to optimize your query by doing things like filtering data before it is transferred over a network, loading into memory, skipping reading entire files or chunks of files.
  2. ‘A “predicate” (in mathematics and functional programming) is a function that returns a boolean (true or false). In SQL queries predicates are usually encountered in the WHERE clause and are used to filter data.’
  3. Predicate pushdowns filters differently in various query environments, eg Hive, Parquet/ORC files, Spark, Redshift Spectrum, etc.

I hate MVPs. So do your customers. Make it SLC instead.

  1. Customers hate MVPs, too – too M and almost never V. Simple, lovable, and complete (SLC) is the way to go
  2. The success loop of a product “is a function of love, not features”
  3. “An MVP that never gets additional investment is just a bad product. An SLC that never gets additional investment is a good, if modest product.”

Documenting for Success

  1. Keeping User Stories Lean and Precise with:
    1. User Story Objectives
    2. Use Case Description
    3. User Interaction and Wireframing
    4. Validations
    5. Communication Scenarios
    6. Analytics
  2. Challenges
    1. Lack of participation
    2. Documentation can go sour
  3. Solutions
    1. Culture – tradition of open feedback
    2. Stay in Touch with teams for updates
    3. Documentation review and feedback prior to sprint starts
    4. Track your documents

WTF is Strategy

  1. Strategic thinking is what sets apart seniors from juniors
  2. Strategy needs
    1. Mission: Problem you’re trying to solve and who for
    2. Vision: Idealized solution
    3. Strategy: principles and decisions informed by reality and caveated with assumptions that you commit to ahead of dev to ensure likelihood of success in achieving your vision
    4. Roadmap: Concrete steps
    5. Execution: Day-to-day activities
  3. “Strategy represents the set of guiding principles for your roadmapping and execution tasks to ensure they align with your mission and vision.”

Corporate Culture in Internet Time

  1. “”The dirty little secret of the Internet boom,” says Christopher Meyer, who is the author of “Relentless Growth,” the 1997 management-based-on-Silicon-Valley-principles book, “is that neither startup wizards nor the venture capitalists who fund them know very much about managing in the trenches.”
  2. “ The most critical factor in building a culture is the behavior of corporate leaders, who set examples for everyone else (by what they do, not what they say). From this perspective, the core problem faced by most e-commerce companies is not a lack of culture; it’s too much culture. They already have two significant cultures at play – one of hype and one of craft.”
  3. Leaders need to understand both craft and hype cultures since they have to rely on teams that come from both to deliver. They need to set-up team cultures and infrastructure that supports inter-team learning.

Do You Want to Be Known For Your Writing, or For Your Swift Email Responses? Or How the Patriarchy has fucked up your priorities

  1. Women are conditioned to keep proving themselves – our value is contingent on the ability to meet the expectations of others or we will be discredited. This is often true, but do you want to be known as a reliable source of work or for answering e-mails?
  2. Stop trying to get an A+ in everything, it’s a handicap in making good work. “Again, this speaks most specifically to women, POC, queers, and other “marginalized” folks. I am going to repeat myself, but this shit bears repeating. Patriarchy (and institutional bigotry) conditions us to operate as if we are constantly working at a deficit. In some ways, this is true. You have to work twice as hard to get half the credit. I have spent most of my life trying to be perfect. The best student. The best dishwasher. The best waitress. The best babysitter. The best dominatrix. The best heroin addict. The best professor. I wanted to be good, as if by being good I might prove that I deserved more than the ephemeral esteem of sexist asshats.”

Listen to me: Being good is a terrible handicap to making good work. Stop it right now. Just pick a few secondary categories, like good friend, or good at karaoke. Be careful, however of categories that take into account the wants and needs of other humans. I find opportunities to prove myself alluring. I spent a long time trying to maintain relationships with people who wanted more than I was capable of giving

  1. Stop thinking of no as just no – it’s saying yes to doing your best work

Dear Product Roadmap, I’m Breaking Up with You

  1. A major challenge is setting up roadmap priorities without real market feedback, especially in enterprise software
  2. Roadmaps should be planned with assets in place tied closely to business strategy
    1. A clearly defined problem and solution
    2. Understanding of your users’ needs
    3. User Journeys for the current experience
    4. Vision -> Business Goals -> User Goals -> Product Goals -> Prioritize -> Roadmap
  3. Prioritization should be done through the following lens: feasibility, desirability, and viability

The 7 Steps of Machine Learning Google Video

  • Models are created via training
  • Training helps create accurate models that answer questions correctly most of the time
  • This require data to train on
    • Defined features for telling apart beer and wine could be color and alcohol percentage
  • Gathering data, quality and quantity determine how good model can be
  • Put the data together and randomize the order so that ordering doesn’t affect how the model determines what a drink is, for example
  • Visualize and analyze during data prep to see if there’s an imbalance in the data for the model
  • Data needs to be split: most for training (70-80%) and some left for evaluation to test accuracy (20-30%)
  • A big choice is choosing a model – eg. some are better for images versus numerical data -> the beer or wine example has only two features to weigh
  • Weights matrix (m for linear)
  • Biases metric (b for linear)
  • Start with random values to test – this creates iterations and cycles of training steps, and the line moves to split wine v beer where you can evaluate the model against the data
  • Parameter tuning: how many times we run through the set -> does that lead to more accuracy; eg. learning rate is how far we are able to shift the line in each step – tuning hyperparameters is an experimental process, a bit more art than science
  • 7 Steps: Gathering Data -> Preparing Data -> Choosing a Model -> Training -> Evaluation -> Hyperparameter Tuning -> Prediction
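
A tiny sketch of the data-splitting step (my own example, assuming scikit-learn; the feature values for the beer/wine toy data are invented):

# Hold out 20-30% of the data for evaluation; train on the rest.
from sklearn.model_selection import train_test_split

X = [[0.4, 12.0], [0.7, 5.0], [0.5, 11.5], [0.9, 4.5]]  # e.g. [color, alcohol %]
y = ["wine", "beer", "wine", "beer"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, shuffle=True)  # shuffle so order doesn't matter
print(len(X_train), len(X_test))  # 3 1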

Qwik Start Baseline Infra Quest: 

  • Cloud Storage Google Console
  • Cloud IAM
  • Kubernetes Engine

Treehouse Learning:  

JavaScript OOP

  • In JavaScript, state is represented by object properties and behaviors are represented by object methods.
    • Radio that has properties like station and volume and methods like turning off or changing a station
  • An object’s states are represented by “properties” and its behaviors are represented by “methods.”
  • Putting properties and methods into a package and attaching it to a variable is called encapsulation.

Intro SQL Window Functions

  • A function available in some variations of SQL that lets you analyze a row in the context of the entire result set – compare one row to other rows in a query, eg. percent of total or moving average

Common Table Expressions using WITH

  • CTE – a SQL query that you name and reuse within a longer query, a temporary result set
  • You place a CTE at the beginning of a complete query using a simple syntax
--- create CTES using the WITH statement
WITH cte_name AS (
  --- select query goes here
)

--- use CTEs like a table
SELECT * FROM cte_name
  • CTE name is like an alias for the results returned by the query, you can then use the name just like a table name in the queries that follow the CTE
WITH product_details AS (
  SELECT ProductName, CategoryName, UnitPrice, UnitsInStock
  FROM Products
  JOIN Categories ON PRODUCTS.CategoryID = Categories.ID
  WHERE Products.Discontinued = 0
)

SELECT * FROM product_details
ORDER BY CategoryName, ProductName
SELECT CategoryName, COUNT(*) AS unique_product_count, 
SUM(UnitsInStock) AS stock_count
FROM product_details
GROUP BY CategoryName
ORDER BY unique_product_count
  • CTEs make code more readable, organize queries into reusable modules, can be combined into a single query, and can better match how we think of result sets in the real world
    • all orders in past month-> all active customers -> all products and categories
    • Each would be a CTE
  • Subqueries create result sets that look just like a table and can be joined to other tables
WITH all_orders AS (
  SELECT EmployeeID, Count(*) AS order_count
  FROM Orders
  GROUP BY EmployeeID
),
late_orders AS (
    SELECT EmployeeID, COUNT(*) AS order_count
    FROM Orders
    WHERE RequiredDate <= ShippedDate
    GROUP BY EmployeeID
)
SELECT Employees.ID, LastName,
all_orders.order_count AS total_order_count,
late_orders.order_count AS late_order_count
FROM Employees
JOIN all_orders ON Employees.ID = all_orders.EmployeeID
JOIN late_orders ON Employees.ID = late_orders.EmployeeID
  • Remember one useful feature of CTES is you can reference them later in other CTEs, eg. revenue_by_employee below pulling from all_sales
  • You can only reference a CTE created earlier in the query, eg first CTE can’t reference the third
WITH
all_sales AS (
  SELECT Orders.Id AS OrderId, Orders.EmployeeId,
  SUM(OrderDetails.UnitPrice * OrderDetails.Quantity) AS invoice_total
  FROM Orders
  JOIN OrderDetails ON Orders.id = OrderDetails.OrderId
  GROUP BY Orders.Id
),
revenue_by_employee AS (
  SELECT EmployeeId, SUM(invoice_total) AS total_revenue
  FROM all_sales
  GROUP BY EmployeeID
),
sales_by_employee AS (
  SELECT EmployeeID, COUNT(*) AS sales_count
  FROM all_sales
  GROUP BY EmployeeID
)
SELECT revenue_by_employee.EmployeeId,
Employees.LastName,
revenue_by_employee.total_revenue,
sales_by_employee.sales_count,
revenue_by_employee.total_revenue/sales_by_employee.sales_count AS avg_revenue_per_sale
FROM revenue_by_employee
JOIN sales_by_employee ON revenue_by_employee.EmployeeID = sales_by_employee.EmployeeID
JOIN Employees ON revenue_by_employee.EmployeeID = Employees.Id
ORDER BY total_revenue DESC

March 2018 Learning

Less than normal last month due to business travel

Books Read (related to work/professional development/betterment):

Articles:

Agile Died While You Were Doing Your Standup

  1. Agile has been implemented poorly and wholesale in enterprises by consultancies, in a way that mechanizes and dehumanizes teams and doesn’t respect the craft – causing them to deliver outputs instead of outcomes that drive value for customers
  2. The fix: Product management, UX, engineering, dev-ops, and other core competencies need to be one team under one leader, given autonomy and accountability to solve problems. If implemented correctly – it empowers teams to work toward shared outcomes with both velocity and accuracy.
  3. Embrace discovery – discovery data matched with shipped experiences creates real customer value, plus trust that teams can work autonomously with accountability and ship something that meets both company and user objectives.

 

Avoiding the Unintended Consequences of Casual Feedback

  • Your seniority casts a shadow over the org, and your casual feedback may be interpreted as a mandate – make sure it’s clear whether it’s an opinion, a strong suggestion, or a mandate
    1. Opinion: “one person’s opinion” – your title and authority shouldn’t enter into the equation
    2. Strong suggestion: falls short of telling the team what to do – the senior executive draws on experience but the team still feels empowered to take risks. This is the most difficult balance to strike and requires taming of egos to do what’s best – you also have to trust the people you’ve empowered to have the final say.
    3. Mandate: issued to avoid prohibitively costly mistakes – but too often, without the right justification, it signals a demotivating lack of trust

 

Ask Women in Product: What are the Top 3 things you look for when hiring a PM?

  1. Influence without authority – figuring out what makes you tick, your team, your customers. Read in between lines. How did you deal with past conflicts
  2. Intellectual curiosity- how did you deal with ambiguous problem or were intimidated
  3. Product sense – name compelling product experience you built
  4. Empathy – unmet needs and pain points – how would you design an alarm clock for the blind
  5. Product intuition – assess a product, feature, or user flow
  6. Listening and communication skills – read rooms for implicit and explicit

 

Why Isn’t Agile Working?

  1. Waiting time isn’t addressed properly
  2. Doesn’t account well for unplanned work, multitasking, and impacts from shared services
  3. Even though dev goes faster in agile, it has no bearing on making the right product decisions and working to realize benefits. Agile is useful when it serves as a catalyst for continuous improvement and the rest of the org structure is in line – eg. DevOps, the right management culture, incremental funding v project-based funding, doing less and doing work that matters, looking at shared services, mapping value streams, etc.

 

Treehouse Learning:  

Changing the object literal in the dice rolling application into a constructor function that takes in the number of sides as an argument. Each instance created calls the method for rolling the dice.

function Dice(sides) {

            this.sides = sides;
            this.roll = function() {

                        var randomNumber = Math.floor(Math.random() * this.sides) +1;
                        return randomNumber;

            }

}

var dice = new Dice(6) // new instance of a 6 sided die

 

Watch out for applications running code again and again unnecessarily, like in the code above. The JavaScript prototype property is like an object literal that the roll property can be added to; when we assign a function to it, it becomes a method and is no longer needed in the constructor function. Prototypes can be used as templates for objects, meaning values and behavior can be shared between instances of objects.

Dice.prototype.roll = function diceRoll() {

            var randomNumber = Math.floor(Math.random() * this.sides) +1;
            return randomNumber;

} // shared between all instances in template/prototype


function Dice(sides) {

            this.sides = sides;

}


Feb 2018 Learning

Books Read (related to work/professional development/betterment):

Creativity, Inc.

The Mythical Man Month

Articles:

pm@olin 10 Most Likely to Succeed and pm@olin 11 Capstone

  1. “ A lot of being a PM is rolling with what doesn’t cost very much, and helps make the team happy. You don’t always get the most done by optimizing.”
  2. “For a PM, It’s figuring out how to find a little extra time for the easter egg. It’s doing the extra work to get a cool side project into the product. It’s helping someone else learn a new skill. It’s the thank you cards or the day off after shipping.”
  3. Sometimes something as simple as colored markers to annotate pros and cons helps whiteboarding

Manager Energy Drain

  1. You can color-code your calendar based on what mental energy you will need (eg. 1-on-1 brain, teaching brain, planning brain) to manage that piece and defrag accordingly
  2. The best gift you can give direct reports is a messy unscoped project with a bit of a safety net to teach them -> give them guidance
  3. Say no to focus energy – don’t be afraid to go back and say no

The MVP is dead. Long live the RAT.

  1. RAT = Riskiest Assumption Test – after all, an MVP is not a product but a way of testing whether you’ve found a problem worth solving. RAT emphasizes building only what’s required to test your largest unknown
  2. All about rapid testing rather than creeping into perfect code, design, and danger of becoming a product
  3. It’s about maximizing discovery and removing temptations of putting resources on creating a more polished product

Scaling Agile At Spotify: An Interview with Henrik Kniberg

  1. “Autonomy is one of our guiding principles. We aim for independent squads that can each build and release products on their own without having to be tightly coordinated in a big agile framework. We try to avoid big projects altogether (when we can), and thereby minimize the need to coordinate work across many squads.”
  2. “By avoiding big projects, we also minimize the need to standardize our choice of tools.”
  3. The technical architecture is hugely important for the way we are organized. The organizational structure must play in harmony with the technical architecture. Many companies can’t use our way of working because their architecture won’t allow it.
    • We have invested a lot into getting an architecture that supports how we want to work (not the other way around); this has resulted in a tight ecosystem of components and apps, each running and evolving independently. The overall evolution of the ecosystem is guided by a powerful architectural vision.
    • We keep the product design cohesive by having senior product managers work tightly with squads, product owners, and designers. This coordination is tricky sometimes, and is one of our key challenges. Designers work directly with squads, but also spend at least 20% of their time working together with other designers to keep the overall product design consistent.”

Product Management Is Not Project Management

  1. Product management is not about making sure products ship on time – it’s about knowing the customer needs and defining the right product and evangelizing that internally
  2. Too often, Product Managers spend time writing specs, Gantt charts, and workflows instead of on customer problems, customer data, and articulating that to the company.
  3. Measuring Religiously means both analytics + talking to customers

When should you hire a Product Manager?

  1. Toxic things to a Product Management team: when it is too large and has overlaps in responsibility, it results in politics, land grabs for credit, and no clear owner on how to make decisions
  2. Don’t hire until there’s a pain point – eg can’t prioritize backlog, slow shipping bc of mismatched priorities and poor communication between teams, people don’t know why they’re building what they’re building
  3. “My least favorite way to slice a Product team is “I’ll do the high level strategy and they’ll do details” — it makes it hard for the detail-level person to make good calls. It also makes it harder for the high level person to connect with the rest of the team.”

Continuous Improvement + Quality Assurance

  1. Minimum viable feature set: releasing a feature is decoupled from deploying code; large features are deployed piecemeal over time (a rough feature-flag sketch follows this list)
  2. Debugging is twice as hard as writing code in the first place. Focus less on the mitigation of large, catastrophic failures – optimize for recovery rather than failure prevention. Failure is inevitable.
  3. Exploratory testing requires an understanding of the whole system and how it serves a community of users. Customer Experience is as much about technology as it is about product requirements
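A rough sketch of the “release decoupled from deploy” idea from point 1 – the flag object and names here are invented for illustration, not from the article:

// Hypothetical feature flag: the new code path is already deployed,
// but it only runs for the users the flag allows.
var featureFlags = {
  newCheckoutFlow: { enabled: true, rolloutPercent: 10 } // illustrative values
};

function isFeatureOn(flagName, userId) {
  var flag = featureFlags[flagName];
  if (!flag || !flag.enabled) {
    return false;
  }
  // simple percentage rollout: bucket users by id
  return (userId % 100) < flag.rolloutPercent;
}

if (isFeatureOn('newCheckoutFlow', 42)) {
  // new feature code, shipped piecemeal behind the flag
} else {
  // existing behavior
}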

Building Your Personal Brand Where You Work

  1. Make your boss aware of what you’re doing – women are often doers who don’t make it a point to highlight their accomplishments or how busy they are at work. A great tool is informal email reports. A template can be: weekly wins, areas of improvement for my team, what is coming next week, and what you need from your boss.
  2. Build brand equity with coworkers, because you will need people to defend you. Being liked matters more sometimes. You want an ally at every level; your boss should respect you, but it’s also important that entry-level employees respect you too.
  3. Keep track of your success and remember your wins, eg. tracking weekly, monthly, bi-annual, and annual wins

Product Manager versus Product Owner

  1. “Product Owner is a role you play on a Scrum team. Product Manager is the job”
  2. The ideal is for a Product Owner to spend half their time talking to customers and half working with the team, but this should vary. External vs. internal work will shift depending on the maturity and success of the product
  3. Product Managers in senior roles should concentrate on defining vision and strategy for teams based on market research, company goals, and the current state of products. The ones without Scrum teams, or with smaller teams, can help validate or contribute to strategy for future products.

How to Run an Effective Meeting

  1. Set the agenda so there is a compass for conversation. Start on time and end on time.
  2. End with an action plan that has next steps.
  3. Be clear, light bulb or gun – you have an idea or you want people to do it. “Your job as a leader is to be right at the ending of the meeting, not the beginning of the meeting.” Let people speak so you’ve heard all facts and opinions.

Managing Software Engineers *This is clearly an article from 2002, with all the problematic attitudes therein about not considering that people might have things like families

  1. Create work environment where best programmers will be satisfied enough to stay and where average programmers become good
  2. “One of the paradoxes of software engineering is that people with bad ideas and low productivity often think of themselves as supremely capable. They are the last people whom one can expect to fall in line with a good strategy developed by someone else. As for the good programmers who are in fact supremely capable, there is no reason to expect consensus to form among them.”
  3. Ideals to steal
    1. people don’t do what they are told
    2. all performers get the right consequences every day
    3. small, immediate, certain consequences are better than large future uncertain ones
    4. positive reinforcement is more effective than negative reinforcement
    5. ownership leads to high productivity

The What, Why, and How of Master Data Management

  1. Five kinds of data in corporations:
    1. “Unstructured—This is data found in e-mail, white papers like this, magazine articles, corporate intranet portals, product specifications, marketing collateral, and PDF files.
    2. Transactional—This is data related to sales, deliveries, invoices, trouble tickets, claims, and other monetary and non-monetary interactions.
    3. Metadata—This is data about other data and may reside in a formal repository or in various other forms such as XML documents, report definitions, column descriptions in a database, log files, connections, and configuration files.
    4. Hierarchical—Hierarchical data stores the relationships between other data. It may be stored as part of an accounting system or separately as descriptions of real-world relationships, such as company organizational structures or product lines. Hierarchical data is sometimes considered a super MDM domain, because it is critical to understanding and sometimes discovering the relationships between master data.
    5. Master—Master data are the critical nouns of a business and fall generally into four groupings: people, things, places, and concepts. Further categorizations within those groupings are called subject areas, domain areas, or entity types. For example, within people, there are customer, employee, and salesperson. Within things, there are product, part, store, and asset. Within concepts, there are things like contract, warrantee, and licenses. Finally, within places, there are office locations and geographic divisions. Some of these domain areas may be further divided. Customer may be further segmented, based on incentives and history. A company may have normal customers, as well as premiere and executive customers. Product may be further segmented by sector and industry. The requirements, life cycle, and CRUD cycle for a product in the Consumer Packaged Goods (CPG) sector is likely very different from those of the clothing industry. The granularity of domains is essentially determined by the magnitude of differences between the attributes of the entities within them.”
  2. Deciding what to manage and how it should be managed depends on criteria such as: behavior (how it interacts with other data, eg. customers buy products, which may be part of multiple hierarchies describing how they’re sold), life cycle (created, read, updated, deleted, searched – a CRUD cycle), cardinality, lifetime, complexity, value, volatility, and reuse
  3. Master Data Management is the tech, tools, and processes required to create and maintain consistent and accurate lists of master data, including identifying sources of master data, analyzing metadata, appointing data stewards, running a data-governance program, developing the master data model, choosing a toolset and infrastructure, generating and testing master data, modifying producing and consuming systems, implementing maintenance processes, and creating the master list with ETL-like steps (a toy sketch follows this list):
    1. Normalize data formats
    2. Replace missing values
    3. Standardize values
    4. Map attributes
    5. Needs versioning and auditing
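A toy sketch of those ETL-style cleanup steps – the field names and rules are invented for illustration, not from the white paper:

// Hypothetical cleanup of one source record while building a customer master list
function cleanCustomerRecord(record) {
  return {
    // 1. Normalize data formats (eg. dates to ISO strings)
    signedUp: new Date(record.signedUp).toISOString(),
    // 2. Replace missing values with an explicit default
    country: record.country || 'UNKNOWN',
    // 3. Standardize values (eg. one canonical casing for emails)
    email: record.email ? record.email.trim().toLowerCase() : null,
    // 4. Map attributes from the source system's names to the master schema
    fullName: record.customer_name
  };
}
// 5. Versioning and auditing would wrap calls like this with a change-log entry.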

Treehouse Learning:  

Object-Oriented-Javascript

  • An object is a container for values in the form of properties and functionality in the form of methods
    • Methods can return values or objects, but they don’t have to return anything at all
  • Accessing or assigning properties is known as getting and setting
  • Native Objects: no matter where your JavaScript program runs, it will have these objects, eg. number, string, object, boolean
  • Host Objects: provided by the host environment, eg. the browser, such as document, console, or element
  • Own Objects: created in your own programs, eg. characters in a game
  • Objects hide complexity and organize code – known as encapsulation
  • An object literal holds information about a particular thing at a given time – it stores the state of a thing.

Eg.

var person = {
  name: "Lauren",
  treehouseStudent: true,
  "full name": "Lauren Smith"
}

Access using dot notation or square brackets

person.name;
person.treehouseStudent;
person[“name”]
person[“treehouseStudent”]
person[“full name”]
  • Each key is actually a string – even when it isn’t written with quotes, the JavaScript interpreter treats it as one
  • Encapsulating code into a single block allows us to keep state and behaviors for a particular thing in one place and code becomes more maintainable

Adding method to an object

var contact = {
  fullName: function printFullName() {
  var firstName = "Andrew";
  var lastName = "Chalkley";
  console.log(firstName + " " + lastName);
  }
}

Anonymous Function

var contact = {
  fullName: function() {
    var firstName = "Andrew";
    var lastName = "Chalkley";
    console.log(firstName + " " + lastName);
  }
}
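Either version is called the same way – the named inner function only helps when reading stack traces:

contact.fullName(); // logs "Andrew Chalkley"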

We don’t always know the name of the variable that holds an object, so we can’t rely on it to access the object’s properties from inside a method. Depending on where and how a function is called, this can refer to different things. Think of this as the owner of the function, eg. the object the method is called on.

Eg.

var dice = {
            sides: 6,
            roll: function() {
                var randomNumber = Math.floor(Math.random() * this.sides) + 1; // this means object literal of dice in this case
                console.log(randomNumber);
            }
}

var dice10 = {
            sides: 10,
            roll: function() {
                 var randomNumber = Math.floor(Math.random() * this.sides) + 1; // refers to dice10 variable
                 console.log(randomNumber);

            }

}

Object literals are great for one-off objects; if you want to make multiple objects of one type, you need constructor functions:

  • Constructor functions describe how an object should be created
  • Create similar objects
  • Each object created is known as an instance of that object type

Constructor function example and new contact instances (an instance is the specific realization of a particular type or object)

function Contact(name, email) {
    this.name = name;
    this.email = email;
}

var contact = new Contact("Andrew", "andrew@andrew.com");
var contact2 = new Contact("Bob", "bb@andrew.com");

You can create as many objects of the same type as you like, eg. a real-world example (a rough sketch follows the list below):

Media Player

  • Playlist object (initialized by constructor function)
  • Song objects
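A rough sketch of what those constructors might look like – the names and fields are my own, not from the course:

function Song(title, artist, duration) {
  this.title = title;
  this.artist = artist;
  this.duration = duration;
}

function Playlist() {
  this.songs = [];
  this.nowPlayingIndex = 0;
}

// shared behavior lives on the prototype
Playlist.prototype.add = function(song) {
  this.songs.push(song);
};

Playlist.prototype.play = function() {
  var currentSong = this.songs[this.nowPlayingIndex];
  console.log("Playing " + currentSong.title + " by " + currentSong.artist);
};

var playlist = new Playlist();
playlist.add(new Song("Blue Monday", "New Order", 7.5));
playlist.play(); // Playing Blue Monday by New Order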

Jan 2018 Learning

Books Read (work or professional development-related):

Articles

update: how can I be more assertive at work?

  1. “Reframe who had the power in these situations. I don’t need to network with these people, I don’t need these people to recommend me for other jobs because I do not want to work with them again. In a way, by being so upfront about their sexism they were giving me the gift of letting me know immediately not to waste my time, and the people who would be missing out on work is them, because I would never recommend them for jobs in the future and would actively discourage hiring them if I was able. Any short term jobs I miss out on now, are well worth it to form a network of people that are actually respectful that I want to keep working with.”
  2. “Give up on ever having that perfect retort that would wither them to their bones, or beautiful speech that would change their life. Instead, I wrote out some very simple, one line, adjustable scripts for every situation I’d come across.”
    1. “You don’t need to tell me this, it’s making me uncomfortable.” + Silence. If they apologize, say you appreciate it and walk away
  3. Embrace awkward silence if you don’t want to make a scene and just say your one-line script. “Let me be uncomfortable without anyone ever being able to say I was being unprofessional.”

Blameless PostMortems and a Just Culture

  1. ‘Having a Just Culture means that you’re making an effort to balance safety and accountability. It means that by investigating mistakes in a way that focuses on the situational aspects of a failure’s mechanism and the decision-making process of individuals proximate to the failure, an organization can come out safer than it would normally be if it had simply punished the actors involved as a remediation.’
  2. The cycle of name/blame/shame ends up with cover-your-ass engineering in which “Management becomes less aware and informed on how work is being performed day to day, and engineers become less educated on lurking or latent conditions for failure due to silence.”
  3. Understand how failures happened so that they can be learned from, and to temper reactions to failure

How to Build a Successful Team

  1. “I hire the best people and get out of their way” is a nice line, but leaders need to play a hands-on role in making sure the group works well together and stays on the right priorities.
  2. Have a few simple priorities
  3. Simple shared scoreboards that affiliate all the tribes of the team so people aren’t arguing about keeping score. When you make statements in difficult conversations, “don’t make statements that include assumptions about the motivations behind someone’s behavior” and focus only on your feelings, reactions, and observations – you don’t really know what’s going on and being accusatory without evidence can derail progress.

SMART criteria Wiki

  • Specific – target an area for specific improvement. Or strategic and specific
  • Measurable – quantify or at least suggest an indicator of progress. Or motivating
  • Assignable – specify who will do it. Or achievable
  • Realistic – given available resources. Or reasonable, resourced, relevant
  • Timebound – specify when results can be achieved. Or testable, time limited, trackable

Start-up Metrics for Pirates: AARRR! and Slides

  • Acquisition – users who come to site
    • Visit Site, Landing Page, Doesn’t Abandon (stays 10+, visits 2+ pages)
    • Comes in via SEO, SEM, PR, blogs, and other marketing channels
  • Activation – users enjoy 1st visit
    • Views x pages, does z clicks, Signs up for E-mail/Acct/Widget
    • A/B tests critical here
  • Retention – users who come back
    • E-mails Opens/RSS Clickthroughs
    • Weekly e-mails, event-based e-mails, blogs
  • Referral – users who enjoy product enough to refer to others
    • Refers users who visit site, refers users who activate site
    • Campaigns, contests, e-mails
  • Revenue – users conduct some monetization behavior
    • Users generate minimum revenue
    • Users generate break-even revenue
    • Subs, Lead Gen, etc. (a rough funnel-conversion sketch follows this list)
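A rough sketch of how those stages turn into funnel-conversion metrics – the counts are placeholders, not real data:

// Hypothetical weekly counts for each pirate-metric stage
var funnel = {
  acquisition: 5000, // visitors who didn't abandon
  activation: 1500,  // had a good first visit / signed up
  retention: 600,    // came back
  referral: 90,      // referred someone who visited
  revenue: 120       // hit a monetization behavior
};

function conversionRate(fromCount, toCount) {
  return ((toCount / fromCount) * 100).toFixed(1) + "%";
}

console.log("Acquisition -> Activation: " + conversionRate(funnel.acquisition, funnel.activation)); // 30.0%
console.log("Activation -> Retention: " + conversionRate(funnel.activation, funnel.retention)); // 40.0%
console.log("Retention -> Revenue: " + conversionRate(funnel.retention, funnel.revenue)); // 20.0%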

Measuring What Matters: How to Pick a Good Metric

  1. Good metrics are comparative, understandable, a ratio or a rate, and change the way you actually behave (a tiny ratio sketch follows this list)
  2. To start something, you need qualitative input from talking to people because quantitative data can be misinterpreted
  3. Look beyond Reporting Metrics and find Exploratory metrics. Look beyond lagging metrics (that have happened in the past) and look for Leading metrics that might have insight into the future – eg. rising complaints
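For the “ratio or a rate” point, a tiny example of turning raw counts into a comparable metric (the numbers are placeholders):

// Raw counts are hard to compare week over week; a ratio travels better.
var weeklyActiveUsers = 4200;   // placeholder
var monthlyActiveUsers = 12000; // placeholder

var stickiness = weeklyActiveUsers / monthlyActiveUsers; // WAU/MAU "stickiness" ratio
console.log((stickiness * 100).toFixed(1) + "% of monthly users are active in a given week"); // 35.0%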

Continuous Integration

  1. Continuous integration is a software development process where work is integrated frequently – multiple times a day – and automated builds detect integration errors as soon as possible, allowing software to be developed quickly and cohesively with reduced risk
  2. Requires maintaining a single source repo with a decent source code management system
  3. Everything should be included in automated builds, and there should be multiple environments to run tests

A day in the Life of Joe Leech, Product Consultant

  1. There’s not a single solution or framework that’s a magic bullet for clients’ problems. Product managers should be relationship people first and build glue across the organization to do so
  2. “Startups are good at speed, big companies are good at making good decisions,” and startups and enterprise have a lot to learn from each other, he believes. Startups are lean and for them fast actions are crucial, but their people often lack the knowledge or skill to make the right decisions, he believes. “Startup founders are especially hard to work with because the company is their baby. They hate formal processes because it gets on the way of getting going.”
  3. Prioritization as PIE (potential, importance, ease)

pm@olin: Presentations (Class 9)

  1. Tell them what you’re going to say, tell them, and tell them what you said
  2. PRES framework: Present, Reaching Out, Expressive, and Self Knowing
  3. Be relatable to the audience, have good content and humor, and leave the audience wanting more

How to Hire the Right Person

  1. Take them on a tour and see if they treat everyone they meet with respect. Same with taking people out for a meal
  2. Ask unusual questions so you reveal more about a person -> not brain teasers. Find natural strengths
  3. Get them to ask questions and see if they’ve done their research, care about goals and culture

The Secret to Becoming a Better Data Visualization Practitioner

  1. Be able to express all intentions behind design decisions, including citing best practices or evidence from research to show you are using logic over feelings
  2. Document unconscious choices – write down all the micro-decisions that lead you to your current state
  3. Visualize differences in decisions, be ready and practiced in whiteboarding before and after decisions

Treehouse Learning

AJAX Basics Treehouse

  • AJAX form request example and responding to a submit event
    • Select form
    • Add JQuery submit method
    • Stop form from submitting
    • Retrieve value user inputted with JQuery’s val method

$(document).ready(function() {

  $('form').submit(function (event) {

    event.preventDefault(); // stops the browser's normal reaction to the event, eg. prevents leaving the page in this case

    var $searchField = $('#search');
    var $submitButton = $('#submit');

    $searchField.prop("disabled", true); // disable the search field so you can't type new text
    $submitButton.attr("disabled", true).val("searching…"); // while the request is happening, the user sees that a search is underway

    // the AJAX part
    var flickerAPI = "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?";
    var animal = $searchField.val(); // capture the text the user typed in the field
    var flickrOptions = {
      tags: animal,
      format: "json"
    };

    function displayPhotos(data) {
      var photoHTML = '<ul>';
      $.each(data.items, function(i, photo) {
        photoHTML += '<li class="grid-25 tablet-grid-50">';
        photoHTML += '<a href="' + photo.link + '" class="image">';
        photoHTML += '<img src="' + photo.media.m + '"></a></li>';
      }); // end each
      photoHTML += '</ul>';
      $('#photos').html(photoHTML);
      $searchField.prop("disabled", false); // re-enable the search field
      $submitButton.attr("disabled", false).val("Search"); // re-enable the submit button and restore its label
    }

    $.getJSON(flickerAPI, flickrOptions, displayPhotos);

  }); // end submit

}); // end ready

Understanding “this” in JavaScript

  • this is a special keyword to give access to a specific context – access values, methods, and other objects on a context basis
  • JavaScript interpreter assigns a value to this based on where it appears
    • In normal function calls
    • Within methods on object
      • this, when it’s called in a method on an object, will always reference the object itself, and this.anyKey will look up any value that exists on it

var Portland = {
  bridges: 12,
  airport: 1,
  soccerTeams: 1,
  logNumberofBridges: function() {
    console.log("There are " + this.bridges + " bridges in Portland!")
  }
}

Calling Portland.logNumberofBridges() would print There are 12 bridges in Portland!

  • Within an object that has been constructed
  • Invoked with .call, .apply, or .bind (a short sketch at the end of these notes)
  • When we use a constructor function to create a new object, this will actually refer to the object that is created, not the constructor function.
  • Here’s a basic constructor function:

var City = function(name, state) {
  this.name = name || 'Portland';
  this.state = state || 'Oregon';
};

portland = new City();  // new instance of constructor function
seattle = new City('Seattle', 'Washington');

 

console.log(portland); console.log(seattle); results in:

{ name: 'Portland', state: 'Oregon' }
{ name: 'Seattle', state: 'Washington' }

  • The first object is the one created without any parameters, so it defaults to Portland and Oregon.
  • The second one uses Seattle and Washington
  • The keyword this does not correspond to the constructor function (City) but to the instance object itself. This basically allows you to create applications with highly replicable code
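Since the notes mention .call, .apply, and .bind without an example, here is a minimal sketch of how each one picks the value of this (my own example, not from the course):

var greeter = {
  greeting: 'Hello',
  greet: function(name) {
    console.log(this.greeting + ', ' + name + '!');
  }
};

var shouty = { greeting: 'HEY' };

greeter.greet('Ada');                        // "Hello, Ada!" – this is greeter
greeter.greet.call(shouty, 'Ada');           // "HEY, Ada!" – this is shouty, args passed normally
greeter.greet.apply(shouty, ['Grace']);      // "HEY, Grace!" – same, args passed as an array
var boundGreet = greeter.greet.bind(shouty); // bind returns a new function with this fixed
boundGreet('Katherine');                     // "HEY, Katherine!"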