NYC ProductCon 2019 Round-Up Notes

 

My personal notes from Product School’s NYC ProductCon. Overall some really good speakers, but the event was not well organized. If I hadn’t been able to get a free ticket, I would have been displeased. Product School owes it to both the participants and speakers to be more polished. And what conference in 2019 doesn’t have WiFi, or oversells this badly?

 

Nir Eyal’s Talk: Indistractable: How to Control Your Attention and Choose Your Life

  • Escaping distraction is a superpower
  • Psychological escape from discomfort – why people go to apps
  • “Time management is pain management”
  • Indistractable is his latest book
  • Way to manage is to note the sensation you feel and write it down
    • That way you avoid fake time-wasting, e.g. checking e-mail on your phone without actually doing anything
  • Feel curious instead of contempt about your uncomfortable feelings
  • Surf the urge – they crest and subside – ten minute rule do anything you want, either be with sensation or be curious and get into task at hand
  • Know your Intent for work: You can’t call something a distraction unless you know what it distracted you from
  • Schedule your days with timeboxes or someone else will
  • Turn values into time and make time, not just to do lists
  • Get out of low value work, use tech or delegate
  • Less time communicating and more time concentrating
  • Drug round vest as example of an innovation to hack distraction for nurses and avoid medication errors
  • When you see alerts and such, ask yourself, “Is the trigger serving me, or am I serving it?”
  • Clean up your desktop, change notification settings, put a sign on your desk, etc.
  • Forest app and Self control app and other digital tools can help
  • Self-compassion is key above all else – how would you talk to a friend?

Data Analytics for Better Product Decision Making

  • The Mixpanel talk was a sales deck and a missed opportunity
  • “Intuition is key. As PMs we never have the full info, we have to make a judgement call based on data we’re getting.”
  • Key summary points
    • Collect accurate data
    • Identify trends
    • Understand the why
    • Set goals and create hypotheses
    • Engage

Morgan Brown: Product Brief – The Primary Artefact

  • What’s a Product Brief:
    • PRD
    • Spec
    • Product Proposal
  • Product briefs are among the most inconsistent experiences as a PM
  • “Artefact actors can use to identify a clear business goal, the actors involved in achieving it, and the deliverables to achieve that goal.”
  • Challenges
    • Who is it for?
    • Format – slides, confluence, docs, etc.
    • When?
  • Impact Mapping Framework is a recommended book
  • First step
    • Company goals – what are you meant to impact
    • Spotify examples: MAU growth, sub rev, creator livelihood
  • Actors to consider – people who influence goal
    • Teams
    • Entities
      • Artists
      • Local Govts
    • Departments
  • Impact assessment of actors
    • Example: Artist control of catalog – ask yourself what can they do?
  • Prioritization Framework for Stakeholders
    • Reach
    • Impact
    • Cost
    • Effort
    • Social Responsibility
    • Eco Sustainability
  • OKR planning is helped by this framework and by socializing it with your stakeholders
  • Impact mapping workshop first month of quarter
  • Monthly OKR check-ins
  • Need an agenda on the calendar for a concrete feedback loop

 


Jason Nichols: Product Management and AI

  • The key is asking the kind of question you want answered

 


  • Loss function key to AI problems
    • Stick to a specific variable per model
  • Chains of models propagate errors, and those errors compound exponentially
  • Model processing will have business rules; these need to be codified very stringently in post-processing, e.g. if it’s alcohol, don’t sell to a minor
  • Supervised data for good labelled data sets
  • Supervised v unsupervised depending on the problem
  • How will your machine learn?
    • Do you have downtime to learn? How do you handle that when you have a system that runs 24 hours – you can’t really do this in the background from your prod stream if you don’t factor it in
    • How do you build CI/CD and KPIs that block release to prod?
  • How are you measuring KPIs, and what are you trying to solve before building the model?
  • There is no such thing as ground truth
  • Precision, Accuracy, Recall – these are misused all the time. Accuracy is the most misused; people usually mean Recall
    • Walmart running out of chickens problem
  • Recall typically has a business cost -> people leave the store if there are no rotisserie chickens
  • If you’re doing anomaly correction and you don’t know the incidence rate, you can’t build a test environment. You need user research and base-rate sampling
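The precision/accuracy/recall confusion in the notes above can be made concrete with a toy confusion matrix (the counts below are made up for illustration):

```python
def precision_recall_accuracy(tp, fp, fn, tn):
    """Compute the three commonly confused classification metrics
    from confusion-matrix counts."""
    precision = tp / (tp + fp)                   # of flagged cases, how many were right
    recall = tp / (tp + fn)                      # of actual positives, how many we caught
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall fraction correct
    return precision, recall, accuracy

# Hypothetical stock-out detector: 80 true alerts, 20 false alarms,
# 40 missed stock-outs, 860 correct "in stock" calls.
p, r, a = precision_recall_accuracy(tp=80, fp=20, fn=40, tn=860)
print(f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f}")
```

Accuracy looks great (0.94) even though a third of the real stock-outs were missed – recall (0.67) is the number carrying the business cost.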

Nate Franklin: Powering the Next Generation of Products

  • End-to-end (ETE) experiences – Nike as flagship
  • Industries need to optimize for LTV
    • Peloton has 96% retention rate
    • Idea and experience so people come back over and over again
  • Marketing will focus on growth mindset and not only good brand awareness but entire customer experience
  • 3 challenges
    • Key challenge is integrating the entire data ecosystem and having it be high quality; no point in just hoarding arbitrary data
    • Systematic growth and experimentation – don’t just optimize button color for funnels – find the best ideas that create great experiences
    • “Most ideas fail to show value”
    • Contextual awareness is lacking in systems, e.g. a Facebook friendship celebration featuring a house burning down, LinkedIn suggesting your last job. Good: Metromile giving street-sweeping alerts


Andrea Chesleigh: Lessons in Product Leadership

  • Know who you are, what you’ll be flexible about, what you’ll cave into
  • Know enough details to add value but don’t micromanage

Build v Buy Panel

  • Build v Buy is based on culture – e.g. Zynga: if it touches my core product, we build it
  • Buy mentality: any product is an iceberg
  • Vendors: ancillary needs like fraud protection and chat bots
  • Validated hypothesis for a feature – use a vendor for an initial beta who has the use case, then scale or build your own experience using the same concept
  • Big question to ask is does building give competitive advantage and is that your core competency versus time to market
  • A vendor always has upside and downside, e.g. is it gonna break my app? Is the start-up reliable?
  • Idea of keyword v non-keyword lead companies/models: something broad and searchable versus something so specific and obscure that it’s difficult to have a framework to evaluate it – that you need to evaluate with the vendor
  • Does a feature request become a service business for a specific customer or does it become a part of product that adds more value?
  • HackerOne program at Priceline: get external contractors to try to break the site and then form a backlog out of it

Vivek Bedi: Disrupting a 160-Year-Old Company

  • Relationship-based business (visiting ranches in Texas) -> digital experience with low bounce rate, up from 3% of users three years ago
  • Small start-up + core values of 160 year old company
  • Pizza-pie teams (two-pizza rule)
  • Small teams that match the company’s operating model
  • Everything goes to changing the way people work to transform companies
  • Clear roles and responsibilities and bridges across teams
  • Structuring teams in a way a customer would understand
  • Research-Obsessed Mindset – Journey-Based Org – Pizza-pod teams
  • Deep ethnographic research, shadow sessions
  • 50-65 is the age range of users adopting mobile, which upends conventional thinking
  • Understanding generational differences and how they all matter
  • Every company has two types of competitors
    • Incumbents
    • Disrupters
  • How can you show, not just tell, stakeholders – be able to virtually walk them through a customer experience
  • Third culture of combining old company values with start-up culture


 Abigail Hart Gray: A User Guide to Product Design

  • It started with Apple’s clear iMacs
  • Design
    • UX/Experience Designer/Info Architect
    • Visual Design: branded experience – make the experience individual v websites that all seem the same except the logo
    • Content:
    • Research
  • Design Maturity Report from InVision
    • 41% of companies are at the bottom of design maturity – “pretty it up”
    • 21% of companies are second to bottom – “Design as facilitators” – they do participatory design exercises, etc., but they don’t or can’t push back
    • 62% are at the bottom 2-box
  • What does great look like?
    • Why should you care? Revenue chart
  • When you begin to measure things you can design towards that and you can tell good stories
  • Being design driven pays (across six dimensions as a result of HBR study)
  • Analytics really is the What of what people are doing. The Why is different and qualitative
    • Get to the Will – surveys and concept testing at scale to predict what people will do
  • Start with something small to add value so you don’t threaten the status quo immediately – create value where there wasn’t much

 


March-June 2019 Learning

Articles Read 

Engineering Management Philosophies and Why They Matter Even if You are Not a Manager

  1. Internal Team Success, External Team Collaboration, Company-wide Responsibilities & Culture, and Strategic Direction and Impact are key buckets of focus
  2. Do not take things at face value and learn to over-communicate
  3. Being an effective leader is helping the team make decisions rather than making decisions for them

How to choose the right UX metrics for your product

  1. Two-prong approach of the quality of user experience (using HEART framework) and goals of product/project (using Goals-Signals-Metrics)
  2. HEART Framework
    1. Happiness
    2. Engagement
    3. Adoption
    4. Retention
    5. Task Success
  3. Goals-Signals-Metrics layered on top

What is Data Egress? Managing Data Egress to Prevent Sensitive Data Loss

  1. Data egress refers to data leaving a network in transit to an external location. Examples include outbound e-mail messages, cloud uploads, files moved to external storage, copying to a USB drive, FTP/HTTP transfers, etc. Data ingress refers to data outside the network traveling into it.
  2. Egress filtering involves monitoring egress traffic for signs of malicious activity.
  3. Data exfiltration refers to techniques that result in the loss, theft, or exposure of sensitive data, e.g. stealing USB drives, encrypting or modifying data prior to exfiltration, or using services to mask location or traffic.

WTF is deal ID?

  1. A deal identifier is the unique number of an automated ad buy. This is the identifier used to match buyers and sellers individually. It implies a previously agreed-upon set of parameters – narrower criteria for programmatic and private marketplaces.
  2. Deal ID allows publishers to specify terms and the kind of inventory available to different types of advertisers.
  3. Deal ID can be thought of as an automated insertion order – better flexibility while controlling for the parameters of an ad deal.

Responses to Negative Data: Four Senior Leadership Archetypes.

  1. Most senior leaders in an org came up when data wasn’t so accurate and available
  2. You have bubble kings who ignore the data and Attackers on the other end
    1. Deal with Bubbles: form relationships and justify decisions.
    2. Deal with Attackers: get out or provide solutions and not just data
  3. Rationalizers who sow doubt and Curious ones who ask the why
    1. Deal with Rationalizers: need to bring overwhelming analytical competence
    2. Deal with Curious: be joyous and work hard

The Engineering Manager: Working with Product Marketing

  1. Great marketing makes the code you’re writing something people need to have
    1. “A world-class engineer, designer, product manager and product marketer can really change the world.”
  2. Teach them your features, work with them, and let them practice their narrative with your team
  3. Build in feature toggling – batching sets of features as a campaign. Can do targeted or percentage-based rollouts for select feedback as well.
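Percentage-based rollouts of the kind mentioned in point 3 are commonly implemented by hashing a stable user ID; this sketch is illustrative (the function and feature names are made up), not the article's implementation:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket a user into a percentage-based rollout.

    Hashing user_id together with the feature name gives each feature
    an independent, stable 0-99 bucket per user.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# The same user always gets the same answer for a given feature,
# so a 10% rollout can later be widened to 50% without flapping.
print(in_rollout("user-42", "new-checkout", 100))  # True
print(in_rollout("user-42", "new-checkout", 0))    # False
```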

Building Customer Churn Models for Business

  1. “In its simplest form, churn rate is calculated by dividing the number of customer cancellations within a time period by the number of active customers at the start of that period. Very valuable insights can be gathered from this simple analysis — for example, the overall churn rate can provide a benchmark against which to measure the impact of a model. And knowing how churn rate varies by time of the week or month, product line, or customer cohort can help inform simple customer segments for targeting as well.”
  2. Churn can be characterized as
    1. Contractual
      1. Customers buy at intervals or otherwise observed. Eg subscriptions
    2. Non Contractual
      1. Free to buy or not anytime. Churn is not explicit, eg. ecommerce
    3. Voluntary
      1. Customers chose to leave service
    4. Involuntary
      1. Customers forced to discontinue, e.g. failed payments
  3. Good churn models should factor in things like different risk scores, predict probabilities of churn for different use cases, and have metrics that stakeholders will understand and respond to
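The simplest churn calculation quoted in point 1 can be computed directly (the numbers below are hypothetical):

```python
def churn_rate(customers_at_start: int, cancellations: int) -> float:
    """Churn rate: cancellations within a period divided by the number
    of active customers at the start of that period."""
    return cancellations / customers_at_start

# e.g. 50 cancellations out of 1,000 active subscribers -> 5% churn
print(churn_rate(1000, 50))  # 0.05
```

Computing this per product line or cohort, as the quote suggests, is just a matter of grouping the counts before dividing.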

Getting started with AI? Start here!

  1. Write down labels you’ll accept and how you’d know if the answer is right for one of them and what mistakes might look like. It will save you trouble downstream and put you in right paradigm.
  2. Remember: the goal of analytics is to generate inspiration/inform the decision maker.
  3. ML/AI is for projects where the goal is to use data to automate thing-labeling.
    1. Data mining is about maximizing the speed of discovery, while ML/AI is about performance in automation.

Once You’re in the Cloud, How Expensive Is It to Get Out?

  1. Negotiate a good egress rate or account for it just in case
  2. Ingress of course is usually free

How to Build an Amazing Relationship Between Product Management and Marketing

  1. Figure out how to align Product’s metrics/goals with Marketing
    1. User feedback v lead gen
  2. Start early on products/planning across divisions
  3. Have transparent strategic goals and defined roles off the bat

How Product Marketers Want to Work With Product Managers

  1. Show Product Marketers the full plan from the start
  2. Work with the Product Marketer to connect your product, or part of it, to the full system experience
  3. Share data and customer stories, as the perspective from either side is different but important for collaboration and positioning

Second-Order Thinking: What Smart People Use to Outperform

  1. Don’t seize the first available option, no matter how good it seems, before you’ve asked questions and explored. It’s asking and then what?
  2. Think about how others in ecosystem will respond to business decisions, your suppliers, regulators, etc.
  3. Think in terms of ten minutes, ten weeks, ten months, ten years, etc.

Why a High Performing Product Marketing Team Is the Key to Growth

  1. Product marketers should champion the voice of the customer – more akin to a sociologist or psychologist than a product manager or a technologist
  2. Product marketers can perform win/loss analysis, customer profiles, segmentation, buyer personas, etc.
  3. Knows how to bundle features, understands market + category insights vis-à-vis the competitive environment

Why Modeling Churn is Difficult

  1. Churn = CustomersLostDuringPeriod/CustomersAtBeginningOfPeriod
  2. The difficulty is locking in the true rate of churn to account for the differences period to period, e.g. accounting for seasonality.
  3. A stochastic model is one way to approach this problem as it allows for random variation of the inputs on a time basis.

Content Targeting Driving Brand Growth Without Collecting User Data

  1. There are still avenues to create meaningful content targeting strategies using channels and demographics, without relying on user data
  2. Integrated content targeting still improves overall media experiences
  3. It also increases trust and length of time engaged

LetsLearnAI: What Is Feature Engineering for Machine Learning?

  1. “Feature engineering is the process of using domain knowledge of the data to create features that make machine learning algorithms work. If feature engineering is done correctly, it increases the predictive power of machine learning algorithms by creating features from raw data that help facilitate the machine learning process. Feature Engineering is an art.”
  2. Combining two columns like lat and long together into one feature is known as Crossed Column and can help the model learn better.
  3. Bucketized columns are sometimes useful, eg like pooling age ranges, 25-35, etc.
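The bucketized-column idea in point 3 can be sketched in plain Python (the boundaries below are illustrative, mirroring the article's age-range example):

```python
def bucketize(age: int, boundaries=(25, 35, 45, 55)) -> int:
    """Map a raw age onto a bucket index, e.g. 25-34 -> bucket 1,
    so the model sees a small set of categories instead of raw values."""
    for i, boundary in enumerate(boundaries):
        if age < boundary:
            return i
    return len(boundaries)

print([bucketize(a) for a in (18, 30, 40, 70)])  # [0, 1, 2, 4]
```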

What Is Regularization In Machine Learning?

  1. Regularization is used to solve the problem of overfitting in machine learning models – that is, when a model learns so much from the noise in the training data that its performance on new data suffers.
  2. There are two types of Regularization
    1. L1: Lasso regularization: Adds a penalty to the error function. The penalty is the sum of the absolute values of weights.
    2. L2: Ridge regularization: Adds penalty using the sum of squared values of the weights
  3. Generally, good models do not give more weight to a particular feature – the weights are evenly distributed using regularization to solve for overfitting.
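The two penalty terms in point 2 differ only in how they aggregate the weights; a minimal sketch (the lambda value and weights are arbitrary):

```python
def l1_penalty(weights, lam):
    """Lasso: lambda times the sum of absolute weight values."""
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam):
    """Ridge: lambda times the sum of squared weight values."""
    return lam * sum(w * w for w in weights)

weights = [0.5, -2.0, 1.5]
print(round(l1_penalty(weights, 0.1), 2))  # 0.1 * (0.5 + 2.0 + 1.5) = 0.4
print(round(l2_penalty(weights, 0.1), 2))  # 0.1 * (0.25 + 4.0 + 2.25) = 0.65
```

Either penalty is added to the error function, so large weights cost the model and the learned weights stay more evenly distributed.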

What Are L1 and L2 Loss Functions? L1 vs L2 Loss Function

  1. The L1 Loss Function minimizes the error in ML models by using the sum of all absolute differences between true and predicted values. Called LAD or Least Absolute Deviations
  2. The L2 Loss Function, Least Square Errors or LS, minimizes the error using the sum of all squared differences between predicted and true values
  3. L2 is generally preferred but does not work well if the data set has outliers, because the squared differences lead to much larger errors.
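The two loss functions, and the outlier sensitivity in point 3, in a minimal sketch (the values are made up):

```python
def l1_loss(y_true, y_pred):
    """Least Absolute Deviations: sum of absolute differences."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred))

def l2_loss(y_true, y_pred):
    """Least Square Errors: sum of squared differences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
print(l1_loss(y_true, y_pred))  # 0.5 + 0.0 + 2.0 = 2.5
print(l2_loss(y_true, y_pred))  # 0.25 + 0.0 + 4.0 = 4.25

# One outlier error of 10 adds 10 to L1 but 100 to L2 - squaring is
# what makes L2 blow up on outlier-heavy data.
```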

 

Learning to explain Gains/Lifts better as an outcome of models from Machine Learning

Cumulative Gains and Lift Charts

  1. A Cumulative Gains chart shows the percentage of the overall number of cases in a given category “gained” by targeting a percentage of the total number of cases.
    1. For each point on the curve, the x-axis is the percentage of total cases targeted and the y-axis is the percentage of the category “gained”
    2. The diagonal line is the baseline, e.g. if you select 20% of cases from the scored dataset at random, you would expect to gain 20% of all cases in the category
    3. What makes a desirable gain depends on the cost of errors, e.g. Type I and Type II errors, as you move up
  2. Lift Chart is derived from cumulative gains chart
    1. Values on the y-axis correspond to the ratio of the cumulative gain for the curve to the baseline
    2. It’s another way of looking at Gains Chart
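The gains and lift definitions above can be sketched on a toy scored dataset (the scores and labels are made up):

```python
def cumulative_gain(scores, labels, fraction):
    """Fraction of all positives captured when targeting the top
    `fraction` of cases ranked by model score."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    top_k = ranked[:int(len(ranked) * fraction)]
    return sum(label for _, label in top_k) / sum(labels)

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]   # 4 positives in total

gain = cumulative_gain(scores, labels, 0.25)
print(gain)         # top 25% of cases capture 2/4 positives = 0.5
print(gain / 0.25)  # lift = gain / baseline = 2.0 (vs. 1.0 for random)
```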

 

Treehouse

Introduction to Algorithms

  • O(1): Constant: takes constant time regardless of n; doesn’t change. Ideal because input size doesn’t matter
  • O(log n): Logarithmic (sometimes called sublinear) runtime: as n grows large, the number of operations grows slowly and flattens out
  • O(n) Linear Time: e.g. reading every item on a list
  • O(n^2) Quadratic Time: for any given value of n, we carry out n^2 operations
  • Cubic Runtimes O(n^3): n^3 operations
  • Quasilinear Runtimes: O(n log n)
    • For every value of n, we execute log n operations – n times log n
    • Lies between a linear runtime and quadratic runtime
    • Sorting algorithms are where you see this
    • Merge sort is an example with a quasilinear run time
  • Polynomial runtime O(n^k) – the run time for a given value of n is n raised to the power k
    • Anything bounded by this is considered to have a polynomial runtime or be efficient
  • Exponential runtime O(x^n) algorithms are too expensive to be used, e.g. brute-force algorithms – analogous to manually testing each combination on a lock to break it: a three-digit lock has 1,000 values; four digits is 10,000
    • Traveling Salesman analogy (testing every route) is factorial, O(n!)
    • Knowing off the bat that a problem is somewhat unsolvable in a realistic time means you can focus your efforts on other aspects of the problem.
  • Worst Case Complexity
    • When evaluating the run time of an algorithm, we say that the algorithm has, as its upper bound, the same run time as its least efficient step
    • E.g. the run time of binary search in the worst case is O(log n), or logarithmic

linear_search.py

def linear_search(list, target):
    """
    Returns the index position of the target if found, else returns None
    """
    for i in range(0, len(list)):
        if list[i] == target:
            return i
    return None

def verify(index):
    if index is not None:
        print("Target found at index: ", index)
    else:
        print("Target not found in list")

numbers = [1, 2, 3, 4, 5, 6, 7, 8]
result = linear_search(numbers, 12)
verify(result)


“Target not found in list”

 

In the worst case, this loop has to go through the entire range of values and read every element in the list. This gives a Big O value of O(n), running in linear time.

Binary Search

def binary_search(list, target):

  first = 0
  last = len(list) - 1

  while first <= last:
    midpoint = (first + last)//2

    if list[midpoint] == target:
      return midpoint
    elif list[midpoint] < target:
      first = midpoint + 1 # target is after the midpoint
    else:
      last = midpoint - 1 # target is before the midpoint

  return None

recursive_binary_search.py

def recursive_binary_search(list, target):
    if len(list) == 0:
      return False
    else:
      midpoint = (len(list))//2

      if list[midpoint] == target:
        return True
      else:
        if list[midpoint] < target:
          return recursive_binary_search(list[midpoint+1:], target)#new list using slice operation
        else:
          return recursive_binary_search(list[:midpoint], target)

def verify(result):
  print("Target found: ", result)

numbers = [1, 2, 3, 4, 5, 6, 7, 8]
result = recursive_binary_search(numbers, 12)
verify(result)

result = recursive_binary_search(numbers,6)
verify(result)

 

Recursive Functions

  • A recursive function is one that calls itself
  • When writing a recursive function, you always need a stopping condition, often called the base case
    • E.g. the empty list in the example above, or finding the midpoint
  • The number of times a recursive function calls itself is called Recursive Depth
  • An iterative solution is generally implemented using a loop of some kind, whereas a recursive solution involves a set of stopping conditions and a function that calls itself
  • In functional languages, we avoid changing data that is given to a function
    • Python, on the flip side, prefers iterative solutions and has a Maximum Recursion Depth (how many times a function can call itself)
  • Space Complexity
    • Space Complexity is a measure of how much more working storage or extra storage is needed as an algorithm grows
    • For example, recursive binary search runs in O(log n) space in Python
    • Tail optimization, in some programming languages such as Swift, reduces the space and computing overhead of recursive functions when the recursive call is the last line of code in the function. Python does not implement tail optimization, so the iterative version is the safer, more optimal choice

 

Reporting with SQL (Review time!)

  • Ordering
    • SELECT <columns> FROM <table> ORDER BY <column>;

SELECT * FROM customers ORDER BY last_name ASC, first_name ASC;

  • Limiting
    • SELECT * FROM <table> LIMIT <# of rows>;
    • SELECT * FROM campaigns ORDER BY sales DESC LIMIT 3;
  • Offset
    • The offset keyword is used with SELECT and ORDER BY to provide a range to select records
    • SELECT * FROM <table> LIMIT <# of rows> OFFSET <skipped rows>;
    • SELECT * FROM orders LIMIT 50 OFFSET 100;
  • Manipulating Text
  • Aggregation
  • Date times
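The ordering, limiting, and aggregation clauses above can be exercised against an in-memory SQLite database via Python's built-in sqlite3 (the table and rows are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("ann", 30.0), ("bob", 20.0), ("ann", 50.0), ("cal", 10.0)])

# Aggregate sales per customer, top spender first, limited to two rows.
rows = conn.execute("""
    SELECT customer, SUM(total) AS sales
    FROM orders
    GROUP BY customer
    ORDER BY sales DESC
    LIMIT 2
""").fetchall()
print(rows)  # [('ann', 80.0), ('bob', 20.0)]
```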

 

Jan-Feb Learning 2019

What’s This?

I’m trying to give myself at least half an hour during the workdays (or at least blocking two hours or so a week) to learn something new – namely taking classes/reviewing what I know on Treehouse, reading job-related articles, and reading career-related books. I'm tracking notables here on a monthly basis as a self-commitment, to retain things in memory, and as a reference. I held off posting this for the last six months as work and life have been insanely busy and my notes inconsistent across proprietary work versus my own, but it's worth a round-up here. Posting with good intentions for next year. Reminding myself that if I don’t capture every bit I did, it’s alright. Just keep yourself accountable.

Books Read

Inspired: How To Create Products Customers Love

Some favorite quotes from Kindle highlights:

  • This means constantly creating new value for their customers and for their business. Not just tweaking and optimizing existing products (referred to as value capture) but, rather, developing each product to reach its full potential. Yet, many large, enterprise companies have already embarked on a slow death spiral. They become all about leveraging the value and the brand that was created many years or even decades earlier. The death of an enterprise company rarely happens overnight, and a large company can stay afloat for many years. But, make no mistake about it, the organization is sinking, and the end state is all but certain.
  • The little secret in product is that engineers are typically the best single source of innovation; yet, they are not even invited to the party in this process.
  • To summarize, these are the four critical contributions you need to bring to your team: deep knowledge (1) of your customer, (2) of the data, (3) of your business and its stakeholders, and (4) of your market and industry.
  • In the products that succeed, there is always someone like Jane, behind the scenes, working to get over each and every one of the objections, whether they’re technical, business, or anything else. Jane led the product discovery work and wrote the first spec for AdWords. Then she worked side by side with the engineers to build and launch the product, which was hugely successful.
  • Four key competencies: (1) team development, (2) product vision, (3) execution, and (4) product culture.
  • It’s usually easy to see when a company has not paid attention to the architecture when they assemble their teams—it shows up a few different ways. First, the teams feel like they are constantly fighting the architecture. Second, interdependencies between teams seem disproportionate. Third, and really because of the first two, things move slowly, and teams don’t feel very empowered.
  • I strongly prefer to provide the product team with a set of business objectives—with measurable goals—and then the team makes the calls as to what are the best ways to achieve those goals. It’s part of the larger trend in product to focus on outcome and not output.

In my experience working with companies, only a few companies are strong at both innovation and execution. Many are good at execution but weak at innovation; some are strong at innovation and just okay at execution; and a depressing number of companies are poor at both innovation and execution (usually older companies that lost their product mojo a long time ago, but still have a strong brand and customer base to lean on).

 

Articles Read

Machine learning – is the emperor wearing clothes?

  1. “The purpose of a machine learning algorithm is to pick the most sensible place to put a fence in your data.”
  2. Different algorithms, eg. vector classifier, decision tree, neural network, use different kinds of fences
  3. Neural networks give you a very flexible boundary which is why they’re so hot now

Some Key Machine Learning Definitions

  1. “A model is overfitting if it fits the training data too well and there is a poor generalization of new data.”
  2. Regularization is used to estimate a preferred complexity of a machine learning model so that it generalizes, avoiding overfitting and underfitting, by adding a penalty on different parameters of the model – but this reduces the model’s freedom
  3. “Hyperparameters cannot be estimated from the training data. Hyperparameters of a model are set and tuned depending on a combination of some heuristics and the experience and domain knowledge of the data scientist.”

Audiences-Based Planning versus Index-Based Planning

  • Index is the relative composition of a target audience for a specific program or network compared to the average-size audience in the TV universe. It gives marketers/agencies a gauge of the value of a program or network relative to others, using the relative concentrations of a specific target audience
  • Audience-based buying does not account for the relative composition of an audience or the context within which the audience is likely to be found; rather, it values the raw number of individuals in a target audience who watch a given program, their likelihood of being exposed to an ad, and the cost of reaching them with a particular spot. Really it’s buying audiences versus buying a particular program
  • Index-based campaigns follow the TV planning model: maximum number of impressions for a given audience at minimum price -> buy high-indexing media against a traditional age/demo; note this doesn’t include precision index targeting
  • A huge issue is that TV is insanely fragmented, so even if campaigns are hitting GRP targets, they’re doing so by increasing frequency rather than total reach
  • Note: GRP is a measure of the size of an ad campaign by medium or schedule – not the size of the audience reached. GRPs quantify impressions as a percentage of the target population, and this percentage may thus be greater than 100. It measures impressions in relation to the number of people and is the metric used to compare the strength of components in a media plan. There are several ways to calculate GRPs, e.g. GRP % = 100 * Reach % * Avg Freq, or simply TV ratings: a spot with a rating of 4 placed on 5 episodes = 20 GRPs
  • Index-based planning is about impressions delivered over a balance of reach and frequency. Audience-based is about reaching likely customers for results
  • DSPs, etc. should use optimized algorithms to assign users a probability of being exposed to a spot, maximizing the probability of reaching a specific target audience
  • Audience-based planning is about maximizing reach in the most efficient way possible, whereas index-based buying values audience composition ratios
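Both GRP formulas from the note above, worked through in a quick sketch (the campaign numbers are hypothetical):

```python
def grps_from_reach(reach, avg_freq):
    """GRP % = 100 * reach (as a fraction of the target) * average frequency."""
    return 100 * reach * avg_freq

def grps_from_ratings(rating, spots):
    """Alternatively: a spot's rating times the number of placements."""
    return rating * spots

print(grps_from_reach(0.5, 2.5))  # 50% reached 2.5 times on average -> 125.0 GRPs
print(grps_from_ratings(4, 5))    # rating of 4 on 5 episodes -> 20 GRPs
```

The first result going past 100 shows why GRPs measure impressions relative to the target population, not the count of unique people reached.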

Finding the metrics that matter for your product

  1. “Where most startups trip up is they don’t know how to ask the right questions before they start measuring.”
  2. Heart Framework Questions:
    • If we imagine an ideal customer who is getting value from our product, what actions are they taking?
    • What are the individual steps a user needs to take in our product in order to achieve a goal?
    • Is this feature designed to solve a problem that all our users have, or just a subset of our users?
  3. Key points in a customer journey:
    1. Intent to use: The action or actions customers take that tell us definitively they intend to use the product or feature.
    2. Activation: The point at which a customer first derives real value from the product or feature.
    3. Engagement: The extent to which a customer continues to gain value from the product or feature (how much, how often, over how long a period of time etc).

A Beginners Guide to Finding the Product Metrics That Matter

  1. It’s actually hard to find the metrics that matter, and there’s a trap of picking too many indicators
  2. Understand where your metrics fall under, eg. the HEART framework: Happiness, Engagement, Adoption, Retention, Task Success
  3. Don’t measure everything you can and don’t fall into the vanity metrics trap. Instead, examples of good customer-oriented metrics:
    • Customer retention
    • Net promoter score
    • Churn rate
    • Conversions
    • Product usage
    • Key user actions per session
    • Feature usage
    • Customer Acquisition costs
    • Monthly Recurring Revenue
    • Customer Lifetime Value

Algorithms and Data Structures

Intro to Algorithms

  • An algorithm is the set of steps a program takes to complete a task – the key skill to develop is being able to identify which algorithm or data structure is best for the task at hand
  • Algorithm:
    • Clearly defined problem statement, input, and output
    • Distinct steps that need to be in a specific order
    • Should produce a consistent result
    • Should finish in a finite amount of time
  • Evaluating Linear and Binary Search Example
  • Correctness
    • 1) in every run against all possible values in the input data, we always get the output we expect
    • 2) algorithm should always terminate
  • Efficiency:
  • Time Complexity: how long it takes
  • Space Complexity: amount of memory taken on computer
  • Best case, Average case, Worst case
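The linear vs. binary search evaluation above, sketched in Python with a step counter to make the efficiency difference concrete:

```python
def linear_search(items, target):
    """O(n): check every element until found."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halve the search space each step (requires sorted input)."""
    low, high = 0, len(sorted_items) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid, steps
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

data = list(range(1000))
print(binary_search(data, 999))  # (999, 10) -- about log2(1000) + 1 ≈ 10 tries
```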

Efficiency of an Algorithm

  • Worst case scenario/Order of Growth used to evaluate
  • Big O: theoretical definition of the complexity of an algorithm as a function of input size, eg. O(n) – the order of magnitude of complexity
  • Logarithmic pattern: in general, for a given value of n, the number of tries it takes to find a value in the worst case is log n + 1, ie. O(log n)
  • Logarithmic or sublinear runtimes are preferred to linear because they are more efficient

Google Machine Learning Crash Course

Reducing Loss

  • Update model parameters by computing the gradient – the negative gradient tells us how to adjust model parameters to reduce loss
  • Gradient: derivative of loss with respect to weights and biases
  • Taking small steps in the direction of the negative gradient to reduce loss is known as gradient descent
  • Neural nets: strong dependency on initial values
  • Stochastic Gradient Descent: one example at a time
  • Mini-Batch Gradient Descent: batches of 10-1000 – losses and gradients averaged over the batch
  • Machine learning model gets trained with an initial guess for weights and bias and iteratively adjusting those guesses until weights and bias have the lowest possible loss
  • Convergence refers to a state reached during training in which training loss and validation loss change very little or not at all with each additional iteration – further training on the current data set will not improve the model at this point
  • For regression problems, the resulting plot of loss vs. w1 will always be convex
  • Calculating loss for every conceivable value of w1 over the entire data set would be an inefficient way of finding the convergence point – gradient descent lets us find the minimum iteratively instead
  • The first step is to pick a starting value for w1. The starting point doesn’t matter much, so many algorithms just use 0 or a random value.
  • The gradient descent algorithm calculates the gradient of the loss curve at the starting point as the vector of partial derivatives with respect to weights. Note that a gradient is a vector so it has both a direction and magnitude
  • The gradient always points in the direction of the steepest increase in the loss function and the gradient descent algorithm takes a step in the direction of the negative gradient in order to reduce loss as quickly as possible
  • The gradient descent algorithm adds some fraction of the gradient’s magnitude to the starting point and repeats this process to step closer to the minimum
  • Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (or step size)
    • eg. If the gradient magnitude is 2.5 and the learning rate is .01, the gradient descent algorithm will pick the next point .025 away from the previous point
  • Hyperparameters are the knobs that programmers tweak in machine learning algorithms. You want to pick a goldilocks learning rate: too small and training will take too long; too large and the next point will bounce haphazardly and could overshoot the minimum
  • Batch is the total number of examples you use to calculate the gradient in a single iteration of gradient descent
  • Redundancy becomes more likely as the batch size grows, and there are diminishing returns after a while in smoothing out noisy gradients
  • Stochastic gradient descent uses a batch size of 1 – a single example. With enough iterations this can work, but it is super noisy
  • Mini-batch stochastic gradient descent: a compromise between full-batch and SGD, usually between 10 and 1000 examples chosen at random – reduces the noise more than SGD but is more efficient than full-batch
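The iterative loop described above, sketched for a one-feature linear model (full-batch gradients, squared loss; the data is made up):

```python
def gradient_descent(xs, ys, learning_rate=0.05, epochs=2000):
    """Minimal full-batch gradient descent for y = w*x + b with squared loss.
    Each step moves the parameters in the direction of the negative gradient."""
    w, b = 0.0, 0.0  # the starting point doesn't matter much; 0 is a common choice
    n = len(xs)
    for _ in range(epochs):
        # partial derivatives of mean squared loss with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= learning_rate * grad_w  # step against the gradient
        b -= learning_rate * grad_b
    return w, b

# Recover w=2, b=1 from noiseless data
xs = [0, 1, 2, 3, 4]
ys = [2 * x + 1 for x in xs]
w, b = gradient_descent(xs, ys)
print(round(w, 2), round(b, 2))  # converges to roughly 2.0 1.0
```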

First Steps with Tensorflow

  • Tensorflow is a framework for building ML models. TF provides toolkits that let you construct models at your preferred layer of abstraction.
  • The Estimator class encapsulates the logic that builds a TF graph and runs a TF session. A graph in TF is a computation specification – nodes in the graph represent operations, and edges are directed and represent passing the result of an operation (a Tensor) as an operand to another operation

Jul-Dec Learning

What’s This?

I’m trying to give myself at least half an hour during workdays (or at least blocking two hours or so a week) to learn something new – namely taking classes and reviewing what I know on Treehouse, reading job-related articles, and reading career-related books. I’m tracking notables here on a monthly basis as a self-commitment, to retain things in memory, and as a reference. I fell off posting this the last six months: work and life have been insanely busy and my notes inconsistent across proprietary work versus my own, but a round-up here is worth it. Posting with good intentions for next year. Reminding myself that if I don’t capture every bit I did, it’s alright. Just keep yourself accountable.

Books Read:

So Good They Can’t Ignore You

Key Points:

  • It’s not about passion – it’s about gaining career capital so you have more agency over a career you want.
  • Control traps: 1) you don’t have enough career capital to do what you want 2) employers don’t want you to change/advance/slow down because you have skills valuable to them
  • Good jobs have autonomy, financial viability, and mission – you can’t get there on passion alone.
  • Figure out if the market you wish to succeed in is winner-take-all (one killer skill, eg. screenwriting is all about just getting a script read) or auction-based (a diverse collection of skills, eg. running a complex business).
  • Make many little bets and try different things that give instant feedback to see what is working or not and show what you’re doing in a venue that will get you noticed.
  • On Learning
    • Research-bible routine – summarize what you might work on: a description of the result and the strategies used to achieve it.
    • Hour-tally and strain – just work on something for an hour and keep track of it
    • Theory-Notebook – brainstorm notebook that you deliberately keep track of info in
    • Carve out time to research and independent projects
  • “Working right trumps finding the right work” p228
  • Good visual summary

The Manager’s Path

Key Points:

  • “Your manager should be the person who shows you the larger picture of how your work fits into the team’s goals, and helps you feel a sense of purpose in the day-to-day work”
  • “Developing a sense of ownership and authority for your work and not relying on your manager to set the tone”
  • “Especially as you become more senior, remember that your manager expects you to bring solutions, not problems”
  • “Strong engineering managers can identify the shortest path through the systems to implement new features”
  • Dedicate 20% of time in planning meetings to sustainability work
  • “Be careful that locally negative people don’t stay in that mindset on your team for long. The kind of toxic drama that is created by these energy vampires is hard for even the best manager to combat. The best defense is a good offense in this case”
    • You are not their parent – treat them as adults and don’t get emotionally invested in every disagreement they have with you personally.

Articles:

What is a predicate pushdown? In mapreduce

  1. The concept is that if you issue a query to run in one place against data stored elsewhere, you’d spawn a lot of network traffic, making that query slow and costly. However, if you push down parts of the query to where the data is stored and thus filter out most of the data, you reduce network traffic.
  2. You filter conditions as True or False – predicates, and pushdown query to where the data resides
  3. For example, you don’t need to pass every single column through every MapReduce job in the pipeline for no reason, so you filter early and avoid reading the other columns

What is a predicate pushdown?

  1. The basic idea is to push certain parts of SQL queries (the predicates) to where the data lives to optimize the query by filtering out data earlier rather than later so it skips reading entire files or chunks of files to reduce network traffic/processing time
  2. This is usually done with a function that returns a boolean in the where clause to filter out data
  3. Eg. the example below with the where clause “WHERE a.country = ‘Argentina’”
SELECT
  a.*
FROM
  table1 a
JOIN 
  table2 b ON a.id = b.id
WHERE
  a.country = 'Argentina';

The Leaders Calendar

  1. 6 hours a day of non-work time, half with family and some downtime with hobbies
  2. Setting norms and expectations with e-mail is essential. For example, a CEO sending e-mails late at night sets the wrong example for the company, and a CEO’s time is often spent cc’d on endless irrelevant items.
  3. Be agenda-driven to optimize limited time, and don’t let only the loudest voices stand out, so that the important work gets done – work on strategy, not just the work that appears the most urgent.
    1. A key way to do this is to limit routine activities that can be given to a direct report

What People Don’t Tell You About Product Management

  1. “Product Management is a great job if you like being involved in all aspects of a product — but it’s not a great job if you want to feel like a CEO.”
    1. You don’t necessarily make the strategy, have the resources, or have the ability to fire people. Your job is to get it done by being resourceful and convincing.
  2. Product Managers should channel the needs of the customer and follow a product through conception, dev, launch, and beyond. Be a cross-functional leader coordinating between R&D, Sales, Marketing, Manufacturing, and Operations. Leadership and coordination are key. Your job is to make strategy happen and to convince the people you work with.
  3. “For me, product management is exciting and stressful for the same reason: there’s unpredictability, there’s opportunity to create something new (which also means it may be unproven), and you’re usually operating with less data than you’d like, and everything is always a little bit broken.”

Web Architecture 101

  1. In web dev you almost always want to scale horizontally, meaning you add more machines into your pool of resources, versus vertically, meaning adding more power (eg. CPU, RAM) to an existing machine. This redundancy gives you a fallback so your applications keep running if a server goes down and makes your app more fault tolerant. You can also minimally couple different parts of the app backend to run on different servers.
  2. Job queues store lists of jobs that need to be run asynchronously – eg Google does not search the entire internet every time you do a search, it crawls the web asynchronously and updates search indexes along the way
  3. Typical data pipeline: a firehose that provides a streaming interface to ingest and process data (eg. Kinesis and Kafka) -> raw data as well as final transformed/augmented data saved to cloud storage (eg. S3) -> data loaded into a data warehouse for analysis (eg. Redshift)

Running in Circles – Why Agile Isn’t Working and What We Do Differently

  1. “People in our industry think they stopped doing waterfall and switched to agile. In reality they just switched to high-frequency waterfall.”
  2. “Only management can protect attention. Telling the team to focus only works if the business is backing them up.”
  3. Think of software development as going uphill when you’re finding out the complexity/uncertainty and then downhill when you have certainty.

Product Managers – You Are Not the CEO of Anything

  1. Too many product managers think their role is that of an authoritarian CEO (with no power) and often disastrous because they think they have all the answers.
  2. You gain credibility through your actions and leadership skills.
  3. “Product management is a team sport after all, and the best teams don’t have bosses – they have coaches who ensure all the skills and experiences needed are present on the team, that everyone is in the right place, knows where the goal is, and then gets out of the way and lets the team do what they do best in order to reach that goal.”

Product Prioritization: How Do You Decide What Belongs in Your Product?

  1. Radical vision with this mad libs template: Today, when [customer segment] want to [desirable activity/outcome], they have to [current solution]. This is unacceptable, because [shortcomings of current solutions]. We envision a world where [shortcomings resolved]. We are bringing this world about through [basic technology/approach].
  2. Four components to good product strategy
    1. Real Pain Points means “Who is it for?” and “What is their pain point?”
    2. Design refers to “What key features would you design into the product?” and “How would you describe your brand and voice?”
    3. Capabilities tackles “Why should we be the ones doing it?” and “What is our unique capability?”
    4. Logistics is the economics and channels, like “What’s our pricing strategy?” and “What’s the medium through which we deliver this?”
  3. Then prioritize based on sustainability and good fit

To Drive Business Success Implement a Data Catalog and Data Inventory

  • Companies have a huge gap between simply knowing where their data is located and knowing what to do with it
  • Three types of metadata
    • Business Metadata: gives us the meaning of the data you have in a particular set
    • Technical Metadata: Provide information on the format and structure of data – databases, programming envs, data modeling tools natively available
    • Operational Metadata: Audit trail of information of where the data came from, who created it, etc.
  • “Unfortunately, according to Reeve, new open source technologies, most importantly Hadoop, Hive, and other open source technologies do not have inherent capabilities to handle, Business, Technical AND Operational Metadata requirements. Firms cannot afford this lack as they confront a variety of technologies for Big Data storage, noted Reeve. It makes it difficult for Data Managers to know where the data lives.” http://www.dataversity.net/drive-business-success-implement-data-catalog-data-inventory/

Why You Can’t be Data Driven Without a Data Catalog

  1. A lot of data availability in organizations is “tribal knowledge,” which severely limits the impact data has in an organization. Data catalogs should capture tribal knowledge
  2. Data catalogs need to work to have common definitions of important concepts like customer, product, and revenue, especially since different divisions actually will think of those concepts differently.
  3. A solution one company used was a Looker-powered integrated model with a GitBook data dictionary.

What is a data catalog?

  1. At its core, a data catalog centralizes metadata. “The difference between a data catalog and a data inventory is that a data catalog curates the metadata based on usage.”
  2. Different types of data catalog users fall into three buckets
    1. Data Consumers – data and business analysts
    2. Data Creators – data architects and database engineers
    3. Data Curators – data stewards and data governors
  3. A good data catalog must
    1. Centralize all info on data in one place – structure, quality, definitions, and usages
    2. Allow users to self-service
    3. Auto-populate consistently and with accuracy

Why You Need a Data Catalogue and How to Select One

  1. “A good data catalog serves as a searchable business glossary of data sources and common data definitions gathered from automated data discovery, classification, and cross-data source entity mapping. Automated data catalog population is done via analyzing data values and using complex algorithms to automatically tag data, or by scanning jobs or APIs that collect metadata from tables, views, and stored procedures.”
  2. Should foster search and reuse of existing data in BI tools
  3. Should be almost an open platform that many people can use to see what they can do with the data

10 Tips to Build a Successful Data Catalog

  1. Who – understand the owner or trusted steward for asset
  2. What – aim for a basic description of an asset as a minimum: business terminology, report functionality, and the basic purpose of a dataset
  3. Where – where the underlying assets are

The Data Catalog – A Critical Component to Big Data Success

  1. Most data lakes do not have effective metadata management capabilities, which makes using them inefficient
    1. Need data access security solutions (role and asset), audit trails of update and access, and inventory of assets (technical and business metadata)
  2. First step is to inventory existing data and make it usable at a data store level – table, file, database, schema, server, or directory
  3. Figure out how to ingest new data in a structured manner, eg. a data scientist wants to incorporate new data into modeling

Data Catalog for the Enterprise – What, Why, & What to look for?

  1. With the growth of enormous data lakes, data sets need to be discovered, tagged, and annotated
  2. Data catalogs can also eliminate database duplicity
  3. Challenges of implementing data catalogs include educating the org on the value of a single source of data and dealing with tribalism

Bridging the gap: How and why product management differs from company to company

  1. NYC vs SF product management disciplines differ due to key ecosystem factors: NYC is driven by tech enhancing existing industries and is thus sales driven, while the Bay Area creates entire new categories and is vision and collaboration driven. NYC has more stable exits but fewer huge ones
  2. This dichotomy in product management approaches is due to how to bring value to different markets
  3. Successful product managers need six key ingredients
    1. Strategic Thinking
    2. Technical proficiency
    3. Collaboration
    4. Communication
    5. Detail Orientation
    6. User Science

Treehouse Learning

Rest API Basics

  • REST (Representational State Transfer) is really just another layer of meaning on top of HTTP
  • API provides a programmatic interface, a code UI basically, to that same logic and data model. Communication is done through HTTP, and the burden of creating the interface is on the users of the API, not the creator
  • Easy way to say it – APIs provide code that makes things outside of our application easier to interact with inside of our application
  • Resources: usually a model in your application -> these are retrieved, created, or modified in the API at endpoints representing collections of records
    • api/v1/players/567
  • Client request types to API:
    • GET is used for fetching either a collection of resources or a single resource.
    • POST is used to add a new resource to a collection, eg. POST to /games to create a new game
    • PUT is HTTP method we use when we want to update a record
    • DELETE is used for deleting a single record – you send a DELETE request to a detail URL, the URL for that single record
  • Requests
    • We can use different aspects of the requests to change the format of our response, the version of the API, and more.
    • Headers make transactions more clear and explicit, eg. Accept (specifies file format requester wants), Accept-Language, Cache-Control
  • Responses
    • Content-Type: text/javascript -> header to specify what content we’re sending
    • Other headers include Last-Modified, Expires, and the status code (eg. 200, 500, 404)
    • 200-299 content is good and everything is ok
    • 300-399 request was understood but the requested resource is now located somewhere else. Use these status codes to perform redirects to URLs most of the time
    • 400-499 Error codes, eg wrongly constructed or 404 resource no longer exists
    • 500-599 Server End errors
  • Security
    • Cache: usually a service running in memory that holds recently needed results, such as a newly created record or a large data set. This helps prevent direct database calls or costly calculations on your data.
    • Rate Limiting: allowing each user only a certain number of requests to our API in a given period to prevent too many requests or DDOS attacks
    • A common authentication method is the use of API tokens – you give your users a token and secret key as a pair, and they use those when they make requests to your server so you know they are who they say they are.
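The rate-limiting idea above can be sketched as a toy in-memory sliding-window limiter (the token string and limits are made up; this is not how any particular API gateway implements it):

```python
import time

class RateLimiter:
    """Toy sliding-window rate limiter: at most `limit` requests per
    `window_seconds` for each API token."""
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.history = {}  # token -> timestamps of recent requests

    def allow(self, token, now=None):
        now = time.monotonic() if now is None else now
        # keep only the requests still inside the window
        recent = [t for t in self.history.get(token, []) if now - t < self.window]
        if len(recent) >= self.limit:
            self.history[token] = recent
            return False  # the server would answer 429 Too Many Requests
        recent.append(now)
        self.history[token] = recent
        return True

limiter = RateLimiter(limit=3, window_seconds=60)
print([limiter.allow("user-token", now=i) for i in range(5)])
# [True, True, True, False, False]
```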

Planning and Managing the Database

  • Data Definition Language – the language that’s used to define the structure of a database

When Object Literals Are Not Enough

  • Use classes instead of object literals to not repeat so much code over and over again
  • Class is a specification, a blueprint for an object that you provide with a base set of properties and methods
  • In a constructor method, this refers to the object that is being created, which is why it’s the keyword here.
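These notes come from a JavaScript course, but the same idea can be sketched in Python (a dict standing in for an object literal; the names are made up):

```python
# Repeating an object literal (a dict here) for every player duplicates structure:
player1 = {"name": "Ada", "score": 0}
player2 = {"name": "Grace", "score": 0}

# A class is a blueprint: base properties set once in the constructor,
# methods defined once and shared by every instance.
class Player:
    def __init__(self, name, score=0):
        # like `this` in a JavaScript constructor, `self` refers to
        # the object being created
        self.name = name
        self.score = score

    def add_points(self, points):
        self.score += points

p = Player("Ada")
p.add_points(10)
print(p.name, p.score)  # Ada 10
```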

Google Machine Learning Course (30% through highlights)

ML – reduces time programming

  • Scales making sense of data
  • Makes projects customizable much more easily
  • Lets you solve programming problems that humans can’t do but algos do well
  • Uses stats and not logic to solve problems, which flips the programming paradigm a bit

Label is the thing we’re predicting, eg. Y in linear regression
Features are Xs or way we represent our data, an input variable
– eg. header, words in e-mail, to and from addresses, routing info, time of day
Example: particular instance of data, x, eg. an email

Labeled example has { features, label}: (x, y) – used to train model ( email, spam or not spam)
Unlabeled examples have {features, ?}: (x, ?) – used for making predictions on new data (email, ?)
Model: thing that does predicting. Model maps examples to predicted labels: y’ – defined by internal parameters, which are learned

Framing: What is Supervised Machine Learning? Systems learn to combine input to produce useful predictions on never-before-seen data
* Training means creating or learning the model.
* Inference means applying the trained model to unlabeled examples to make useful predictions (y’)
* Regression models predict continuous values: eg. value of a house, probability a user will click on an ad
* Classification model: predicts discrete values, eg. is the given e-mail message spam or not spam? Is this an image of a dog, a cat, or a hamster?

Descending into ML
y = wx + b

w refers to the weight vector and gives the slope

b gives bias

Loss: loss means how well the line is predicting an example, eg. distance from the line
* loss is on a 0 through positive scale
* Convenient way to define loss for linear regression
* L2 Loss, also known as squared error = the square of the difference between prediction and label: (observation – prediction)² = (y – y’)²
* We care about minimizing loss all across datasets
* A measure of how far a model’s predictions are from its labels – a measure of how bad the model is

Feature is an input variable of x value – something we know

Bias: b, an intercept or offset from an origin. Bias (also known as the bias term) is referred to as b or w0 in machine learning models.

Inference: process of making predictions by applying trained models to unlabeled examples. In statistics, inference refers to the process of fitting the parameters of a distribution conditioned on some observed data

Unlabeled example
An example that contains features but no label. Unlabeled examples are the input to inference. In semi-supervised and unsupervised learning, unlabeled examples are used during training.

Logistic regression
Model that generates probability for each possible discrete label value in classification problems by applying a sigmoid function to a linear prediction. Can be used for binary or multi-class classifications

Sigmoid function
function that maps logistic or multinomial regression output (log odds) to probabilities, returning a value between 0 and 1. The sigmoid function converts log odds into a probability

K-means: clustering algorithm from signal analysis

Random Forest
Ensemble approach to finding the decision tree that best fits the training data by creating many decision trees and then determining the average – the random part of the term refers to building each of the decision trees from a random selection of features; the forest refers to the set of decision trees

Weight
Coefficient for a feature in a linear model or edge in a deep network. Goal of training a linear model is to determine the ideal weight for each feature. If a weight is 0, then its corresponding feature does not contribute to the model

Mean squared error (MSE): average squared loss per example over a data set -> sum the squared losses for each individual example and divide by the number of examples

Although MSE is commonly-used in machine learning, it is neither the only practical loss function nor the best loss function for all circumstances.
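The MSE definition above in a couple of lines (the numbers are made up):

```python
def mse(predictions, labels):
    """Sum the squared losses for each example, divide by the number of examples."""
    return sum((y - y_hat) ** 2 for y_hat, y in zip(predictions, labels)) / len(labels)

# Only the third example is off, by 2:
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3 ≈ 1.33
```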

empirical risk minimization (ERM): Choosing the function that minimizes loss on the training set.

sigmoid function: A function that maps logistic or multinomial regression output (log odds) to probabilities, returning a value between 0 and 1. In other words, the sigmoid function converts the raw output of logistic regression into a probability between 0 and 1.

binary classification: classification task that outputs one of two mutually exclusive classes, eg. hot dog not a hot dog

Logistic Regression
* Prediction method that gives us probability estimates that are calibrated
* Sigmoid something that gives bounded value between 0 and 1
* useful for classification tasks
* regularization is important as the model will try to drive losses to 0 and weights may go crazy
* Linear Logistic Regression is fast, efficient to train, efficient for making predictions, scales to massive data, and is good for low-latency predictions
* A model that generates a probability for each possible discrete label value in classification problems by applying a sigmoid function to a linear prediction. Although logistic regression is often used in binary classification problems, it can also be used in multi-class classification problems (where it becomes called multi-class logistic regression or multinomial regression).

Many problems require a probability estimate as output. Logistic regression is an extremely efficient mechanism for calculating probabilities. Practically speaking, you can use the returned probability in either of the following two ways:
* “As is”
* Converted to a binary category.

Suppose we create a logistic regression model to predict the probability that a dog will bark during the middle of the night. We’ll call that probability:
* p(bark | night)
* If the logistic regression model predicts a p(bark | night) of 0.05, then over a year, the dog’s owners should be startled awake approximately 18 times:
* startled = p(bark | night) * nights
* 18 ~= 0.05 * 365

In many cases, you’ll map the logistic regression output into the solution to a binary classification problem, in which the goal is to correctly predict one of two possible labels (e.g., “spam” or “not spam”).
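Both uses can be sketched; the bark probability and the spam framing come from the notes above, while the z values and the `classify` helper are made up:

```python
import math

def sigmoid(z):
    """Maps a linear prediction (log odds) to a probability between 0 and 1."""
    return 1 / (1 + math.exp(-z))

# Using the probability "as is": expected startles over a year
p_bark_at_night = 0.05
print(round(p_bark_at_night * 365))  # 18

# ...or converting it to a binary category with a classification threshold
def classify(z, threshold=0.5):
    return "spam" if sigmoid(z) >= threshold else "not spam"

print(classify(2.0))   # sigmoid(2.0) ≈ 0.88 -> spam
print(classify(-2.0))  # sigmoid(-2.0) ≈ 0.12 -> not spam
```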

Early Stopping
* Regularization method that ends model training before training loss finishes decreasing. You stop when loss on a validation dataset starts to increase
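A toy sketch of that stopping rule (the validation-loss sequence is invented; a real loop would compute it from a held-out set after each training pass):

```python
def train_with_early_stopping(train_step, validation_loss, max_epochs=100, patience=1):
    """Stop as soon as validation loss has risen `patience` times in a row,
    even if training loss is still decreasing."""
    best = float("inf")
    bad_epochs = 0
    for epoch in range(max_epochs):
        train_step()                 # in real life: one pass of gradient descent
        loss = validation_loss()     # in real life: loss on the validation set
        if loss < best:
            best, bad_epochs = loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch + 1     # number of epochs actually run
    return max_epochs

# Invented validation losses that bottom out and then rise (overfitting)
losses = iter([1.0, 0.6, 0.4, 0.45, 0.5])
print(train_with_early_stopping(lambda: None, lambda: next(losses)))  # 4
```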

Key takeaways:
* Logistic regression models generate probabilities. In order to map a logistic regression value to a binary category, you must define a classification threshold (also called a decision threshold), eg. the value above which you categorize something as a hot dog rather than not a hot dog. (Note: Tuning a threshold for logistic regression is different from tuning hyperparameters such as learning rate)
* Log Loss is the loss function for logistic regression.
* Logistic regression is widely used by many practitioners.

Classification
* We can use logistic regression for classification by using fixed thresholds for probability outputs, eg, it’s spam if it exceeds .8.

You can evaluate classification performance by
* Accuracy: the fraction of predictions we got right, but it has key flaws, eg. when there are class imbalances, such as when positives or negatives are extremely rare (for example predicting CTRs). You could have a model with no features, just a bias that causes it to always predict false; it would be highly accurate but have no value

Better is to look at True Positives and False Positives
* True Positives: Correctly Called
* False Positives: Called but not true
* False Negatives: Not predicted and it happened
* True Negatives: Not called and did not happen
* A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class.
* A false positive is an outcome where the model incorrectly predicts the positive class. And a false negative is an outcome where the model incorrectly predicts the negative class.

Precision: True Positives/all positive predictions – how often the positive predictions were right
Recall: True Positives/All Actual Positives: out of all the possible positives, how many did the model correctly identify
* If you raise the classification threshold, you reduce false positives and raise precision
* We might not know in advance what the best classification threshold is – so we evaluate across many possible classification thresholds – this is the ROC curve

Prediction Bias
* Compares the average of everything we predict to the average of everything we observe
* ideally – average of predictions == average of observations
* Logistic regression predictions should be unbiased
* Bias is a canary, zero bias does not mean all is good but it’s a sanity check. Look for bias in slices of data to guide improvements and debug model

Watch out for class-imbalanced sets, where there is a significant disparity between the number of positive and negative labels. Eg. 91% accurate predictions but only 1 TP and 8 FN means 8 out of 9 malignant tumors end up undiagnosed.

Calibration Plots Showed Bucketed Bias
* Mean observation versus mean prediction

Precision = TP/(TP + FP): what fraction of positive predictions was correctly classified
Recall = TP/(TP + FN): how many of the actual positives were identified correctly
* To evaluate the effectiveness of models, you must examine both precision and recall which are often in tension because improving precision typically reduces recall and vice versa.
* When you increase the classification threshold, the number of false positives decreases but false negatives increase, so precision increases while recall decreases.
* When you decrease the classification threshold, false positives increase and false negatives decrease, so recall increases while precision decreases.
* eg. If you have a model with 1 TP and 1 FP, precision = 1/(1+1) = 50%: when it predicts a tumor is malignant, it is correct 50% of the time

Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over the total amount of relevant instance
* Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program’s precision is 5/8 while its recall is 5/12. When a search engine returns 30 pages only 20 of which were relevant while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. So, in this case, precision is “how useful the search results are”, and recall is “how complete the results are”.
* In an information retrieval scenario, the instances are documents and the task is to return a set of relevant documents given a search term; or equivalently, to assign each document to one of two categories, “relevant” and “not relevant”. In this case, the “relevant” documents are simply those that belong to the “relevant” category. Recall is defined as the number of relevant documents retrieved by a search divided by the total number of existing relevant documents, while precision is defined as the number of relevant documents retrieved by a search divided by the total number of documents retrieved by that search.
* In information retrieval, a perfect precision score of 1.0 means that every result retrieved by a search was relevant (but says nothing about whether all relevant documents were retrieved) whereas a perfect recall score of 1.0 means that all relevant documents were retrieved by the search (but says nothing about how many irrelevant documents were also retrieved).

ROC
* Receiver Operating Characteristics Curve
* Evaluate every possible classification threshold and look at true positive and false positive rates
* Area under that curve has probabilistic interpretation
* If we pick a random positive and random negative, what’s the probability my model ranks them in the correct order – that’s equal to area under ROC curve

Gives an aggregate measure of performance across all possible classification thresholds
FP Rate on the X-axis, TP Rate on the Y-axis
AUC = area under curve
* One way of interpreting AUC is as the probability that the model ranks a random positive example more highly than a random negative example. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0.
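That rank-probability interpretation can be computed directly from model scores, as a sketch (function name and example scores are mine):

```javascript
// Sketch: AUC via its probabilistic interpretation — the fraction of
// (positive, negative) score pairs the model ranks correctly.
function aucByPairs(positiveScores, negativeScores) {
  let correct = 0, total = 0;
  for (const p of positiveScores) {
    for (const n of negativeScores) {
      total++;
      if (p > n) correct++;        // positive ranked above negative
      else if (p === n) correct += 0.5; // ties count as half
    }
  }
  return correct / total;
}

console.log(aucByPairs([0.9, 0.7], [0.8, 0.2])); // 0.75: 3 of 4 pairs correct
console.log(aucByPairs([0.9, 0.8], [0.2, 0.1])); // 1.0: perfect ranking
```

Note the scale-invariance mentioned below: only the ordering of scores matters here, not their values.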

Characteristics of AUC to note:
* AUC is scale-invariant. It measures how well predictions are ranked, rather than their absolute values. Note: this is not always desirable – sometimes we really do need well-calibrated probability outputs, and AUC does not provide that.
* AUC is classification-threshold-invariant. It measures the quality of the model’s predictions irrespective of what classification threshold is chosen.
* Classification-threshold invariance is not always desirable. In cases where there are wide disparities in the cost of false negatives vs. false positives, it may be critical to minimize one type of classification error. For example, when doing email spam detection, you likely want to prioritize minimizing false positives (even if that results in a significant increase of false negatives). AUC isn’t a useful metric for this type of optimization.
Logistic regression predictions should be unbiased.
* That is: “average of predictions” should ≈ “average of observations”. Good models should have near-zero bias.
* Prediction bias is a quantity that measures how far apart those two averages are. That is:
* prediction bias = average of predictions – average of labels in data set
* Note: “Prediction bias” is a different quantity than bias (the b in wx + b)

A significant nonzero prediction bias tells you there is a bug somewhere in your model, as it indicates that the model is wrong about how frequently positive labels occur.
* For example, let’s say we know that on average, 1% of all emails are spam. If we don’t know anything at all about a given email, we should predict that it’s 1% likely to be spam. Similarly, a good spam model should predict on average that emails are 1% likely to be spam. (In other words, if we average the predicted likelihoods of each individual email being spam, the result should be 1%.) If instead, the model’s average prediction is 20% likelihood of being spam, we can conclude that it exhibits prediction bias.
* Possible causes: incomplete feature set, noisy data set, buggy pipeline, biased training sample, overly strong regularization
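The bias check itself is tiny; a minimal sketch with invented numbers (in the spam example above, both averages should land near 0.01):

```javascript
// Sketch of the prediction-bias sanity check:
// mean predicted probability minus the positive rate in the labels.
function predictionBias(predictions, labels) {
  const mean = xs => xs.reduce((a, b) => a + b, 0) / xs.length;
  return mean(predictions) - mean(labels.map(Number));
}

const labels = [false, false, false, false, true]; // 20% positive
const goodModel = [0.1, 0.2, 0.1, 0.2, 0.4];       // averages to 0.2
const biasedModel = [0.4, 0.5, 0.4, 0.5, 0.7];     // averages to 0.5

console.log(predictionBias(goodModel, labels));   // ≈ 0 — unbiased
console.log(predictionBias(biasedModel, labels)); // ≈ +0.3 — over-predicts positives
```

As the notes say, near-zero bias is a sanity check, not proof the model is good; run the same check on slices of the data to find where it breaks.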

You might be tempted to correct prediction bias by post-processing the learned model—that is, by adding a calibration layer that adjusts your model’s output to reduce the prediction bias. For example, if your model has +3% bias, you could add a calibration layer that lowers the mean prediction by 3%. However, adding a calibration layer is a bad idea for the following reasons:
* You’re fixing the symptom rather than the cause.
* You’ve built a more brittle system that you must now keep up to date.
* If possible, avoid calibration layers. Projects that use calibration layers tend to become reliant on them—using calibration layers to fix all their model’s sins. Ultimately, maintaining the calibration layers can become a nightmare.

March 2018 Learning

Less than normal last month due to business travel

Books Read (related to work/professional development/betterment):

Articles:

Agile Died While You Were Doing Your Standup

  1. Agile has been implemented poorly, sold wholesale to enterprises by consultancies in a way that mechanizes and dehumanizes teams and doesn’t respect the craft – causing them to deliver outputs instead of outcomes that drive value for customers
  2. The fix: product management, UX, engineering, dev-ops, and other core competencies need to be one team under one leader, given the autonomy and accountability to solve problems. Implemented correctly, this empowers teams to work toward shared outcomes with both velocity and accuracy.
  3. Embrace discovery – discovery data matched against shipped experiences creates real customer value and the trust that teams can work autonomously, with accountability, shipping something that meets both company and user objectives.

 

Avoiding the Unintended Consequences of Casual Feedback

  • Your seniority casts a shadow over the org; your casual feedback may be interpreted as a mandate – make sure it’s clear whether it’s an opinion, a strong suggestion, or a mandate
    1. Opinion: “one person’s opinion” – your title and authority should not enter into the equation
    2. Strong suggestion: falls short of telling the team what to do – the senior executive draws on experience but leaves the team feeling empowered to take risks. This is the most difficult balance to strike and requires taming of egos to do what’s best – you also have to trust the people you’ve empowered to have the final say.
    3. Mandate: issue one to avoid prohibitively costly mistakes – but issued too often without the right justification, it signals a demotivating lack of trust

 

Ask Women in Product: What are the Top 3 things you look for when hiring a PM?

  1. Influence without authority – figuring out what makes you tick, your team, your customers. Reading between the lines. How did you deal with past conflicts?
  2. Intellectual curiosity – how did you deal with an ambiguous or intimidating problem?
  3. Product sense – name a compelling product experience you built
  4. Empathy – unmet needs and pain points – how would you design an alarm clock for the blind?
  5. Product intuition – assess a product, feature, or user flow
  6. Listening and communication skills – read rooms for the implicit and explicit

 

Why Isn’t Agile Working?

  1. Waiting time isn’t addressed properly
  2. Doesn’t account well for unplanned work, multitasking, and impacts from shared services
  3. Even though dev goes faster in agile, it has no bearing on making the right product decisions and working to realize benefits. Agile is useful when it serves as a catalyst for continuous improvement and the rest of the org structure is in line – eg. DevOps, the right management culture, incremental funding v project-based funding, doing less and doing work that matters, looking at shared services, mapping value streams, etc.

 

Treehouse Learning:  

Changing the object literal in the dice-rolling application into a constructor function that takes the number of sides as an argument. Each instance created calls the method for rolling the dice.

function Dice(sides) {
    this.sides = sides;
    this.roll = function() {
        var randomNumber = Math.floor(Math.random() * this.sides) + 1;
        return randomNumber;
    };
}

var dice = new Dice(6); // new instance of a 6-sided die

 

Watch out for applications running code again and again unnecessarily, like in the code above, where every instance gets its own copy of roll. Every JavaScript function has a prototype property, which works like an object literal: when we assign a function to a property on it, that function becomes a method shared by all instances and no longer needs to live in the constructor. Prototypes act as templates for objects, meaning values and behavior can be shared between instances.

function Dice(sides) {
    this.sides = sides;
}

Dice.prototype.roll = function diceRoll() {
    var randomNumber = Math.floor(Math.random() * this.sides) + 1;
    return randomNumber;
}; // shared between all instances via the template/prototype
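To verify the sharing, here is a self-contained version of the pattern above (instance names are mine):

```javascript
// Self-contained check of the prototype sharing described above.
function Dice(sides) {
  this.sides = sides;
}

Dice.prototype.roll = function () {
  return Math.floor(Math.random() * this.sides) + 1;
};

var d6 = new Dice(6);
var d10 = new Dice(10);

console.log(d6.roll === d10.roll);      // true — one function shared via the prototype
console.log(d6.hasOwnProperty("roll")); // false — roll lives on Dice.prototype, not the instance
```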

 

 

 

Feb 2018 Learning

Books Read (related to work/professional development/betterment):

Creativity, Inc.

The Mythical Man Month

Articles:

pm@olin 10 Most Likely to Succeed and pm@olin 11 Capstone

  1. “A lot of being a PM is rolling with what doesn’t cost very much, and helps make the team happy. You don’t always get the most done by optimizing.”
  2. “For a PM, It’s figuring out how to find a little extra time for the easter egg. It’s doing the extra work to get a cool side project into the product. It’s helping someone else learn a new skill. It’s the thank you cards or the day off after shipping.”
  3. Sometimes something as simple as colored markers to annotate pros and cons helps whiteboarding

Manager Energy Drain

  1. You can color-code your calendar based on what mental energy you will need (eg. 1-on-1 brain, teaching brain, planning brain) to manage that piece and defrag accordingly
  2. The best gift you can give direct reports is a messy, unscoped project with a bit of a safety net to teach them -> give them guidance
  3. Say no to focus energy – don’t be afraid to go back and say no

The MVP is dead. Long live the RAT.

  1. RAT = Riskiest Assumption Test – after all, an MVP is not a product but a way of testing whether you’ve found a problem worth solving. The RAT emphasizes building only what’s required to test your largest unknown.
  2. All about rapid testing rather than creeping toward perfect code, polished design, and the danger of becoming a product
  3. It’s about maximizing discovery and removing the temptation to put resources into creating a more polished product

Scaling Agile At Spotify: An Interview with Henrik Kniberg

  1. “Autonomy is one of our guiding principles. We aim for independent squads that can each build and release products on their own without having to be tightly coordinated in a big agile framework. We try to avoid big projects altogether (when we can), and thereby minimize the need to coordinate work across many squads.”
  2. “By avoiding big projects, we also minimize the need to standardize our choice of tools.”
  3. The technical architecture is hugely important for the way we are organized. The organizational structure must play in harmony with the technical architecture. Many companies can’t use our way of working because their architecture won’t allow it.
    • We have invested a lot into getting an architecture that supports how we want to work (not the other way around); this has resulted in a tight ecosystem of components and apps, each running and evolving independently. The overall evolution of the ecosystem is guided by a powerful architectural vision.
    • We keep the product design cohesive by having senior product managers work tightly with squads, product owners, and designers. This coordination is tricky sometimes, and is one of our key challenges. Designers work directly with squads, but also spend at least 20% of their time working together with other designers to keep the overall product design consistent.”

Product Management Is Not Project Management

  1. Product management is not about making sure products ship on time – it’s about knowing the customer needs and defining the right product and evangelizing that internally
  2. Too often, Product Managers spend time writing specs, Gantt charts, and workflows instead of on customer problems, customer data, and articulating that to the company.
  3. Measuring Religiously means both analytics + talking to customers

When should you hire a Product Manager?

  1. Toxic things to a Product Management team: when it is too large and has overlaps in responsibility, it results in politics, land grabs for credit, and no clear owner on how to make decisions
  2. Don’t hire until there’s a pain point – eg can’t prioritize backlog, slow shipping bc of mismatched priorities and poor communication between teams, people don’t know why they’re building what they’re building
  3. “My least favorite way to slice a Product team is “I’ll do the high level strategy and they’ll do details” — it makes it hard for the detail-level person to make good calls. It also makes it harder for the high level person to connect with the rest of the team.”

Continuous Improvement + Quality Assurance

  1. Minimum viable feature set: releasing a feature is decoupled from deploying code. Large features deployed piecemeal over time.
  2. Debugging is twice as hard as writing code in the first place. Focus less on the mitigation of large, catastrophic failures – optimize for recovery rather than failure prevention. Failure is inevitable.
  3. Exploratory testing requires an understanding of the whole system and how it serves a community of users. Customer Experience is as much about technology as it is about product requirements

Building Your Personal Brand Where You Work

  1. Make your boss aware of what you’re doing – women are often doers who don’t make it a point to highlight their accomplishments or how busy they are at work. A great tool is the informal email report. A template can be: weekly wins, areas of improvement for my team, what’s coming next week, what you need from your boss.
  2. Build brand equity with coworkers, because you will need people to defend you. Being liked matters more sometimes. You want an ally at every level, your boss should respect you but it’s also important entry level employees respect you too.
  3. Keep track of your successes, remember your wins. Eg. tracking weekly, monthly, bi-annual, annual wins

Product Manager versus Product Owner

  1. “Product Owner is a role you play on a Scrum team. Product Manager is the job”
  2. A Product Owner spending half the time talking to customers and half working with the team is an ideal, but it should vary. External v internal work will shift depending on the maturity and success of the product.
  3. Product Managers in senior roles should concentrate on defining vision and strategy for teams based on market research, company goals, and the current state of products. Those without Scrum teams, or with smaller teams, can help validate or contribute to strategy for future products.

How to Run an Effective Meeting

  1. Set the agenda so there is a compass for conversation. Start on time and end on time.
  2. End with an action plan that has next steps.
  3. Be clear whether it’s a light bulb or a gun – you’re sharing an idea, or you want people to act on it. “Your job as a leader is to be right at the ending of the meeting, not the beginning of the meeting.” Let people speak so you’ve heard all the facts and opinions.

Managing Software Engineers *This is clearly an article from 2002, with all the problematic attitudes therein about not considering that people might have things like families

  1. Create work environment where best programmers will be satisfied enough to stay and where average programmers become good
  2. “One of the paradoxes of software engineering is that people with bad ideas and low productivity often think of themselves as supremely capable. They are the last people whom one can expect to fall in line with a good strategy developed by someone else. As for the good programmers who are in fact supremely capable, there is no reason to expect consensus to form among them.”
  3. Ideals to steal
    1. people don’t do what they are told
    2. all performers get the right consequences every day
    3. small, immediate, certain consequences are better than large future uncertain ones
    4. positive reinforcement is more effective than negative reinforcement
    5. ownership leads to high productivity

The What, Why, and How of Master Data Management

  1. Five kinds of data in corporations:
    1. “Unstructured—This is data found in e-mail, white papers like this, magazine articles, corporate intranet portals, product specifications, marketing collateral, and PDF files.
    2. Transactional—This is data related to sales, deliveries, invoices, trouble tickets, claims, and other monetary and non-monetary interactions.
    3. Metadata—This is data about other data and may reside in a formal repository or in various other forms such as XML documents, report definitions, column descriptions in a database, log files, connections, and configuration files.
    4. Hierarchical—Hierarchical data stores the relationships between other data. It may be stored as part of an accounting system or separately as descriptions of real-world relationships, such as company organizational structures or product lines. Hierarchical data is sometimes considered a super MDM domain, because it is critical to understanding and sometimes discovering the relationships between master data.
    5. Master—Master data are the critical nouns of a business and fall generally into four groupings: people, things, places, and concepts. Further categorizations within those groupings are called subject areas, domain areas, or entity types. For example, within people, there are customer, employee, and salesperson. Within things, there are product, part, store, and asset. Within concepts, there are things like contract, warrantee, and licenses. Finally, within places, there are office locations and geographic divisions. Some of these domain areas may be further divided. Customer may be further segmented, based on incentives and history. A company may have normal customers, as well as premiere and executive customers. Product may be further segmented by sector and industry. The requirements, life cycle, and CRUD cycle for a product in the Consumer Packaged Goods (CPG) sector is likely very different from those of the clothing industry. The granularity of domains is essentially determined by the magnitude of differences between the attributes of the entities within them.”
  2. Deciding what to manage and how it should be managed depends on criteria like: behavior (how it interacts with other data, eg. customers buy products, which may be part of multiple hierarchies describing how they’re sold), life cycle (created, read, updated, deleted, searched – a CRUD cycle), cardinality, lifetime, complexity, value, volatility, and reuse
  3. Master Data Management is the tech, tools, and processes required to create and maintain consistent and accurate lists of master data – including identifying sources of master data, analyzing metadata, appointing data stewards, running a data-governance program, developing the master data model, choosing a toolset and infrastructure, generating and testing master data, modifying producing and consuming systems, implementing maintenance processes, and creating the master list via steps similar to ETL:
    1. Normalize data formats
    2. Replace missing values
    3. Standardize values
    4. Map attributes
    5. Needs versioning and auditing
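A toy sketch of the first three cleansing steps (field names and mapping rules are invented for illustration, not from the article):

```javascript
// Toy sketch of the normalize / replace-missing / standardize steps above.
// Field names and mapping rules are invented for illustration.
function cleanseRecord(record) {
  const stateMap = { "new york": "NY", "n.y.": "NY", "ny": "NY" };
  return {
    name: record.name.trim().toUpperCase(),             // normalize format
    phone: record.phone || "UNKNOWN",                   // replace missing value
    state: stateMap[record.state.trim().toLowerCase()]  // standardize value
      || record.state,
  };
}

console.log(cleanseRecord({ name: " acme corp ", phone: "", state: "N.Y." }));
// { name: "ACME CORP", phone: "UNKNOWN", state: "NY" }
```

Real MDM tooling layers versioning and auditing on top of each such transformation so changes to the master list can be traced.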

Treehouse Learning:  

Object-Oriented-Javascript

  • An object is a container for values in the form of properties and functionality in the form of methods
    • Methods can return values, including objects, but they don’t have to return anything at all
  • Accessing or assigning properties is known as getting and setting
  • Native Objects: no matter where your JavaScript programs are run, they will have these objects, eg. number, string, object, boolean
  • Host Objects: provided by the host environment, eg. the browser, such as document, console, or element
  • Own Objects: created in own programming eg. characters in a game
  • Objects hide complexity and organize code – known as encapsulation
  • An object literal holds information about a particular thing at a given time – it stores the state of a thing.

Eg.

var person = {
    name: "Lauren",
    treehouseStudent: true,
    "full name": "Lauren Smith"
};

Access using dot notation or square brackets

person.name;
person.treehouseStudent;
person[“name”]
person[“treehouseStudent”]
person[“full name”]
  • Each key is actually a string – even without quotes, the JavaScript interpreter treats keys as strings
  • Encapsulating code into a single block allows us to keep state and behaviors for a particular thing in one place and code becomes more maintainable

Adding method to an object

var contact = {
  fullName: function printFullName() {
  var firstName = "Andrew";
  var lastName = "Chalkley";
  console.log(firstName + " " + lastName);
  }
}

Anonymous Function

var contact = {
  fullName: function() {
    var firstName = "Andrew";
    var lastName = "Chalkley";
    console.log(firstName + " " + lastName);
  }
}

Inside a method we may not know the name of the variable holding the object, so we use this to access its properties. Depending on where and how a function is called, this can refer to different things. Think of this as the owner of the function, eg. the object whose method is called.

Eg.

var dice = {
    sides: 6,
    roll: function() {
        var randomNumber = Math.floor(Math.random() * this.sides) + 1; // here `this` refers to the dice object literal
        console.log(randomNumber);
    }
};

var dice10 = {
    sides: 10,
    roll: function() {
        var randomNumber = Math.floor(Math.random() * this.sides) + 1; // here `this` refers to dice10
        console.log(randomNumber);
    }
};

Object literals are great for one-off objects; if you want to make multiple objects of one type, you need constructor functions:

  • Constructor functions describe how an object should be created
  • Create similar objects
  • Each object created is known as an instance of that object type

Constructor function example and new contact instances (an instance is the specific realization of a particular type or object)

function Contact(name, email) {
    this.name = name;
    this.email = email;
}

var contact = new Contact("Andrew", "andrew@andrew.com");
var contact2 = new Contact("Bob", "bb@andrew.com");

You can create as many objects of the same type as you like, eg. a real-world example of:

Media Player

  • Playlist object (initialized by constructor function)
  • Song objects
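A hedged sketch of that media-player example with two constructor functions (all names are my own; the actual course code may differ):

```javascript
// Illustrative sketch of the media-player example: a Playlist
// constructor holding Song instances. Names are my own.
function Song(title, artist) {
  this.title = title;
  this.artist = artist;
}

function Playlist() {
  this.songs = [];
}

// Shared behavior lives on the prototype, as with Dice above.
Playlist.prototype.add = function (song) {
  this.songs.push(song);
};

var playlist = new Playlist();
playlist.add(new Song("Hey Jude", "The Beatles"));
playlist.add(new Song("Respect", "Aretha Franklin"));

console.log(playlist.songs.length); // 2
console.log(playlist.songs[0] instanceof Song); // true
```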

PM Hack Panel Notes

Two weeks ago, I got to go to PM Hack for a hot second, a hackathon for PMs and aspiring PMs put together by Jason Shen and Johanna Beyenbach and hosted by Wayup. I’m really bummed I actually only got to stay for maybe half the day because my actual PM job called me in on a Sunday, but it was definitely unique and one of the cooler initiatives I’ve seen to get people’s hands dirty on Product Management work. In a previous life, I’ve gone to hackathons as a developer, and there is something really inspiring, educational, and rewarding about working with a group of strangers to create something workable in a matter of hours or days.

One thing I did get to stay for and enjoy was a panel by some esteemed folks in the business, so to speak – so I thought I’d put down my notes here to keep them top of mind:


Some awesome Product Managers: Elan Miller (Midnight), Inga Chen (Squarespace), Lauren Ulmer (Dormify), and Joan Huang (Flatiron Health)

  • Emotional intelligence > IQ in PM roles
  • You need to understand yourself and your vision first
  • Constant tension at work between tending to firedrills v longer-range thinking -> one key to working on this is doing internal marketing for buy-in on longer-term strategy
  • Good PMs are always over-communicating and listening well
  • Status update at right level of context – know how to communicate to junior level devs to executives
  • Saying no is a part of your job
  • Your job is to also bring the team and org together
  • Be cognizant of what step of the product life cycle you are working in, and think about what is actually possible to change
  • Team Cultures (build it out) + Users (joy)
  • Managing different dependencies across teams is key
  • Your job is to also define and interpret metrics correctly
  • The bigger the org the more stakeholder communication versus direct time to users
  • Be careful not to over optimize for the negative vocal batch of users versus the majority of users
  • As with everything, it’s right place, right time, with the right skill set – so you’ve got to angle to make it happen
  • GV Design Sprint can be a useful problem solving process
  • When you’re interviewing for a PM job, communicate that you know the company’s business:
    • Mini deck to intro yourself, how you can solve the company’s problem, and show you’ve done your homework and are more than your resume
    • Understand the levers of the business model (how does the business make money)
    • Apply to fewer jobs and make sure you’re interested in problems the product is trying to solve
    • Find side projects outside of your typical product development life cycle
    • Treat yourself as a product
    • Having a POV and being polarizing can be an advantage
    • Remember you can help them with the particular problem they’re trying to solve even if you aren’t from that vertical – you could be bringing a fresh perspective to their problems