What’s This?
I’m trying to give myself at least half an hour during workdays (or at least a blocked two hours or so a week) to learn something new – namely taking classes and reviewing what I know on Treehouse, reading job-related articles, and reading career-related books. I’m tracking notables here on a monthly basis as a self-commitment, to retain things in memory, and as a reference. I held off posting this for the last six months – work and life have been insanely busy and my notes inconsistent across proprietary work versus my own – but it’s worth a round-up here. Posting with good intentions for next year. Reminding myself that if I don’t capture every bit I did, it’s alright. Just keep yourself accountable.
Books Read:
So Good They Can’t Ignore You
Key Points:
- It’s not about passion – it’s about gaining career capital so you have more agency over a career you want.
- Control traps: 1) you don’t have enough career capital to do what you want; 2) employers don’t want you to change/advance/slow down because you have skills valuable to them
- Good jobs have autonomy, financial viability, and mission – you can’t get there on passion alone.
- Figure out if the market you wish to succeed in is winner-take-all (one killer skill, eg. screenwriting is all about just getting a script read) or auction-based (a diverse collection of skills, eg. running a complex business).
- Make many little bets and try different things that give instant feedback to see what is working or not and show what you’re doing in a venue that will get you noticed.
- On Learning
- Research-bible routine – summarize what you might work on: a description of the result and the strategies used to get there
- Hour tally and strain – just work on it for an hour and keep track of it
- Theory notebook – a brainstorming notebook where you deliberately keep track of information
- Carve out time for research and independent projects
- “Working right trumps finding the right work” p228
- Good visual summary
The Manager’s Path
Key Points:
- “Your manager should be the person who shows you the larger picture of how your work fits into the team’s goals, and helps you feel a sense of purpose in the day-to-day work”
- “Developing a sense of ownership and authority for your work and not relying on your manager to set the tone”
- “Especially as you become more senior, remember that your manager expects you to bring solutions, not problems”
- “Strong engineering managers can identify the shortest path through the systems to implement new features”
- Dedicate 20% of time in planning meetings to sustainability work
- “Be careful that locally negative people don’t stay in that mindset on your team for long. The kind of toxic drama that is created by these energy vampires is hard for even the best manager to combat. The best defense is a good offense in this case”
- You are not their parent – treat them as adults and don’t get emotionally invested in every disagreement they have with you personally.
Articles:
What is a predicate pushdown? In MapReduce
- The concept: if you issue a query to run in one place, you spawn a lot of network traffic, making that query slow and costly. However, if you push down parts of the query to where the data is stored, and thus filter out most of the data there, you reduce network traffic.
- You filter with conditions that evaluate to True or False – predicates – and push the query down to where the data resides
- For example, you don’t need to read every single column for every MapReduce job in the pipeline for no reason, so you filter such that the other columns are never read
What is a predicate pushdown?
- The basic idea is to push certain parts of SQL queries (the predicates) to where the data lives to optimize the query by filtering out data earlier rather than later so it skips reading entire files or chunks of files to reduce network traffic/processing time
- This is usually done with a function that returns a boolean in the where cause to filter out data
- Eg. the example below with the where clause “WHERE a.country = ‘Argentina’”
SELECT
    a.*
FROM
    table1 a
JOIN
    table2 b ON a.id = b.id
WHERE
    a.country = 'Argentina';
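The effect is easy to sketch outside SQL too. Below is a toy Python illustration (table contents and names are invented): applying the predicate where the data lives means fewer rows cross the “network”, but the answer is the same.

```python
# Toy illustration of predicate pushdown: filter before "shipping" rows
# across the network instead of after. Table contents are invented.
table1 = [
    {"id": 1, "country": "Argentina"},
    {"id": 2, "country": "Brazil"},
    {"id": 3, "country": "Argentina"},
]

# Without pushdown: ship every row, then filter at the query engine.
shipped_all = list(table1)                       # 3 rows over the wire
result_late = [r for r in shipped_all if r["country"] == "Argentina"]

# With pushdown: apply the predicate where the data is stored, ship less.
predicate = lambda r: r["country"] == "Argentina"
shipped_filtered = [r for r in table1 if predicate(r)]  # 2 rows over the wire

assert result_late == shipped_filtered           # same answer, less traffic
print(len(shipped_all), len(shipped_filtered))
```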
The Leader’s Calendar
- 6 hours a day of non-work time, half with family and some downtime with hobbies
- Setting norms and expectations around e-mail is essential. For example, a CEO sending e-mails late at night sets the wrong example for the company, and a CEO’s time is easily spent cc’d on endless irrelevant items.
- Be agenda-driven to optimize limited time and avoid letting only the loudest voices stand out, so that the important work – work on strategy – gets done, not just the work that appears most urgent.
- A key way to do this is to limit routine activities, which can be delegated to a direct report
What People Don’t Tell You About Product Management
- “Product Management is a great job if you like being involved in all aspects of a product — but it’s not a great job if you want to feel like a CEO.”
- You don’t necessarily make the strategy, have resources, or have the ability to fire people. Your job is to get it done by being resourceful and convincing.
- Product Managers should channel the needs of the customer and follow a product from conception through dev, launch, and beyond. Be a cross-functional leader coordinating between R&D, Sales, Marketing, Manufacturing, and Operations. Leadership and coordination are key. Your job is to make strategy happen by convincing the people you work with.
- “For me, product management is exciting and stressful for the same reason: there’s unpredictability, there’s opportunity to create something new (which also means it may be unproven), and you’re usually operating with less data than you’d like, and everything is always a little bit broken.”
Web Architecture 101
- In web dev you almost always want to scale horizontally, meaning you add more machines to your pool of resources, versus vertically, meaning you add more power (eg. CPU, RAM) to an existing machine. The redundancy of horizontal scaling gives you a fallback, so your applications keep running if a server goes down and your app is more fault tolerant. You can also minimally couple different parts of the app backend to run on different servers.
- Job queues store lists of jobs that need to be run asynchronously – eg Google does not search the entire internet every time you do a search, it crawls the web asynchronously and updates search indexes along the way
- Typical data pipeline: a firehose that provides a streaming interface to ingest and process data (eg. Kinesis and Kafka) -> raw data as well as final transformed/augmented data saved to cloud storage (eg. S3) -> data loaded into a data warehouse for analysis (eg. Redshift)
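The job-queue idea above can be sketched in a few lines of Python – producers enqueue work and a background worker drains it asynchronously (the job contents here are invented):

```python
# Minimal job-queue sketch: enqueue work now, run it in the background.
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: no more work
            break
        results.append(job())    # run the job, record its result
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The producer doesn't wait for the work to finish.
jobs.put(lambda: "crawl page A")
jobs.put(lambda: "update index")
jobs.put(None)
t.join()
print(results)  # ['crawl page A', 'update index']
```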
Running in Circles – Why Agile Isn’t Working and What We Do Differently
- “People in our industry think they stopped doing waterfall and switched to agile. In reality they just switched to high-frequency waterfall.”
- “Only management can protect attention. Telling the team to focus only works if the business is backing them up.”
- Think of software development as going uphill when you’re finding out the complexity/uncertainty and then downhill when you have certainty.
Product Managers – You Are Not the CEO of Anything
- Too many product managers think their role is that of an authoritarian CEO (with no power), which is often disastrous because they think they have all the answers.
- You gain credibility through your actions and leadership skills.
- “Product management is a team sport after all, and the best teams don’t have bosses – they have coaches who ensure all the skills and experiences needed are present on the team, that everyone is in the right place, knows where the goal is, and then gets out of the way and lets the team do what they do best in order to reach that goal.”
Product Prioritization: How Do You Decide What Belongs in Your Product?
- Radical vision with this mad-libs template: Today, when [customer segment] want to [desirable activity/outcome], they have to [current solution]. This is unacceptable, because [shortcomings of current solutions]. We envision a world where [shortcomings resolved]. We are bringing this world about through [basic technology/approach].
- Four components to good product strategy
- Real Pain Points means “Who is it for?” and “What is their pain point?”
- Design refers to “What key features would you design into the product?” and “How would you describe your brand and voice?”
- Capabilities tackles the “Why should we be the ones doing it?” and “What is our unique capability?”
- Logistics is the economics and channels, like “What’s our pricing strategy?” and “What’s the medium through which we deliver this?”
- Then prioritize based on sustainability and fit
To Drive Business Success Implement a Data Catalog and Data Inventory
- Companies have a huge gap between simply knowing where their data is located and knowing what to do with it
- Three types of metadata
- Business Metadata: Give us the meaning of data you have in a particular set
- Technical Metadata: Provide information on the format and structure of data – databases, programming envs, data modeling tools natively available
- Operational Metadata: Audit trail of information of where the data came from, who created it, etc.
- “Unfortunately, according to Reeve, new open source technologies, most importantly Hadoop, Hive, and other open source technologies do not have inherent capabilities to handle, Business, Technical AND Operational Metadata requirements. Firms cannot afford this lack as they confront a variety of technologies for Big Data storage, noted Reeve. It makes it difficult for Data Managers to know where the data lives.” http://www.dataversity.net/drive-business-success-implement-data-catalog-data-inventory/
Why You Can’t be Data Driven Without a Data Catalog
- A lot of data availability in organizations is “tribal knowledge”, which severely limits the impact data has in an organization. Data catalogs should capture tribal knowledge
- Data catalogs need to work to have common definitions of important concepts like customer, product, and revenue, especially since different divisions actually will think of those concepts differently.
- A solution one company used was a Looker-powered integrated model with a GitBook data dictionary.
What is a data catalog?
- At its core, a data catalog centralizes metadata. “The difference between a data catalog and a data inventory is that a data catalog curates the metadata based on usage.”
- Data catalog users fall into three buckets
- Data Consumers – data and business analysts
- Data Creators – data architects and database engineers
- Data Curators – data stewards and data governors
- A good data catalog must
- Centralize all info on data in one place – structure, quality, definitions, and usage
- Allow users to self-service
- Auto-populate consistently and accurately
Why You Need a Data Catalogue and How to Select One
- “A good data catalog serves as a searchable business glossary of data sources and common data definitions gathered from automated data discovery, classification, and cross-data source entity mapping. Automated data catalog population is done via analyzing data values and using complex algorithms to automatically tag data, or by scanning jobs or APIs that collect metadata from tables, views, and stored procedures.”
- Should foster search and reuse of existing data in BI tools
- Should almost be an open platform that many people can use to explore what they want to do with the data
10 Tips to Build a Successful Data Catalog
- Who – understand the owner or trusted steward for an asset
- What – aim for at minimum a basic description of an asset: business terminology, report functionality, and the basic purpose of a dataset
- Where – where the underlying assets are
The Data Catalog – A Critical Component to Big Data Success
- Most data lakes do not have effective metadata management capabilities, which makes using them inefficient
- Need data access security solutions (role and asset), audit trails of update and access, and inventory of assets (technical and business metadata)
- First step is to inventory existing data and make it usable at a data store level – table, file, database, schema, server, or directory
- Figure out how to ingest new data in a structured manner, eg. a data scientist wants to incorporate new data into modeling
Data Catalog for the Enterprise – What, Why, & What to look for?
- With the growth of enormous data lakes, data sets need to be discovered, tagged, and annotated
- Data catalogs can also eliminate database duplication
- Challenges of implementing data catalogs include educating the org on the value of a single source of data and dealing with tribalism
Bridging the gap: How and why product management differs from company to company
- NYC vs SF product management disciplines are different due to key ecosystem factors: NYC is driven by tech enhancing existing industries and is thus sales-driven, while the Bay Area creates entirely new categories and is vision- and collaboration-driven. NYC has more stable exits but fewer huge ones
- This dichotomy in product management approaches comes down to how value is brought to different markets
- Successful product managers need six key ingredients
- Strategic Thinking
- Technical proficiency
- Collaboration
- Communication
- Detail Orientation
- User Science
Treehouse Learning
Rest API Basics
- REST (Representational State Transfer) is really just another layer of meaning on top of HTTP
- An API provides a programmatic interface, a code UI basically, to that same logic and data model. Communication is done through HTTP, and the burden of creating an interface is on the users of the API, not the creator
- An easy way to say it – APIs provide code that makes things outside of our application easier to interact with from inside our application
- Resources: usually a model in your application -> these are retrieved, created, or modified via API endpoints representing collections of records
- Client request types to API:
- GET is used for fetching either a collection of resources or a single resource
- POST is used to add a new resource to a collection, eg. POST to /games to create a new game
- PUT is the HTTP method we use when we want to update a record
- DELETE is used for deleting a single record: send a DELETE request to a detail URL, the URL for that single record
- Requests
- We can use different aspects of the requests to change the format of our response, the version of the API, and more.
- Headers make transactions more clear and explicit, eg. Accept (specifies file format requester wants), Accept-Language, Cache-Control
- Responses
- Content-Type: text/javascript -> header to specify what content we’re sending
- Other response data includes Last-Modified, Expires, and the status code (eg. 200, 500, 404)
- 200-299 content is good and everything is ok
- 300-399 request was understood but the requested resource is now located somewhere else. Use these status codes to perform redirects to URLs most of the time
- 400-499 Client error codes, eg. a wrongly constructed request, or 404 when a resource no longer exists
- 500-599 Server-side errors
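The status-code ranges above can be captured in a tiny helper – a sketch, not tied to any particular framework:

```python
# Map an HTTP status code to the ranges described above.
def status_class(code):
    if 200 <= code <= 299:
        return "success"
    if 300 <= code <= 399:
        return "redirect"
    if 400 <= code <= 499:
        return "client error"
    if 500 <= code <= 599:
        return "server error"
    return "unknown"

print(status_class(200), status_class(301), status_class(404), status_class(500))
# success redirect client error server error
```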
- Security
- Cache: usually a service running in memory that holds recently needed results such as a newly created record or a large data set. This helps prevent direct database calls or costly calculations on your data.
- Rate Limiting: allowing each user only a certain number of requests to our API in a given period to prevent too many requests or DDOS attacks
- A common authentication method is the use of API tokens – you give your users a token and secret key as a pair, and they use those when they make requests to your server so you know they are who they say they are.
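One way the token + secret-key pair is often used is HMAC request signing. This is a hedged sketch – the header names and signing scheme below are invented for illustration, not any specific API’s protocol:

```python
# Sketch of token + secret-key request signing. The client signs the
# request body with its secret; the server looks up the secret by token
# and checks the signature. Header names here are invented.
import hmac
import hashlib

def sign_request(token, secret, body):
    sig = hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()
    return {"X-Api-Token": token, "X-Signature": sig}

def verify_request(headers, body, lookup_secret):
    secret = lookup_secret(headers["X-Api-Token"])
    expected = hmac.new(secret.encode(), body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers["X-Signature"])

secrets = {"user-123": "s3cret"}                     # server-side store
headers = sign_request("user-123", "s3cret", '{"action": "create"}')
assert verify_request(headers, '{"action": "create"}', secrets.get)
assert not verify_request(headers, '{"action": "delete"}', secrets.get)
```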
Planning and Managing the Database
- Data Definition Language – the language used to define the structure of a database
When Object Literals Are Not Enough
- Use classes instead of object literals to not repeat so much code over and over again
- Class is a specification, a blueprint for an object that you provide with a base set of properties and methods
- In a constructor method, this refers to the object that is being created, which is why it’s the keyword here.
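The same point in Python for illustration (the course itself uses JavaScript): repeated object literals restate the same keys, while a class is the blueprint, and in the constructor `self` plays the role JS’s `this` does.

```python
# Repetitive: every "literal" restates the same keys (names invented).
monster_a = {"name": "Slime", "hp": 10, "attack": 2}
monster_b = {"name": "Goblin", "hp": 15, "attack": 4}

# Blueprint: a class specifies the base properties and methods once.
class Monster:
    def __init__(self, name, hp, attack):
        # In the constructor, `self` (like JS's `this`) is the object
        # being created.
        self.name = name
        self.hp = hp
        self.attack = attack

    def hit(self, damage):
        self.hp -= damage
        return self.hp

m = Monster("Slime", 10, 2)
print(m.hit(3))  # 7
```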
Google Machine Learning Course (30% through highlights)
ML – reduces time programming
- Scales making sense of data
- Makes projects customizable much more easily
- Lets you solve programming problems that humans can’t do but algos do well
- Uses stats and not logic to solve problems, flipping the programming paradigm a bit
Label is the thing we’re predicting, eg. Y in linear regression
Features are Xs or way we represent our data, an input variable
– eg. header, words in e-mail, to and from addresses, routing info, time of day
Example: particular instance of data, x, eg. an email
Labeled example has { features, label}: (x, y) – used to train model ( email, spam or not spam)
Unlabeled examples has {features, ?}: (x, ?) – used for making predictions on new data (email, ?)
Model: thing that does predicting. Model maps examples to predicted labels: y’ – defined by internal parameters, which are learned
Framing: What is Supervised Machine Learning? Systems learn to combine input to produce useful predictions on never-before-seen data
* Training means creating or learning the model.
* Inference means applying the trained model to unlabeled examples to make useful predictions (y’)
* Regression models predict continuous values: eg. value of a house, probability a user will click on an ad
* Classification models predict discrete values, eg. is the given e-mail message spam or not spam? Is this an image of a dog, cat, or hamster?
Descending into ML
y = wx + b
w refers to the weight vector, gives slope
b gives bias
Loss: loss means how well line is predicting example, eg. distance from line
* loss is on a 0 through positive scale
* Convenient way to define loss for linear regression
* L2 Loss, also known as squared error = square of difference between prediction and label: (observation – prediction)² = (y – y′)²
* We care about minimizing loss all across datasets
* Measure of how far a model’s predictions are from its label – a measure of how bad the model is
Feature is an input variable of x value – something we know
Bias: b – an intercept or offset from an origin. Bias (also known as the bias term) is referred to as b or w0 in machine learning models.
Inference: process of making predictions by applying trained models to unlabeled examples. In statistics, inference refers to the process of fitting the parameters of a distribution conditioned on some observed data
Unlabeled example
An example that contains features but no label. Unlabeled examples are the input to inference. In semi-supervised and unsupervised learning, unlabeled examples are used during training.
Logistic regression
Model that generates probability for each possible discrete label value in classification problems by applying a sigmoid function to a linear prediction. Can be used for binary or multi-class classifications
Sigmoid function
Function that maps logistic or multinomial regression output (log odds) to probabilities, returning a value between 0 and 1.
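A minimal sketch of the sigmoid in Python:

```python
# The sigmoid squashes any real-valued input (log odds) into (0, 1).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))  # 0.5: log odds of 0 means a 50/50 probability
print(sigmoid(4) > 0.98, sigmoid(-4) < 0.02)  # large |z| saturates toward 1 or 0
```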
K-means: clustering algorithm from signal analysis
Random Forest
Ensemble approach to finding the decision tree that best fits the training data by creating many decision trees and then averaging them – the “random” part of the term refers to building each of the decision trees from a random selection of features; the “forest” refers to the set of decision trees
Weight
Coefficient for a feature in a linear model or edge in a deep network. Goal of training a linear model is to determine the ideal weight for each feature. If a weight is 0, then its corresponding feature does not contribute to the model
Mean squared error (MSE): average squared loss per example across the data set -> sum the squared losses for each individual example and divide by the # of examples
Although MSE is commonly-used in machine learning, it is neither the only practical loss function nor the best loss function for all circumstances.
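MSE per that definition, as a quick sketch (the numbers are made up):

```python
# MSE: sum the squared errors, divide by the number of examples.
def mse(labels, predictions):
    return sum((y - yp) ** 2 for y, yp in zip(labels, predictions)) / len(labels)

labels      = [1.0, 2.0, 3.0]
predictions = [1.0, 2.5, 2.0]
print(mse(labels, predictions))  # (0 + 0.25 + 1.0) / 3 = 0.4166...
```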
empirical risk minimization (ERM): Choosing the function that minimizes loss on the training set.
sigmoid function: A function that maps logistic or multinomial regression output (log odds) to probabilities, returning a value between 0 and 1. In other words, the sigmoid function converts the raw output of logistic regression into a probability between 0 and 1.
binary classification: classification task that outputs one of two mutually exclusive classes, eg. hot dog not a hot dog
Logistic Regression
* Prediction method that gives us probability estimates that are calibrated
* Sigmoid: a function that gives a bounded value between 0 and 1
* useful for classification tasks
* regularization is important as the model will try to drive losses to 0 and weights may go crazy
* Linear logistic regression is fast, efficient to train, and efficient for making predictions; it scales to massive data and is good for low-latency predictions
* A model that generates a probability for each possible discrete label value in classification problems by applying a sigmoid function to a linear prediction. Although logistic regression is often used in binary classification problems, it can also be used in multi-class classification problems (where it becomes called multi-class logistic regression or multinomial regression).
Many problems require a probability estimate as output. Logistic regression is an extremely efficient mechanism for calculating probabilities. Practically speaking, you can use the returned probability in either of the following two ways:
* “As is”
* Converted to a binary category.
Suppose we create a logistic regression model to predict the probability that a dog will bark during the middle of the night. We’ll call that probability:
* p(bark | night)
* If the logistic regression model predicts a p(bark | night) of 0.05, then over a year, the dog’s owners should be startled awake approximately 18 times:
* startled = p(bark | night) * nights
* 18 ~= 0.05 * 365
In many cases, you’ll map the logistic regression output into the solution to a binary classification problem, in which the goal is to correctly predict one of two possible labels (e.g., “spam” or “not spam”).
Early Stopping
* Regularization method that ends model training before training loss finishes decreasing. You end when loss on a validation dataset starts to increase
Key takeaways:
* Logistic regression models generate probabilities. In order to map a logistic regression value to a binary category, you must define a classification threshold (also called decision threshold), eg. the value above which you categorize something as hot dog versus not hot dog. (Note: Tuning a threshold for logistic regression is different from tuning hyperparameters such as learning rate)
* Log Loss is the loss function for logistic regression.
* Logistic regression is widely used by many practitioners.
Classification
* We can use logistic regression for classification by using fixed thresholds for probability outputs, eg, it’s spam if it exceeds .8.
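That thresholding step is a one-liner – a sketch using the 0.8 spam cutoff from above (the probabilities are invented):

```python
# Turn logistic-regression probabilities into class labels with a
# fixed classification threshold (0.8, matching the spam example).
def classify(probabilities, threshold=0.8):
    return ["spam" if p >= threshold else "not spam" for p in probabilities]

print(classify([0.95, 0.40, 0.81]))  # ['spam', 'not spam', 'spam']
```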
You can evaluate classification performance by
* Accuracy: fraction of predictions we got right, but it has key flaws, eg. when there are class imbalances, such as positives or negatives being extremely rare (for example predicting CTRs). A model with no features, just a bias term that causes it to predict false always, would be highly accurate but have no value
Better is to look at True Positives and False Positives
* True Positives: Correctly Called
* False Positives: Called but not true
* False Negatives: Not predicted and it happened
* True Negatives: Not called and did not happen
* A true positive is an outcome where the model correctly predicts the positive class. Similarly, a true negative is an outcome where the model correctly predicts the negative class.
* A false positive is an outcome where the model incorrectly predicts the positive class. And a false negative is an outcome where the model incorrectly predicts the negative class.
Precision: True positive/all positive predictions, how precisely was positive class right
Recall: True Positives/All Actual Positives: out of all the possible positives, how many did the model correctly identify
* If you raise classification threshold, reduces false positives and raises precision
* We might not know in advance what best classification threshold is – so we evaluate across many possible classification thresholds – this is ROC curve
Prediction Bias
* Compare the average of everything we predict to the average of everything we observe
* ideally – average of predictions == average of observations
* Logistic predictions should be unbiased
* Bias is a canary, zero bias does not mean all is good but it’s a sanity check. Look for bias in slices of data to guide improvements and debug model
Watch out for class-imbalanced sets, where there is a significant disparity between the number of positive and negative labels. Eg. 91% accurate predictions but only 1 TP and 8 FN, eg. 8 out of 9 malignant tumors end up undiagnosed.
Calibration Plots Showed Bucketed Bias
* Mean observation versus mean prediction
Precision = TP/(TP + FP): of all positive predictions, how many were correct – how precisely the positive class was called
Recall = TP/(TP + FN) = how many actual positives were identified correctly, attempts to answer the question, how many of the actual positives was identified correctly?
* To evaluate the effectiveness of models, you must examine both precision and recall which are often in tension because improving precision typically reduces recall and vice versa.
* When you increase the classification threshold, then number of false positives decrease, but false negatives increase, so precision increases while recall decreases.
* When you decrease the classification threshold, false positives increase and false negatives decrease, so recall increases while precision decreases.
* eg. If you have a model with 1 TP and 1 FP = 1/(1+1) = precision is 50% and when it predicts a tumor is malignant, it is correct 50% of the time
Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over the total amount of relevant instances
* Suppose a computer program for recognizing dogs in photographs identifies 8 dogs in a picture containing 12 dogs and some cats. Of the 8 identified as dogs, 5 actually are dogs (true positives), while the rest are cats (false positives). The program’s precision is 5/8 while its recall is 5/12. When a search engine returns 30 pages only 20 of which were relevant while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. So, in this case, precision is “how useful the search results are”, and recall is “how complete the results are”.
* In an information retrieval scenario, the instances are documents and the task is to return a set of relevant documents given a search term; or equivalently, to assign each document to one of two categories, “relevant” and “not relevant”. In this case, the “relevant” documents are simply those that belong to the “relevant” category. Recall is defined as the number of relevant documents retrieved by a search divided by the total number of existing relevant documents, while precision is defined as the number of relevant documents retrieved by a search divided by the total number of documents retrieved by that search.
* In information retrieval, a perfect precision score of 1.0 means that every result retrieved by a search was relevant (but says nothing about whether all relevant documents were retrieved) whereas a perfect recall score of 1.0 means that all relevant documents were retrieved by the search (but says nothing about how many irrelevant documents were also retrieved).
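The dog-photo numbers above check out in code – a sketch computing both metrics from raw counts:

```python
# Precision and recall from raw counts, checked against the dog-photo
# example: 8 identified as dogs, 5 correct (TP) -> 3 FP; 12 dogs total
# -> 7 missed (FN).
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

print(precision(tp=5, fp=3))  # 0.625  (5/8)
print(recall(tp=5, fn=7))     # 0.4166...  (5/12)
```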
ROC
* Receiver Operating Characteristics Curve
* Evaluate every possible classification threshold and look at true positive and false positive rates
* Area under that curve has probabilistic interpretation
* If we pick a random positive and random negative, what’s the probability my model ranks them in the correct order – that’s equal to area under ROC curve
Gives aggregate measure of performance aggregated across all possible classification thresholds
FP Rate on the X-axis, TP Rate on the Y-axis
AUC = area under curve
* Probability that the model ranks a random positive example more highly than a random negative example:
* One way of interpreting AUC is as the probability that the model ranks a random positive example more highly than a random negative example. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0.
Characteristics of AUC to note:
* AUC is scale-invariant. It measures how well predictions are ranked, rather than their absolute values. Note: this is not always desirable: sometimes we really do need well-calibrated probability outputs, and AUC does not provide that
* AUC is classification-threshold-invariant. It measures the quality of the model’s predictions irrespective of what classification threshold is chosen.
* Classification-threshold invariance is not always desirable. In cases where there are wide disparities in the cost of false negatives vs. false positives, it may be critical to minimize one type of classification error. For example, when doing email spam detection, you likely want to prioritize minimizing false positives (even if that results in a significant increase of false negatives). AUC isn’t a useful metric for this type of optimization.
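The ranking interpretation gives a direct, if O(n²), way to compute AUC – a sketch on an invented four-example dataset:

```python
# AUC via its ranking interpretation: the fraction of (positive, negative)
# pairs the model scores in the correct order (ties count half).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    correct = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                correct += 1.0
            elif p == n:
                correct += 0.5
    return correct / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.1]
labels = [1,   0,   1,   0]
# Pairs: (0.9,0.8) ok, (0.9,0.1) ok, (0.3,0.8) wrong, (0.3,0.1) ok -> 3/4
print(auc(scores, labels))  # 0.75
```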
Logistic regression predictions should be unbiased.
* That is: “average of predictions” should ≈ “average of observations”. Good models should have near-zero bias.
* Prediction bias is a quantity that measures how far apart those two averages are. That is:
* prediction bias = average of predictions – average of labels in the data set
* Note: “Prediction bias” is a different quantity than bias (the b in wx + b)
A significant nonzero prediction bias tells you there is a bug somewhere in your model, as it indicates that the model is wrong about how frequently positive labels occur.
* For example, let’s say we know that on average, 1% of all emails are spam. If we don’t know anything at all about a given email, we should predict that it’s 1% likely to be spam. Similarly, a good spam model should predict on average that emails are 1% likely to be spam. (In other words, if we average the predicted likelihoods of each individual email being spam, the result should be 1%.) If instead, the model’s average prediction is 20% likelihood of being spam, we can conclude that it exhibits prediction bias.
* Causes are: incomplete feature set, noisy data set, buggy pipeline, biased training sample, overly strong regularization
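Prediction bias per the definition above, sketched on invented numbers:

```python
# Prediction bias: average prediction minus average label.
def prediction_bias(predictions, labels):
    return sum(predictions) / len(predictions) - sum(labels) / len(labels)

# Invented example: average prediction 1% on a slice with no spam at all,
# so the bias is the full 0.01.
preds  = [0.01, 0.02, 0.01, 0.00]
labels = [0, 0, 0, 0]
print(round(prediction_bias(preds, labels), 3))  # 0.01
```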
You might be tempted to correct prediction bias by post-processing the learned model—that is, by adding a calibration layer that adjusts your model’s output to reduce the prediction bias. For example, if your model has +3% bias, you could add a calibration layer that lowers the mean prediction by 3%. However, adding a calibration layer is a bad idea for the following reasons:
* You’re fixing the symptom rather than the cause.
* You’ve built a more brittle system that you must now keep up to date.
* If possible, avoid calibration layers. Projects that use calibration layers tend to become reliant on them—using calibration layers to fix all their model’s sins. Ultimately, maintaining the calibration layers can become a nightmare.