Journey in Machine Learning: Diving into Supervised and Unsupervised Learning

I found a great course on Coursera from DeepLearning.AI: Supervised Machine Learning: Regression and Classification. Here are my notes from week 1.

Machine Learning is an expansive field that teaches computers to learn from data, making decisions and predictions based on the information fed into them. At its core are three primary learning paradigms: Supervised Learning, Unsupervised Learning, and Reinforcement Learning (the last of which is out of scope for this course).

Supervised Learning: Learning with Guidance

Supervised learning is akin to learning with a teacher. The “teacher” provides the algorithm with the correct answers (labels) during training, guiding it to make accurate predictions. This form of learning can be broken down into two major types:

  • Regression: Predicting a continuous quantity, that is, a numerical value that can take on any amount within a range. For instance, estimating the price of a house from its features (size, location, etc.) is a regression task. The goal is to predict a number from an infinite set of possibilities.
  • Classification: Predicting a category or class from a finite set of options, such as determining whether a tumor is benign or malignant based on its characteristics, or flagging an email as spam or not. Classification algorithms aim to place each input into a specific category.

Both regression and classification deal with labeled data, where the input data (features) are paired with the correct output (target).
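
To make the two task types concrete, here is a minimal sketch using scikit-learn (not something from the course itself, just a convenient illustration). The house prices and tumor sizes below are made up purely for demonstration.

```python
# A minimal sketch of supervised learning with scikit-learn.
# The tiny datasets below are made up purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict a continuous value (house price) from a feature (size in sq ft).
sizes = np.array([[1000], [1500], [2000], [2500]])   # input features
prices = np.array([200, 290, 410, 500])              # labels (price in $1000s)
reg = LinearRegression().fit(sizes, prices)
print(reg.predict([[1800]]))   # predicted price for an unseen house

# Classification: predict a discrete class (0 = benign, 1 = malignant) from tumor size.
tumor_sizes = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
labels = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(tumor_sizes, labels)
print(clf.predict([[2.8]]))    # predicted class for a new tumor size
```

In both cases the pattern is the same: the algorithm sees features paired with the correct answers during training, then predicts answers for new inputs.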

Unsupervised Learning: Discovery in Data

In contrast to supervised learning, unsupervised learning involves analyzing and modeling data that hasn’t been labeled. The algorithm tries to find patterns or inherent structures within the data. Key areas include:

  • Clustering: Grouping similar data points together based on their features, such as organizing news articles on similar topics or segmenting customers with similar behaviors (a short sketch follows this list).
  • Anomaly Detection: Identifying unusual data points, which can be crucial for fraud detection or monitoring system health.
  • Dimensionality Reduction: Simplifying data without losing its essential characteristics, making it easier to analyze and visualize.
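
As an illustration of clustering (the other unsupervised tasks follow the same "no labels given" pattern), here is a minimal sketch using scikit-learn's KMeans on made-up 2-D points. Only the inputs are provided; the algorithm discovers the groups on its own.

```python
# A minimal clustering sketch: no labels are given, KMeans discovers the groups.
# The 2-D points below are made up purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([
    [1.0, 1.1], [1.2, 0.9], [0.8, 1.0],   # one loose group
    [5.0, 5.2], [5.1, 4.8], [4.9, 5.0],   # another loose group
])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # the discovered group centers
```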

The Learning Process

In supervised learning, the learning process starts from a training set, which pairs input features (x) with the corresponding output targets (y). A learning algorithm processes this data to produce a model, represented as a function f, which takes an input x and produces a prediction ŷ = f(x). The goal is for ŷ to be as close as possible to the actual output y, improving the model's accuracy.
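
For the single-feature linear regression covered in week 1, the model f is just a straight line, f(x) = w·x + b. Here is a minimal sketch; the parameter values w and b are invented for illustration, since in practice they are chosen by the learning algorithm.

```python
# A minimal sketch of the model f for univariate linear regression:
# f(x) = w * x + b, producing a prediction y_hat for an input x.
# The parameter values below are made up for illustration.
import numpy as np

w, b = 0.2, 10.0                          # model parameters (made-up values)

def predict(x, w, b):
    """Return y_hat = f(x) = w * x + b."""
    return w * x + b

x_train = np.array([1000, 1500, 2000])    # input features (e.g. house sizes)
y_hat = predict(x_train, w, b)            # model predictions
print(y_hat)
```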

Cost Function

The performance of a model is evaluated using a cost function, which measures how well the model's predictions match the actual data. In regression, for example, the cost function computes the average squared difference between the predicted and actual values across all training examples. The model's parameters are then adjusted to minimize this cost, optimizing the model's performance.
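
Concretely, the squared error cost for linear regression can be written J(w, b) = (1 / (2m)) · Σ (f(xᵢ) − yᵢ)². Here is a minimal NumPy sketch; the training data and parameter values are made up for illustration, and the extra factor of 1/2 is a common convention (used in this course) that simplifies later derivative calculations without changing which parameters minimize the cost.

```python
# A minimal sketch of the squared error cost J(w, b) for linear regression:
# J(w, b) = (1 / (2 * m)) * sum_i (f(x_i) - y_i) ** 2
# The data and parameter values are made up purely for illustration.
import numpy as np

def compute_cost(x, y, w, b):
    """Halved average squared difference between predictions and targets."""
    m = x.shape[0]                 # number of training examples
    y_hat = w * x + b              # model predictions f(x)
    return np.sum((y_hat - y) ** 2) / (2 * m)

x_train = np.array([1.0, 2.0, 3.0])        # input features
y_train = np.array([300.0, 500.0, 700.0])  # targets
print(compute_cost(x_train, y_train, w=200.0, b=100.0))  # cost for these parameters
```

With w = 200 and b = 100 the line passes exactly through these made-up points, so the cost is 0; worse-fitting parameters would give a larger cost, which is what the learning algorithm works to reduce.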

Will have more updates after my next week of learning!
