Understand the core concepts of ML
Algorithm: A set of rules or instructions followed by a computer to solve a problem or perform a computation. In ML, algorithms learn patterns from data to make predictions or decisions.
Artificial intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems. ML is a subset of AI.
Backpropagation: An algorithm used to train neural networks by computing the gradient of the loss function with respect to the weights and biases, propagating error backward through the layers.
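Backpropagation is just the chain rule applied layer by layer. A minimal sketch for a single sigmoid neuron (illustrative only, with a made-up weight, input, and target; the analytic gradient is checked against a finite difference):

```python
import math

# Single neuron: y = sigmoid(w*x), squared-error loss L = (y - t)^2.
# Chain rule: dL/dw = 2*(y - t) * y*(1 - y) * x.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_w(w, x, t):
    y = sigmoid(w * x)              # forward pass
    return 2 * (y - t) * y * (1 - y) * x  # backward pass (chain rule)

g = grad_w(0.5, 1.0, 1.0)

# Sanity check: a numerical finite-difference gradient should agree.
h = 1e-5
loss = lambda w: (sigmoid(w * 1.0) - 1.0) ** 2
numeric = (loss(0.5 + h) - loss(0.5 - h)) / (2 * h)
```

Real networks repeat this backward pass through every layer, reusing intermediate results from the forward pass.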
Bias: The error introduced by approximating a real-world problem, which may be complex, with a simplified model. It can also refer to unfair prejudice in data or algorithms.
Classification: A supervised learning task where the goal is to assign input data to one of several predefined categories or classes.
Clustering: An unsupervised learning task that groups a set of objects so that objects in the same group (a cluster) are more similar to each other than to those in other groups.
Cross-validation: A technique for estimating a model's performance on unseen data by partitioning the data into multiple folds, training on all but one fold, and testing on the held-out fold, rotating until every fold has served as the test set.
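The fold rotation can be sketched with a tiny pure-Python index splitter (illustrative; libraries such as scikit-learn provide production versions):

```python
# k-fold splitting: every index appears in exactly one test fold.
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k folds; yield (train, test) index lists."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

splits = list(k_fold_indices(10, 5))  # 5 folds over 10 data points
```

Averaging the model's score across the k test folds gives a more stable performance estimate than a single train/test split.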
Decision tree: A tree-like model of decisions and their possible consequences. Used in classification and regression, it splits data based on feature values.
Deep learning: A subfield of machine learning based on artificial neural networks with multiple layers (deep architectures). It excels at learning complex patterns from large datasets.
Feature engineering: The process of using domain knowledge to extract features from raw data, which can improve the performance of machine learning models.
Gradient descent: An iterative optimization algorithm for finding the minimum of a function. In ML, it minimizes the loss function by repeatedly adjusting model parameters in the direction opposite the gradient.
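A minimal sketch of the update rule on a one-dimensional toy loss f(x) = (x - 3)^2 (the learning rate and step count are arbitrary choices for illustration):

```python
# Gradient descent on f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
def gradient_descent(start, lr=0.1, steps=100):
    x = start
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of the loss at x
        x -= lr * grad       # step in the direction opposite the gradient
    return x

x_min = gradient_descent(0.0)  # converges toward the minimum at x = 3
```

In real models, x is a vector of weights and the gradient comes from backpropagation, but the update rule is the same.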
K-means: A popular clustering algorithm that partitions n observations into k clusters, assigning each observation to the cluster with the nearest mean (the cluster centroid).
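The assign-then-recenter loop can be shown on one-dimensional toy data (a simplified sketch; real implementations handle multiple dimensions, random initialization, and convergence checks):

```python
# 1-D k-means: assign each point to its nearest center, then recompute means.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious groups around 1.0 and 9.0:
centers = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0])
```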
Loss function: A function that measures the error between a model's predicted output and the actual output. The goal of training is to minimize this function.
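One common choice for regression is mean squared error, sketched here in pure Python on made-up values:

```python
# Mean squared error: average of squared differences between targets and predictions.
def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

error = mse([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])  # (0 + 0.25 + 1.0) / 3
```

Squaring penalizes large errors more heavily than small ones, which is often, but not always, the desired behavior.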
Machine learning (ML): A type of artificial intelligence (AI) that allows systems to learn and improve from experience without being explicitly programmed. ML algorithms use data to find patterns and make predictions.
Model evaluation: The process of assessing the performance and accuracy of a trained machine learning model using various metrics and techniques.
Neural network: A computational model inspired by the structure and function of biological neural networks. It consists of interconnected nodes (neurons) organized in layers and is used for complex pattern recognition.
Overfitting: A phenomenon in which a model learns the training data too well, including its noise and outliers, leading to poor generalization on new, unseen data.
Regression: A supervised learning task where the goal is to predict a continuous numerical value from input features.
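The simplest regression model is a straight line fit by least squares; a pure-Python sketch using the closed-form solution on toy data that lies exactly on y = 2x + 1:

```python
# Closed-form least-squares fit of y = a*x + b.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # data lies on y = 2x + 1
```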
Regularization: Techniques that prevent overfitting by adding a penalty term to the loss function, discouraging overly complex models.
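For example, L2 (ridge) regularization adds the sum of squared weights, scaled by a strength `lam`, to the base loss; a minimal sketch (the function name and toy values are illustrative):

```python
# L2-regularized loss: MSE plus lam * sum of squared weights.
def ridge_loss(y_true, y_pred, weights, lam):
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    penalty = lam * sum(w ** 2 for w in weights)
    return mse + penalty

# Perfect predictions, but a large weight still incurs a penalty:
loss = ridge_loss([1.0, 2.0], [1.0, 2.0], weights=[3.0], lam=0.1)
```

Larger `lam` pushes the optimizer toward smaller weights, trading a little training error for better generalization.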
Supervised learning: A type of ML in which the algorithm learns from a labeled dataset, meaning each data point is paired with its correct output or label.
Training data: The dataset used to train a machine learning model. The model learns patterns and relationships from this data.
Underfitting: A phenomenon in which a model is too simple to capture the underlying structure of the data, leading to poor performance on both training and unseen data.
Unsupervised learning: A type of ML in which the algorithm learns from an unlabeled dataset, identifying patterns and structures without predefined outcomes.
Variance: The amount by which a model's predictions would change if it were trained on a different training dataset. High variance can lead to overfitting.