Quick Navigation

1. DEEP LEARNING

A subset of machine learning that uses neural networks with many layers to learn increasingly abstract representations of data.

2. NEURAL NETWORK

A computational model inspired by the human brain, consisting of interconnected nodes (neurons) that process data.
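
As a rough sketch of the idea, here is a minimal PyTorch network; the sizes (784 inputs, 128 hidden units, 10 output classes) are arbitrary choices for illustration, not values taken from this glossary.

```python
import torch
import torch.nn as nn

# A tiny fully connected network; all layer sizes are made up.
model = nn.Sequential(
    nn.Linear(784, 128),  # input vector -> hidden layer
    nn.ReLU(),            # non-linear activation between layers
    nn.Linear(128, 10),   # hidden layer -> 10 class scores
)

x = torch.randn(1, 784)   # one dummy input vector
print(model(x).shape)     # torch.Size([1, 10])
```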

3. IMAGE CLASSIFICATION

The task of assigning labels to images based on their content, often using deep learning techniques.

4. TENSORFLOW

An open-source deep learning library developed by Google, widely used for building and training neural networks.

5. PYTORCH

An open-source machine learning library developed by Meta (formerly Facebook), known for its flexibility and ease of use in deep learning.

6. ACTIVATION FUNCTION

A mathematical function applied to a neuron's output, introducing non-linearity to the model.
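
A quick way to see the non-linearity is to apply two common activations, ReLU and sigmoid, to a few raw neuron outputs; the input values below are made up for illustration.

```python
import torch

z = torch.tensor([-2.0, 0.0, 3.0])  # raw neuron outputs (made-up values)
print(torch.relu(z))                # tensor([0., 0., 3.]): negatives clipped to 0
print(torch.sigmoid(z))             # tensor([0.1192, 0.5000, 0.9526]): squashed to (0, 1)
```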

7. LOSS FUNCTION

A measure of how well a neural network's predictions match the actual outcomes, guiding the training process.
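
As a small example, cross-entropy loss (a common choice for classification, though not the only one) compares raw class scores against a true label; the numbers below are invented.

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.tensor([[2.0, 0.5, 0.1]])  # raw scores for 3 classes (made up)
target = torch.tensor([0])                # the true class index
print(loss_fn(logits, target))            # ~0.32: low, since class 0 scores highest
```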

8. OVERFITTING

A modeling error in which a network fits the training data so closely, noise included, that it fails to generalize to new data.

9. DATA AUGMENTATION

Techniques used to artificially expand a training dataset by creating modified versions of existing data.
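
A minimal sketch using torchvision's transform API; the flip-and-crop combination and the 32x32 crop size are common choices for small images, not requirements.

```python
from torchvision import transforms

# Each epoch sees a slightly different version of every image.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),     # mirror images at random
    transforms.RandomCrop(32, padding=4),  # shift content by random crops
    transforms.ToTensor(),
])
```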

10. CONFUSION MATRIX

A table used to evaluate the performance of a classification model, showing true vs. predicted classifications.
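
A small worked example with scikit-learn; the labels are made up to show the layout, with rows as true classes and columns as predicted classes.

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 1, 2, 2, 2]   # actual classes (made up)
y_pred = [0, 1, 2, 2, 2, 1]   # model's predictions
print(confusion_matrix(y_true, y_pred))
# [[1 0 0]    row i, column j = count of class-i samples
#  [0 1 1]    that the model predicted as class j; the
#  [0 1 2]]   diagonal holds the correct predictions
```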

11. HYPERPARAMETER TUNING

The process of searching for good values of the settings that are not learned during training, such as the learning rate or batch size, to improve performance.
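
A naive grid-search sketch over two hyperparameters; train_and_validate is a hypothetical placeholder standing in for a real training run.

```python
import random

def train_and_validate(lr, batch_size):
    # Placeholder: in practice this would train a model with the
    # given settings and return its accuracy on the validation set.
    return random.random()

best = None
for lr in [1e-1, 1e-2, 1e-3]:          # learning rates to try
    for batch_size in [32, 64, 128]:   # batch sizes to try
        acc = train_and_validate(lr, batch_size)
        if best is None or acc > best[0]:
            best = (acc, lr, batch_size)
print(best)  # (best accuracy, best lr, best batch size)
```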

12. TRAINING SET

A subset of data used to train a model, helping it learn patterns and make predictions.

13. VALIDATION SET

A subset of data held out from training and used to tune hyperparameters and detect overfitting.

14. TEST SET

A separate subset of data used to evaluate the final model's performance after training.
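
One common (but not mandatory) way to carve out all three subsets is two successive splits, sketched here with scikit-learn on toy data.

```python
from sklearn.model_selection import train_test_split

X = list(range(100))               # toy inputs
y = [i % 10 for i in range(100)]   # toy labels

# 80% train, then split the remaining 20% into 10% validation / 10% test.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # 80 10 10
```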

15. NEURON

A basic unit of a neural network that receives input, processes it, and produces output.

16. LAYER

A group of neurons at the same depth in a neural network; data passes through successive layers, each transforming the previous layer's output.

17. OPTIMIZER

An algorithm that adjusts the weights of a neural network to minimize the loss function during training; the training-loop sketch after the BATCH SIZE entry shows one in action.

18. EPOCH

One complete pass through the entire training dataset during the training process.

19. BATCH SIZE

The number of training examples utilized in one iteration of model training.
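
The following sketch ties the last three entries together: the optimizer takes one step per batch, and one epoch is one full pass over the data. The model, data, and learning rate are toy values chosen only for illustration.

```python
import torch
import torch.nn as nn

# Toy data and model; all sizes here are made up.
X = torch.randn(256, 20)           # 256 samples, 20 features each
y = torch.randint(0, 3, (256,))    # random labels from 3 classes
model = nn.Linear(20, 3)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

batch_size = 32
for epoch in range(5):                      # 5 epochs = 5 full passes over X
    for i in range(0, len(X), batch_size):  # one iteration per batch
        xb, yb = X[i:i + batch_size], y[i:i + batch_size]
        loss = loss_fn(model(xb), yb)
        optimizer.zero_grad()   # clear gradients from the previous step
        loss.backward()         # compute new gradients
        optimizer.step()        # adjust weights to reduce the loss
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```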

20. REGULARIZATION

Techniques used to prevent overfitting by adding constraints to the model during training.
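
A brief sketch of two common regularizers in PyTorch, dropout inside the model and L2 weight decay in the optimizer; the rates shown are typical starting points, not rules.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes activations during training
    nn.Linear(64, 3),
)
# weight_decay adds an L2 penalty on the weights to the loss.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
```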

21. TRANSFER LEARNING

A method where a pre-trained model is fine-tuned on a new task, saving time and resources.
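
A minimal PyTorch sketch: load an ImageNet-pretrained ResNet-18 (using the torchvision 0.13+ weights API), freeze its backbone, and attach a new head; the 10-class output is an assumed target task.

```python
import torch.nn as nn
from torchvision import models

# Start from weights pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 10)  # new trainable head (10 classes assumed)
```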

22. NORMALIZATION

The process of scaling input data to improve the training speed and performance of a neural network.
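
A sketch of per-channel input normalization with torchvision; the mean and standard deviation values below are commonly quoted CIFAR-10 statistics, but treat them as assumptions and recompute them for your own data.

```python
from torchvision import transforms

# Shift each RGB channel to roughly zero mean and unit variance.
normalize = transforms.Normalize(
    mean=(0.4914, 0.4822, 0.4465),  # assumed per-channel means
    std=(0.2470, 0.2435, 0.2616),   # assumed per-channel standard deviations
)
```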

23. CIFAR-10

A widely used dataset in image classification tasks, containing 60,000 32x32 color images in 10 classes.
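
Loading the dataset's 50,000-image training split with torchvision (the remaining 10,000 images form the test split); the root directory is an arbitrary choice, and the first call downloads the data.

```python
from torchvision import datasets, transforms

train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
print(len(train_set), train_set[0][0].shape)  # 50000 torch.Size([3, 32, 32])
```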

24. PERFORMANCE METRICS

Quantitative measures used to evaluate the effectiveness of a model, such as accuracy, precision, recall, and loss.
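
A short example computing three common metrics with scikit-learn; the label vectors reuse the made-up values from the confusion-matrix example above.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 1, 1, 2, 2, 2]   # same made-up labels as above
y_pred = [0, 1, 2, 2, 2, 1]
print(accuracy_score(y_true, y_pred))                    # 0.666... (4 of 6 correct)
print(precision_score(y_true, y_pred, average="macro"))  # averaged over classes
print(recall_score(y_true, y_pred, average="macro"))
```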