Key Components of Deep Learning

Overview of key components

Objective Function (Loss/Cost Function)

= quantifies how far the neural network's predictions deviate from the expected outputs; training aims to minimize this value.
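As a concrete illustration, here is a minimal sketch of one common objective function, mean squared error; the function name `mse_loss` and the sample numbers are made up for this example.

```python
import numpy as np

def mse_loss(predictions, targets):
    """Mean squared error: the average squared difference
    between predictions and expected outputs."""
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return np.mean((predictions - targets) ** 2)

# Differences are (-0.5, 0.5, 0.0), so the loss is (0.25 + 0.25 + 0) / 3.
loss = mse_loss([2.5, 0.0, 2.0], [3.0, -0.5, 2.0])
```

A perfect match gives a loss of 0; larger mismatches are penalized quadratically.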

Learning Rule (Optimization Algorithm)

= dictates how the model updates its parameters (weights and biases) to minimize the objective function.
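The most basic learning rule is gradient descent: repeatedly nudge each parameter against the gradient of the objective. A minimal sketch on a toy one-parameter loss (the function name and numbers are illustrative, not from the source):

```python
def gradient_descent(w0, lr=0.1, steps=100):
    """Minimize the toy loss L(w) = (w - 3)^2 by gradient descent.

    dL/dw = 2 * (w - 3); each step moves w a little toward
    the minimum at w = 3, scaled by the learning rate lr."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)  # analytic gradient of the loss
        w -= lr * grad          # the parameter update rule
    return w

w_final = gradient_descent(0.0)  # converges close to 3.0
```

Real optimizers (SGD with momentum, Adam) refine this same update with extra state, but the core idea is identical: step parameters downhill on the objective function.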

Architecture (Model Structure)

= the layout and connectivity of the network: how many layers, how many neurons per layer, and how the layers are connected.

| Architecture | Input Type | Handles Sequence? | Task Type | Supervised? | Typical Use Cases |
| --- | --- | --- | --- | --- | --- |
| Feedforward Neural Networks (FNNs, simple multi-layer perceptrons) | Tabular / vector data | No | Discriminative (classification, regression) | Yes | Credit scoring, house price prediction |
| Convolutional Neural Networks (CNNs) | Spatial (images, videos) | No | Discriminative / Generative | Yes (some unsupervised) | Image classification, segmentation, image generation |
| Recurrent Neural Networks (RNNs) | Sequential (text, time series) | Yes | Discriminative | Yes | Sentiment analysis, language modeling, stock prediction |
| Transformers | Sequential (text, audio) | Yes (via attention) | Discriminative & Generative | Yes / No | Translation, summarization, protein folding, chatbots |
| Autoencoders | Any (images, text, signals) | No / Yes (with RNNs) | Generative / Feature Learning | No | Denoising, anomaly detection, dimensionality reduction |
| Variational Autoencoders (VAEs) | Any (images, text, signals) | No / Yes (with RNNs) | Generative / Feature Learning / Probabilistic Modeling | No | Generative modeling, smooth latent space, data generation |
| Generative Adversarial Networks (GANs) | Any (often images) | No / Yes | Generative | No | Image generation, data augmentation, synthetic data creation |
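To make "architecture" concrete, here is a minimal sketch of the simplest entry in the table, a feedforward network (MLP) with one hidden layer; the layer sizes and random weights are arbitrary placeholders, and `forward` is a made-up name for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """ReLU activation: zeroes out negative values."""
    return np.maximum(0.0, x)

# Architecture: 4 inputs -> 8 hidden units (ReLU) -> 1 output (linear).
W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def forward(x):
    h = relu(x @ W1 + b1)  # hidden layer: linear map + nonlinearity
    return h @ W2 + b2     # output layer: linear map only

x = rng.normal(size=(5, 4))  # a batch of 5 examples with 4 features each
y = forward(x)               # y.shape is (5, 1)
```

Changing the architecture means changing this structure: more layers, wider layers, or different connectivity (convolutions, recurrence, attention), while the loss and optimizer can stay the same.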

Initialization (Starting Point for Learning)

= how the network's weights and biases are set before training begins; a poor starting point can cause vanishing or exploding gradients, while a well-scaled one keeps early training stable.

Environment (Training Data and Learning Context)

= defines the input data, labels, and context the model is exposed to during learning.
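In the supervised setting, the "environment" is typically a labeled dataset split so that progress is measured on data the model has not trained on. A minimal sketch using synthetic data (the arrays and the 80/20 split are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeled dataset: 100 examples, 3 features each.
X = rng.normal(size=(100, 3))            # inputs
y = (X.sum(axis=1) > 0).astype(int)      # labels (synthetic rule)

# Shuffle, then hold out 20% as a validation set.
idx = rng.permutation(len(X))
split = int(0.8 * len(X))
train_idx, val_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
```

The quality, quantity, and distribution of this data bound what the model can learn, regardless of how good the loss, optimizer, architecture, and initialization are.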