TensorFlow and Python: A Practical Beginner's Guide

TensorFlow is one of the most widely used machine learning frameworks in the world, and Python is its native language. Whether you want to classify images, predict prices, or build a language model, TensorFlow gives you the tools to do it — and this guide will show you how to get started from scratch.

Google originally developed TensorFlow inside its Brain team to power internal research. In 2015, the company released it as open source, and it quickly became a foundational tool in both academia and industry. Today it underpins everything from recommendation systems to medical imaging tools. Learning TensorFlow alongside Python gives you a direct path into applied machine learning — not just theory, but code that runs and produces real results.

What Is TensorFlow?

TensorFlow is an open-source platform for numerical computation and machine learning. At its core, it provides a way to define mathematical operations as a graph, then execute those operations efficiently across CPUs, GPUs, and TPUs. The name itself is descriptive: data flows through the graph as tensors, which are multi-dimensional arrays similar to NumPy arrays but with additional properties that make them suited for large-scale computation.

The framework has two major eras. TensorFlow 1.x required you to build a computational graph first and then run it inside a special session — a pattern that was powerful but difficult to debug. TensorFlow 2.x, introduced in 2019, moved to eager execution by default, meaning operations run immediately and you can inspect values on the fly just like regular Python code. This made the framework dramatically easier to learn and use.

Note

TensorFlow is licensed under the Apache 2.0 open-source license and maintained primarily by Google, with contributions from a large global community. It runs on Windows, macOS, and Linux, and supports Python 3.10 through 3.13 as of the latest release.

Understanding Tensors

Before writing any model code, it helps to understand what a tensor actually is. Think of it as a generalization of familiar data structures. A scalar (single number) is a rank-0 tensor. A vector (a list of numbers) is a rank-1 tensor. A matrix (a grid of numbers) is a rank-2 tensor. A rank-3 tensor adds another dimension — for example, a color image with height, width, and three color channels.

TensorFlow tensors behave a lot like NumPy arrays. You can do math on them, slice them, reshape them, and convert between the two formats easily. The key difference is that TensorFlow tensors can be automatically sent to a GPU for faster computation, and TensorFlow tracks them through a computation graph for automatic differentiation — the mathematical foundation of training neural networks.

import tensorflow as tf

# A rank-0 tensor (scalar)
scalar = tf.constant(7)

# A rank-1 tensor (vector)
vector = tf.constant([1.0, 2.0, 3.0])

# A rank-2 tensor (matrix)
matrix = tf.constant([[1, 2], [3, 4], [5, 6]])

# Check shape and data type
print(matrix.shape)   # (3, 2)
print(matrix.dtype)   # <dtype: 'int32'>

# Basic math works just like NumPy
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
print(tf.add(a, b))   # [5. 7. 9.]

# Convert to NumPy
numpy_array = matrix.numpy()
print(type(numpy_array))  # <class 'numpy.ndarray'>
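The automatic differentiation mentioned above can be seen directly with tf.GradientTape, which records operations on tensors so gradients can be computed afterward. A minimal sketch (the variable name and values are just for illustration):

```python
import tensorflow as tf

# y = x^2, so dy/dx = 2x; at x = 3.0 the gradient is 6.0
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x

grad = tape.gradient(y, x)
print(grad.numpy())  # 6.0
```

This same mechanism is what fit() uses under the hood: it computes gradients of the loss with respect to every trainable variable in the model.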

Pro Tip

Use tf.constant() when the value will not change, and tf.Variable() when the value needs to be updated during training — such as weights and biases in a neural network.
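To make the distinction concrete, here is a small sketch (the names and the 0.1 step size are just for illustration) of a variable being updated in place, the way an optimizer updates weights during training:

```python
import tensorflow as tf

w = tf.Variable(1.0)   # trainable state, e.g. a weight
c = tf.constant(2.0)   # fixed value; cannot be assigned to

# Optimizers update variables in place with operations like assign_sub
w.assign_sub(0.1 * c)  # w = w - 0.1 * c
print(w.numpy())       # 0.8
```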

Installing TensorFlow

Installation is straightforward via pip. TensorFlow requires Python 3.10 or higher. If you plan to use a GPU, you will also need a compatible NVIDIA GPU with CUDA 12.2 support, though for learning purposes a CPU-only installation works fine.

# Install the latest TensorFlow (CPU) from your shell
pip install tensorflow

# Then, in a Python session, verify the installation
import tensorflow as tf
print(tf.__version__)  # 2.21.0 as of March 2026

# Check if a GPU is available
print(tf.config.list_physical_devices('GPU'))

Note

If you are on macOS with Apple Silicon (M1/M2/M3/M4), the standard tensorflow package supports it natively; the older separate tensorflow-macos package is deprecated. For GPU acceleration through Metal, also install the tensorflow-metal plugin. Google Colab is another great option: it gives you free GPU access and has TensorFlow pre-installed, so you can start experimenting without any local setup at all.

Building Models with Keras

Keras is TensorFlow's high-level API for building and training neural networks. It was originally a standalone library, but since TensorFlow 2.0 it has been tightly integrated as tf.keras. Starting with Keras 3.0, it became a fully independent multi-backend library that can also run on JAX and PyTorch, but for TensorFlow beginners the tf.keras interface is where you will spend most of your time.

Keras models are built from layers. You stack layers together, compile the model with a loss function and optimizer, and then call fit() to train on your data. The API is deliberately simple — you can describe a complete neural network in ten lines of code.

The Sequential API

The Sequential API is the most beginner-friendly approach. You add layers one at a time in order, and Keras connects them automatically. This works well for straightforward feedforward networks.

import tensorflow as tf
from tensorflow import keras

# Build a simple neural network with Sequential
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(1, activation='sigmoid')
])

# Summarize the architecture
model.summary()

The Functional API

When your model needs branches, shared layers, or multiple inputs and outputs, the Functional API gives you more control. You define tensors explicitly and pass them through layers, which makes the data flow visible in your code.

import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(20,))
x = keras.layers.Dense(64, activation='relu')(inputs)
x = keras.layers.Dropout(0.3)(x)
x = keras.layers.Dense(32, activation='relu')(x)
outputs = keras.layers.Dense(1, activation='sigmoid')(x)

model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
    optimizer='adam',
    loss='binary_crossentropy',
    metrics=['accuracy']
)

Your First Neural Network

The best way to learn TensorFlow is to train an actual model. Below is a complete, runnable example that trains a classifier on the MNIST handwritten digit dataset — a classic benchmark that ships with Keras so there is nothing extra to download.

import tensorflow as tf
from tensorflow import keras
import numpy as np

# Load the MNIST dataset (handwritten digits 0-9)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Normalize pixel values from [0, 255] to [0, 1]
x_train = x_train.astype('float32') / 255.0
x_test  = x_test.astype('float32') / 255.0

# Flatten 28x28 images into 784-element vectors
x_train = x_train.reshape(-1, 784)
x_test  = x_test.reshape(-1, 784)

# Build the model
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')  # 10 output classes
])

# Compile
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Train for 5 epochs
history = model.fit(
    x_train, y_train,
    epochs=5,
    batch_size=64,
    validation_split=0.1,
    verbose=1
)

# Evaluate on test data
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f'Test accuracy: {test_acc:.4f}')  # Typically around 0.98

Running this will train a model that correctly identifies handwritten digits with roughly 98% accuracy — in under a minute on most laptops. Each epoch is one complete pass through the training data. The Dropout layer randomly zeroes out 20% of neuron outputs during training, which reduces overfitting and helps the model generalize to new data it has not seen before.
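You can observe the Dropout behavior described above directly: in training mode the layer zeroes a random subset of inputs and rescales the survivors by 1 / (1 - rate) so the expected sum is preserved, while in inference mode it passes values through unchanged. A quick sketch:

```python
import tensorflow as tf

drop = tf.keras.layers.Dropout(0.2)
x = tf.ones((1, 8))

# Training mode: roughly 20% of entries zeroed, survivors scaled by 1.25
print(drop(x, training=True).numpy())

# Inference mode: input passes through untouched
print(drop(x, training=False).numpy())  # all ones
```

This is why you never need to remove Dropout layers before calling evaluate() or predict(): Keras switches them off automatically outside of training.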

Pro Tip

Save your trained model with model.save('my_model.keras') and reload it later with keras.models.load_model('my_model.keras'). The .keras format is the recommended format as of Keras 3 — it is more portable and reliable than the older SavedModel format for most use cases.
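As a sketch of that round trip (the tiny untrained model and temporary path here are just for illustration), a saved and reloaded model should produce identical predictions:

```python
import os
import tempfile

import numpy as np
import tensorflow as tf
from tensorflow import keras

# A tiny model is enough to demonstrate the save/load round trip
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(2)])

path = os.path.join(tempfile.mkdtemp(), 'my_model.keras')
model.save(path)
reloaded = keras.models.load_model(path)

x = np.ones((1, 4), dtype='float32')
print(np.allclose(model.predict(x, verbose=0),
                  reloaded.predict(x, verbose=0)))  # True
```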

What Is New in TensorFlow 2.x

TensorFlow has moved fast since the 2.0 release. The most recent stable release as of March 2026 is TensorFlow 2.21.0, published on March 6, 2026. Here are the developments worth knowing about as a learner.

Eager Execution by Default

The single biggest change from TensorFlow 1.x is eager execution. Operations now run immediately, line by line, which makes the code feel like regular Python. You can print tensors, set breakpoints, and debug exactly as you would with any other Python program. The old session-based approach (tf.Session()) is no longer part of the TensorFlow 2 API; it survives only behind the tf.compat.v1 compatibility layer for legacy code.
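A short sketch of the contrast: eager code runs line by line, while the tf.function decorator lets you opt back into graph compilation where performance matters (the function name here is just for illustration):

```python
import tensorflow as tf

# Eager execution: results are available immediately
x = tf.constant([1.0, 2.0])
print((x * 2).numpy())  # [2. 4.]

# Graph compilation on demand: traced once, then run as an optimized graph
@tf.function
def dot(a, b):
    return tf.reduce_sum(a * b)

print(dot(x, x).numpy())  # 5.0
```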

LiteRT Replacing TensorFlow Lite

TensorFlow Lite — the on-device inference runtime for mobile and edge hardware — is being moved to a new independent repository called LiteRT. The tf.lite module is being deprecated, and future TensorFlow Python packages will not include it. If you are building for Android, iOS, or embedded systems, you will want to track the LiteRT project directly rather than depending on tf.lite in your code.

Heads Up

If your project uses tf.lite, plan your migration to LiteRT. The deprecation is active and the module will be removed from future TensorFlow Python packages. The new APIs are available in Kotlin and C++ today.

NumPy 2.0 as the Default

TensorFlow now compiles with NumPy 2.0 by default, aligning with where the broader scientific Python ecosystem is heading. If you have existing code that relied on NumPy 1.x behaviors, it is worth reviewing any array operations for compatibility.
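One concrete compatibility check, as a sketch: NumPy 2.0 removed legacy dtype aliases such as np.float_, so older code relying on them needs the explicit dtype names:

```python
import numpy as np

major = int(np.__version__.split('.')[0])
if major >= 2:
    # np.float_ was removed in NumPy 2.0; use np.float64 instead
    assert not hasattr(np, 'float_')

print(np.float64(1.5))  # works on both NumPy 1.x and 2.x
```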

Hermetic CUDA Support

For GPU users, TensorFlow has added hermetic CUDA support. Rather than depending on your locally installed CUDA toolkit, the build system downloads a specific, pinned version of CUDA automatically. This produces more reproducible builds and eliminates one of the more frustrating setup problems beginners encounter when configuring GPU environments.

Keras 3 and Multi-Backend Support

Since Keras 3.0, Keras has become backend-agnostic. While tf.keras continues to work, Keras can now also run computations on JAX or PyTorch under the hood. For TensorFlow users this is mostly transparent — your Keras code works the same — but it signals a broader shift where model code can be written once and deployed across frameworks. Release announcements and updates for Keras 3 are now published on keras.io directly.

"TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources." — TensorFlow GitHub

Key Takeaways

  1. Tensors are the foundation. Everything in TensorFlow flows through multi-dimensional arrays called tensors. Getting comfortable creating, reshaping, and doing math on tensors will make the rest of the framework click.
  2. Keras makes model-building accessible. The Sequential and Functional APIs let you build, compile, and train neural networks with very little boilerplate. Start with Sequential for simple models and graduate to Functional when your architecture becomes more complex.
  3. Eager execution changed everything. TensorFlow 2.x feels like writing regular Python. You no longer need to think in terms of sessions or static graphs for most use cases.
  4. TF Lite is moving to LiteRT. If you are targeting mobile or edge devices, start with LiteRT rather than the deprecated tf.lite module.
  5. The current release is 2.21.0. Published March 6, 2026, it supports Python 3.10 through 3.13 and requires CUDA 12.2 for GPU use. Always install into a virtual environment to keep dependencies clean.

TensorFlow has a steep learning curve if you try to absorb everything at once, but the Keras layer makes it possible to be productive quickly. Start by running the MNIST example above, then explore the official TensorFlow tutorials to work through image classification, text generation, and time series forecasting with real datasets. The gap between understanding the basics and building something genuinely useful is shorter than it looks.
