Combining two or more lists into one is an everyday operation in Python, yet the language offers at least seven distinct ways to do it. Each carries different implications for memory, performance, mutability, and readability. This article walks through every major approach, explains exactly what happens under the hood, and tells you when to reach for each one.
Python's list type is one of the most flexible data structures in any mainstream programming language. According to the official Python documentation, a list is a "mutable sequence, typically used to store collections of homogeneous items" — though in practice Python lists can and do hold mixed types. Because they are mutable and ordered, joining multiple lists together is a routine task. The problem is that Python provides so many ways to do it that beginners often pick one at random and never learn the tradeoffs. That is what this article addresses directly.
What List Concatenation Actually Means
Concatenation in the general sense means joining sequences end to end. When two lists are concatenated, the result is a new sequence that contains all the elements of the first list followed by all the elements of the second. The key word there is "new." In Python, the + operator and several other approaches produce a brand-new list object in memory, leaving the originals untouched. Other approaches, like extend(), modify one of the originals in place. The distinction matters enormously when lists are large, when you need the originals to remain unchanged, or when multiple references point to the same object.
Python's data model draws a precise line between operations that return new objects and operations that mutate existing ones. The official language reference describes this as the difference between "immutable" operations and "in-place" operations. Lists support both, and understanding which category each concatenation method falls into is the foundation of everything that follows.
Throughout this article, "concatenation" refers specifically to combining lists. The term also applies to strings and other sequences in Python, but list behavior has its own specific rules that do not always match string behavior.
The + Operator
The + operator is the first thing most Python learners reach for, and for good reason: it reads exactly like addition and produces an obvious, predictable result.
a = [1, 2, 3]
b = [4, 5, 6]
c = a + b
print(c) # [1, 2, 3, 4, 5, 6]
print(a) # [1, 2, 3] -- unchanged
print(b) # [4, 5, 6] -- unchanged
The + operator calls the __add__ dunder method on the list object. Internally, Python allocates a new list, copies all pointers from the first list into it, then copies all pointers from the second list. The result is always a new object. You can verify this with id():
a = [1, 2, 3]
b = [4, 5, 6]
c = a + b
print(id(a) == id(c)) # False -- c is a new object
This approach is clean and readable, but it carries a real cost when lists are large. Because Python must allocate a new block of memory and copy every element, the time complexity of a + b is O(n + m) where n and m are the lengths of the two lists. Memory usage also peaks at the combined size during the copy. For two lists each containing a million integers, that copy is noticeable.
The + operator can also be chained across multiple lists in a single expression:
result = [1, 2] + [3, 4] + [5, 6]
print(result) # [1, 2, 3, 4, 5, 6]
Each + in that chain creates an intermediate list. Chaining three lists with + creates two intermediate lists before yielding the final result. For a handful of short lists this is inconsequential; for many large lists it can become a bottleneck.
The + operator only works between two lists. Attempting [1, 2] + (3, 4) raises a TypeError: can only concatenate list (not "tuple") to list. If you need to join a list with a tuple or another iterable, you must convert it first or use a different method.
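A minimal sketch of the conversion workaround when the right-hand operand is a tuple:

```python
a = [1, 2]
b = (3, 4)

# Converting the tuple to a list first makes + legal again:
result = a + list(b)
print(result)  # [1, 2, 3, 4]
```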
The extend() Method and += Augmented Assignment
Where + always builds something new, extend() modifies the list it is called on. It appends all elements from the iterable you pass as an argument directly into the existing list object.
a = [1, 2, 3]
b = [4, 5, 6]
a.extend(b)
print(a) # [1, 2, 3, 4, 5, 6]
print(b) # [4, 5, 6] -- unchanged
The Python documentation describes extend() as equivalent to a[len(a):] = iterable, which is a slice assignment that appends all items. The critical implementation detail is that CPython's list_extend function pre-allocates space for the incoming items in a single resize operation before copying them in. This means extend() is faster than calling append() in a loop because it avoids repeated reallocation.
"Extend the list by appending all the items from the iterable. Equivalent to a[len(a):] = iterable." — Python 3 Documentation, docs.python.org/3/tutorial/datastructures.html
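The documented slice-assignment equivalence is easy to verify directly; a short sketch:

```python
a = [1, 2, 3]
b = [4, 5, 6]

# Slice assignment at the end of the list appends every item,
# exactly as the documentation describes for extend():
a[len(a):] = b
print(a)  # [1, 2, 3, 4, 5, 6]

# extend() produces the same result on a fresh list:
c = [1, 2, 3]
c.extend(b)
print(a == c)  # True
```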
A closely related syntax is the += augmented assignment operator. On lists, a += b calls __iadd__, which in turn calls extend() internally. The in-place nature of this is easy to miss:
a = [1, 2, 3]
b = [4, 5, 6]
original_id = id(a)
a += b
print(id(a) == original_id) # True -- same object, mutated in place
print(a) # [1, 2, 3, 4, 5, 6]
Contrast this with the + operator:
a = [1, 2, 3]
b = [4, 5, 6]
original_id = id(a)
a = a + b
print(id(a) == original_id) # False -- a now points to a new object
This difference matters in code that passes lists around or holds references to them. If another variable holds a reference to a before the += call, both variables will see the updated contents afterward, because they both point to the same underlying object. This is a common source of bugs.
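A short sketch of that aliasing behavior, contrasting += with rebinding through +:

```python
a = [1, 2, 3]
alias = a          # a second reference to the same list object
a += [4, 5, 6]     # mutates in place via __iadd__
print(alias)       # [1, 2, 3, 4, 5, 6] -- the alias sees the change

a = [1, 2, 3]
alias = a
a = a + [4, 5, 6]  # rebinds a to a brand-new object
print(alias)       # [1, 2, 3] -- the alias still points at the original
```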
extend() accepts any iterable, not just lists. You can extend a list with a tuple, a range, a generator, a set, or any other object that Python can iterate over. This makes it significantly more flexible than the + operator in real-world code where the data sources are not always the same type.
a = [1, 2, 3]
a.extend(range(4, 7)) # from a range
a.extend((7, 8, 9)) # from a tuple
a.extend({10, 11, 12}) # from a set (order not guaranteed)
print(a) # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] (set order may vary)
Unpacking with the * Operator
Python 3.5 introduced the ability to use the * unpacking operator inside list literals, a feature described in PEP 448 (Additional Unpacking Generalizations). This syntax is elegant and surprisingly powerful.
a = [1, 2, 3]
b = [4, 5, 6]
c = [*a, *b]
print(c) # [1, 2, 3, 4, 5, 6]
Like the + operator, this always creates a new list. Where it excels over + is in handling multiple iterables of different types in a single, readable expression:
a = [1, 2]
b = (3, 4)
c = range(5, 7)
d = [*a, *b, *c]
print(d) # [1, 2, 3, 4, 5, 6]
PEP 448 was authored by Joshua Landau and accepted for Python 3.5. The motivation behind it was to remove the awkward asymmetry where *args and **kwargs were allowed inside function calls but not inside collection literals. The result is that any iterable can be unpacked directly into a list literal without needing to call list() or extend() first.
"This PEP proposes extended usages of the * iterable unpacking operator and ** dictionary unpacking operators to allow unpacking in more positions." — PEP 448, peps.python.org/pep-0448/
The * unpacking syntax also integrates cleanly into situations where you want to insert elements between two merged lists:
a = [1, 2, 3]
b = [7, 8, 9]
combined = [*a, 4, 5, 6, *b]
print(combined) # [1, 2, 3, 4, 5, 6, 7, 8, 9]
This kind of inline construction is not possible with extend() alone. It is one of the more underused features among intermediate Python developers, and in many cases it produces the clearest code of any of the approaches covered here.
List Comprehensions
A list comprehension can concatenate lists by iterating over a sequence of lists and yielding each element. The idiomatic form looks like this:
lists = [[1, 2], [3, 4], [5, 6]]
combined = [x for sublist in lists for x in sublist]
print(combined) # [1, 2, 3, 4, 5, 6]
This is sometimes called "flattening" when applied to a list of lists. The comprehension iterates over each sublist, then over each element within that sublist. It is equivalent to a nested for loop:
combined = []
for sublist in lists:
    for x in sublist:
        combined.append(x)
The comprehension version is generally preferred for its conciseness. It also has the advantage that the inner expression can be filtered or transformed during the merge:
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
evens_only = [x for lst in [a, b] for x in lst if x % 2 == 0]
print(evens_only) # [2, 4, 6, 8]
No other concatenation method allows this kind of simultaneous merge-and-filter in a single readable line. When the goal is not just joining but transforming during the join, a list comprehension is often the right tool.
List comprehensions create a new list. They do not modify any of the source lists. The double for loop syntax (for sublist in lists for x in sublist) can be confusing at first; read it left to right as nested loops, outermost to innermost.
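Where the earlier example filtered during the merge, the same pattern can also transform; a small sketch that squares each element while joining:

```python
a = [1, 2, 3]
b = [4, 5, 6]

# Merge and transform in one expression: each element is squared
# as it is pulled from its source list.
squares = [x * x for lst in (a, b) for x in lst]
print(squares)  # [1, 4, 9, 16, 25, 36]
```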
itertools.chain()
The itertools module, part of Python's standard library, provides chain() as a lazy, memory-efficient alternative to any of the copy-based methods above. Rather than building a new list immediately, chain() returns an iterator that yields elements from each iterable one after another, without materializing a combined list in memory.
import itertools
a = [1, 2, 3]
b = [4, 5, 6]
result = itertools.chain(a, b)
print(list(result)) # [1, 2, 3, 4, 5, 6]
The key distinction is that itertools.chain(a, b) does not copy anything. It stores references to a and b and produces elements on demand when iterated. If you only need to iterate over the combined sequence once — say, to feed it into sum(), a for loop, or max() — there is no reason to build a full list at all:
import itertools
a = [1, 2, 3]
b = [4, 5, 6]
total = sum(itertools.chain(a, b))
print(total) # 21
The Python documentation describes chain() as producing "a chain object whose __next__() method returns elements from the first iterable until it is exhausted, then elements from the next iterable, until all of the iterables are exhausted." This lazy evaluation is the defining property.
For cases where you have a list of lists (rather than individual named variables), itertools.chain.from_iterable() is the appropriate variant:
import itertools
lists = [[1, 2], [3, 4], [5, 6]]
result = list(itertools.chain.from_iterable(lists))
print(result) # [1, 2, 3, 4, 5, 6]
chain.from_iterable() takes a single iterable whose elements are themselves iterables, and flattens them lazily. It is equivalent to chain(*lists) but avoids unpacking the entire outer list into function arguments, which matters when the outer list is very long.
Use itertools.chain() when you are concatenating large lists and only need to iterate over the result once. Use list(itertools.chain(...)) only when you actually need a concrete list — for indexing, sorting, or multiple iterations. Materializing unnecessarily is the one performance mistake people make with chain().
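One way to see why chain.from_iterable() matters for lazy sources is to feed it a generator of sublists; the list_producer function below is illustrative, not a standard API. Unpacking with chain(*...) would have to exhaust the generator up front, while from_iterable consumes it on demand:

```python
import itertools

def list_producer():
    """Hypothetical generator that yields sublists one at a time."""
    for start in range(0, 9, 3):
        yield [start, start + 1, start + 2]

# from_iterable pulls sublists from the generator only as iteration proceeds.
flat = list(itertools.chain.from_iterable(list_producer()))
print(flat)  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```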
append() in a Loop: What Not to Do
It is worth addressing a pattern that appears frequently in early Python code: using append() inside a loop to merge one list into another.
# Common but inefficient pattern
a = [1, 2, 3]
b = [4, 5, 6]
for item in b:
    a.append(item)
print(a) # [1, 2, 3, 4, 5, 6]
This produces the correct result but is strictly worse than extend() for a specific reason: each call to append() may trigger a reallocation of the list's internal buffer. CPython's list over-allocates with a growth factor of roughly 1.125 (the exact formula in listobject.c has varied across versions, but is on the order of new_allocated = newsize + (newsize >> 3) + a small constant), which means the list can resize many times as elements are added one by one. extend(), by contrast, calls the internal list_extend function, which computes the required size once, resizes once, and then copies all items in a tight C loop.
The Python documentation explicitly recommends extend() over repeated append() calls when adding multiple elements. Alex Martelli, longtime Python contributor and co-author of the Python Cookbook, noted in a Stack Overflow answer that "a.extend(b) does the exact same thing as a += b and is marginally preferred for clarity." The difference in real benchmarks between append-in-a-loop and extend() grows with list size and becomes significant at tens of thousands of elements.
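A rough benchmark sketch of that difference using the standard timeit module; absolute numbers depend entirely on the machine and Python version, so no expected timings are shown:

```python
import timeit

SRC = list(range(10_000))

def with_append():
    out = []
    for item in SRC:       # one Python-level call per element
        out.append(item)
    return out

def with_extend():
    out = []
    out.extend(SRC)        # one resize, one tight C-level copy
    return out

# Illustrative only: timings vary by machine, but extend() should
# consistently come out ahead on lists of this size.
t_append = timeit.timeit(with_append, number=200)
t_extend = timeit.timeit(with_extend, number=200)
print(f"append loop: {t_append:.4f}s, extend: {t_extend:.4f}s")
```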
Do not confuse append() with extend(). Calling a.append(b) where b is a list will add the entire list b as a single nested element inside a, giving you [1, 2, 3, [4, 5, 6]] rather than [1, 2, 3, 4, 5, 6]. This is one of the most common bugs in beginner Python code.
a = [1, 2, 3]
b = [4, 5, 6]
a.append(b) # Wrong for concatenation
print(a) # [1, 2, 3, [4, 5, 6]] -- b is nested, not merged
a = [1, 2, 3]
a.extend(b) # Correct
print(a) # [1, 2, 3, 4, 5, 6]
Edge Cases and Common Mistakes
Concatenating an Empty List
All of the methods described above handle empty lists gracefully. An empty list is a valid iterable with zero elements, so concatenating it with any other list simply returns (or mutates to) the non-empty content.
a = [1, 2, 3]
b = []
print(a + b) # [1, 2, 3]
a.extend(b)
print(a) # [1, 2, 3]
Shallow vs. Deep Copies
Every method in this article produces a shallow copy when it builds a new list. This means the new list contains references to the same objects as the originals, not copies of those objects. If your lists contain mutable elements like other lists or dictionaries, mutating an element in the original will be visible in the concatenated result and vice versa.
inner = [1, 2, 3]
a = [inner, 4]
b = [5, 6]
c = a + b
inner.append(99)
print(c) # [[1, 2, 3, 99], 4, 5, 6] -- inner was modified through c too
If you need independent copies of the nested objects, you must use copy.deepcopy() from the standard library after concatenation. The Python documentation for the copy module states that "a deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original." This is the only safe approach when nested mutability is a concern.
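A sketch of the deepcopy approach applied to the earlier example:

```python
import copy

inner = [1, 2, 3]
a = [inner, 4]
b = [5, 6]

# deepcopy after concatenation recursively copies nested objects,
# so later mutation of `inner` no longer leaks into the result.
c = copy.deepcopy(a + b)
inner.append(99)
print(c)  # [[1, 2, 3], 4, 5, 6] -- unaffected by the change to inner
```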
Concatenating Nested Lists (Flattening Only One Level)
None of the methods above recursively flatten nested lists. They all flatten exactly one level. If you have a list of lists of lists, you get a list of lists as the result:
import itertools
data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
flat_one_level = list(itertools.chain.from_iterable(data))
print(flat_one_level) # [[1, 2], [3, 4], [5, 6], [7, 8]]
# Still nested at the inner level
For recursive flattening, you need a custom recursive function or a library like more-itertools, which provides more_itertools.collapse() for arbitrarily nested sequences.
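A minimal recursive sketch of such a function; the name flatten is illustrative, not a standard library API, and this version only recurses into lists, not other iterables:

```python
def flatten(items):
    """Recursively flatten arbitrarily nested lists into one flat list."""
    flat = []
    for item in items:
        if isinstance(item, list):
            flat.extend(flatten(item))  # recurse into nested lists
        else:
            flat.append(item)           # leaf value: keep as-is
    return flat

data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
print(flatten(data))  # [1, 2, 3, 4, 5, 6, 7, 8]
```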
Using sum() to Concatenate
A pattern occasionally seen in Python code uses sum() with an empty list as the initial value to concatenate a list of lists:
lists = [[1, 2], [3, 4], [5, 6]]
result = sum(lists, [])
print(result) # [1, 2, 3, 4, 5, 6]
This works because sum() applies the + operator repeatedly, starting with the initial value. The result is equivalent to [] + [1, 2] + [3, 4] + [5, 6]. The Python documentation explicitly warns against this use: "For some use cases, there are good alternatives to sum(). The preferred, fast way to concatenate a sequence of strings is by calling ''.join(sequence). To add floating point values with extended precision, see math.fsum(). To concatenate a series of iterables, consider using itertools.chain()." In other words, using sum() for list concatenation is technically valid but creates O(n^2) copies and is considered poor style. Avoid it in production code.
Type Compatibility
The + operator requires both operands to be lists. Every other method discussed — extend(), +=, * unpacking, list comprehensions, and itertools.chain() — accepts any iterable on the right-hand side. This makes them universally preferable when working with heterogeneous data sources.
import itertools
a_list = [1, 2, 3]
a_tuple = (4, 5, 6)
a_range = range(7, 10)
# This works with extend, *, chain, comprehension:
result = [*a_list, *a_tuple, *a_range]
print(result) # [1, 2, 3, 4, 5, 6, 7, 8, 9]
# This does NOT work:
# a_list + a_tuple -- TypeError
Key Takeaways
- Use + for simple, readable joins of two or three small lists where you need a new list and the originals unchanged. Avoid it in loops or with large lists due to repeated allocation costs.
- Use extend() or += when you want to modify a list in place and performance matters. This is the fastest method for adding many elements to an existing list. Be aware that it mutates the original, which affects any other references pointing to it.
- Use * unpacking inside a list literal for clean, flexible joins across multiple iterables of different types. This is the most versatile readable approach, available in Python 3.5 and later.
- Use a list comprehension when you need to transform or filter during the merge. No other method combines joining and conditional logic in a single expression with the same clarity.
- Use itertools.chain() when concatenating large lists and you only need one iteration over the result. It is the most memory-efficient approach because it never copies data unless you explicitly call list() on it.
- Never use append() in a loop as a substitute for extend(). The distinction between append() (adds a single element) and extend() (adds all elements of an iterable) is one of the most common sources of bugs for new Python programmers.
- Remember that all concatenation in Python is shallow. If your lists contain mutable nested objects, the references are shared between the source lists and the result. Use copy.deepcopy() when you need full independence.
List concatenation is one of those operations that seems trivial until it causes a subtle bug or a performance problem at scale. Knowing which method to use — and why — is the difference between Python code that works and Python code that works well. The language's flexibility in offering so many approaches is a feature, not a trap, once you understand the tradeoffs each one makes.
Sources
- Python 3 Documentation — Data Structures: docs.python.org/3/tutorial/datastructures.html
- Python 3 Documentation — itertools.chain: docs.python.org/3/library/itertools.html#itertools.chain
- Python 3 Documentation — copy module: docs.python.org/3/library/copy.html
- PEP 448 — Additional Unpacking Generalizations: peps.python.org/pep-0448/
- Python 3 Documentation — Built-in Functions, sum(): docs.python.org/3/library/functions.html#sum
- CPython source — listobject.c, list_extend implementation: github.com/python/cpython/blob/main/Objects/listobject.c