Happy anniversary! As we near completion of this book, it is also the 50th anniversary of the first papers on automatic dynamic memory management, or garbage collection, written by McCarthy and Collins in 1960. Garbage collection was born in the Lisp programming language. By a curious coincidence, we started writing on the tenth anniversary of the first International Symposium on Memory Management, held in October 1998, almost exactly 40 years after the implementation of Lisp started in 1958. McCarthy recollects that the first online demonstration was to an MIT Industrial Liaison Symposium. It was important to make a good impression but, unfortunately, mid-way through the demonstration the IBM 704¹ exhausted (all of!) its 32k words of memory - McCarthy's team had omitted to refresh the Lisp core image from a previous rehearsal - and its Flexowriter printed, at ten characters per second,

THE GARBAGE COLLECTOR HAS BEEN CALLED. SOME INTERESTING STATISTICS ARE AS FOLLOWS:

and so on at great length, taking all the time remaining for the demonstration. McCarthy and the audience collapsed in laughter. Fifty years on, garbage collection is no joke but an essential component of modern programming language implementations. Indeed, Visual Basic (introduced in 1991) is probably the only widely used language developed since 1990 not to adopt automatic memory management, but even its modern incarnation, VB.NET (2002), relies on the garbage collector in Microsoft's Common Language Runtime.

The advantages that garbage-collected languages offer to software development are legion. Garbage collection eliminates whole classes of bugs, such as attempting to follow dangling pointers that still refer to memory that has been reclaimed or, worse, reused in another context. It makes it impossible to free memory that has already been freed. It reduces the chances of programs leaking memory, although it cannot cure all errors of this kind. It greatly simplifies the construction and use of concurrent data structures [Herlihy and Shavit, 2008]. Above all, the abstraction offered by garbage collection provides for better software engineering practice. It simplifies user interfaces and leads to code that is easier to understand and to maintain, and hence more reliable. By removing memory management worries from interfaces, it leads to code that is easier to reuse.

The memory management field has developed at an ever increasing rate in recent years, in terms of both software and hardware. In 1996, a typical Intel Pentium processor had a clock speed of 120 MHz, although high-end workstations based on Digital's Alpha chips could run as fast as 266 MHz! Today's top-end processors run at over 3 GHz and multicore chips are ubiquitous. The size of main memory deployed has similarly increased nearly 1000-fold, from a few megabytes then to the four gigabytes common in desktop machines today. Nevertheless, the advances made in the performance of DRAM memory continue to lag well behind those of processors.
At that time, we wrote that we did not argue that "garbage collection is a panacea for all memory management problems", and in particular pointed out that "the problem of garbage collection for hard real-time programming [where deadlines must be met without fail] has yet to be solved" [Jones, 1996]. Yet today, hard real-time collectors have moved out of the research laboratory and into commercially deployed systems. Nevertheless, although many problems have been solved by modern garbage collector implementations, new hardware, new environments and new applications continue to throw up new research challenges for memory management.

The audience

In this book, we have tried to bring together the wealth of experience gathered
by automatic memory management researchers and developers over the past fifty
years. The literature is huge - our online bibliography contains 2,500 entries at the time of writing. We discuss and compare the most important approaches and state-of-the-art techniques in a single, accessible framework. We have taken care to present algorithms and concepts using a consistent style and terminology. These are described in detail, often with pseudocode and illustrations. Where it is critical to performance, we pay attention to low-level details, such as the choice of primitive operations for synchronisation or how hardware components such as caches influence algorithm design.

In particular, we address the new challenges presented to garbage collection by advances in hardware and software over the last decade or so. The gap in performance between processors and memory has by and large continued to widen. Processor clock speeds have increased, more and more cores are being placed on each die and configurations with multiple processor modules are common. This book focuses strongly on the consequences of these changes for designers and implementers of high performance garbage collectors. Their algorithms must take locality into account since cache performance is critical. Increasing numbers of application programs are multithreaded and run on multicore processors. Memory managers must be designed to avoid becoming a sequential bottleneck. On the other hand, the garbage collector itself should be designed to take advantage of the parallelism provided by new hardware. In Jones [1996], we did not consider at all how we might run multiple collector threads in parallel. We devoted but a single chapter to incremental and concurrent collection, which
seemed exotic then.

We are sensitive throughout this book to the opportunities and limitations provided by modern hardware. We address locality issues throughout. From the outset, we assume that application programs may be multithreaded. Although we cover many of the simpler and more traditional algorithms, we also devote nearly half of the book to discussing parallel, incremental, concurrent and real-time garbage collection.

We hope that this survey will help postgraduate students, researchers and developers who are interested in the implementation of programming languages. The book should also be useful to undergraduate students taking advanced courses in programming languages, compiler construction, software engineering or operating systems. Furthermore, we hope that it will give professional programmers better insight into the issues that the garbage collector faces and into how different collectors work. Armed with this knowledge, they will be better able to select and configure the garbage collectors that many languages offer. The almost universal adoption of garbage collection by modern programming languages makes a thorough understanding of this topic essential for any programmer.

Structure of the book

Chapter 1 starts by considering why automatic storage reclamation is desirable, and briefly introduces the ways in which different garbage collection strategies can be compared. It ends with a description of the abstractions and pseudocode notation used throughout the rest of the book.

The next four chapters discuss the classical garbage collection building blocks in detail. We look at mark-sweep, mark-compact and copying garbage collection, followed by reference counting. These strategies are covered in depth, with particular focus on their implementation on modern hardware. Readers looking for a gentler introduction might also consult our earlier book, Garbage Collection: Algorithms for Automatic Dynamic Memory Management, Richard Jones and Rafael Lins, Wiley, 1996. Chapter 6 then compares the strategies and algorithms covered in Chapters 2 to 5, assessing their strengths, weaknesses and applicability to different contexts.

How storage is reclaimed depends on how it is allocated. Chapter 7 considers different techniques for allocating memory and examines the extent to which automatic garbage collection leads to allocator policies that are different to those of explicit malloc/free memory management.

The first seven chapters make the implicit assumption that all objects in the heap are managed in the same way. However, there are many reasons why that would be a poor design. Chapters 8 to 10 consider why we might want to partition the heap into different spaces, and how we might manage those spaces. We look at generational garbage collection, one of the most successful strategies for managing objects, how to handle large objects and many other partitioned schemes.

The interface with the rest of the run-time system is one of the trickiest aspects of building a collector.² We devote Chapter 11 to the run-time interface, including finding pointers, safe points at which to collect, and read and write barriers, and Chapter 12 to language-specific concerns such as finalisation and weak references.

Next we turn our attention to concurrency. We set the scene in Chapter 13 by examining what modern hardware presents to the garbage collection implementer, and looking at algorithms for synchronisation, progress, termination and consensus. In Chapter 14 we see how we can execute multiple collector threads in parallel while all the application threads are halted. In the next four chapters we consider a wide range of concurrent collectors, in which we relax this 'stop-the-world' requirement in order to allow collection to take place with only the briefest, if any, interruptions to the user program. Finally, Chapter 19 takes this to its most challenging extreme, garbage collection for hard real-time systems.

At the end of each chapter, we offer a summary of issues to consider. These are intended to provoke the reader into asking what requirements their system has and how they can be met. What questions need to be answered about the behaviour of the client program, their operating system or the underlying hardware? These summaries are not intended as a substitute for reading the chapter. Above all, they are not intended as canned solutions, but we hope that they will provide a focus for further analysis.

Finally, what is missing from the book? We have only considered automatic techniques for memory management embedded in the run-time system. Thus, even when a language specification mandates garbage collection, we have not discussed in much depth other mechanisms for memory management that it may also support. The most obvious example is the use of 'regions' [Tofte and Talpin, 1994], most prominently used in the Real-Time Specification for Java. We touch only briefly on questions of region inference or stack allocation, and hardly at all on other compile-time analyses intended to replace, or at least assist, garbage collection. Neither do we address how best to use techniques such as reference counting in the client program, although this is popular in languages like C++. Finally, the last decade has seen little new research on distributed garbage collection. In many ways this is a shame, since we expect lessons learnt in that field to be useful to those developing collectors for the next generation of machines with heterogeneous collections of highly non-uniform memory architectures. Nevertheless, we do not discuss distributed garbage collection here.

Online resources

The web page accompanying the book can be found at

It includes a number of resources, including our comprehensive bibliography. The bibliography at the end of this book contains over 400 references. However, our comprehensive online database contains over 2,500 garbage collection related publications. This database can be searched online or downloaded as BibTeX, PostScript or PDF. As well as details of articles, papers, books, theses and so on, the bibliography also contains abstracts for some entries and URLs or DOIs for most of the electronically available ones.

We continually strive to keep this bibliography up to date as a service to the community. Richard would be very grateful to receive further entries (or corrections).


Acknowledgements

We thank our many colleagues for their support for this new book. It is certain that without their encouragement (and pressure), this work would not have got off the ground. In particular, we thank Steve Blackburn, Hans Boehm, David Bacon, Cliff Click, David Detlefs, Daniel Frampton, Robin Garner, Barry Hayes, Laurence Hellyer, Maurice Herlihy, Martin Hirzel, Tomáš Kalibera, Doug Lea, Simon Marlow, Alan Mycroft, Cosmin Oancea, Erez Petrank, Fil Pizlo, Tony Printezis, John Reppy, David Siegwart, Gil Tene and Mario Wolczko, all of whom have answered our many questions or given us excellent feedback on early drafts. We also pay tribute to the many computer scientists who have worked on automatic memory management since 1958: without them there would be nothing to write about.

We are very grateful to Randi Cohen, our long-suffering editor at Taylor and Francis, for her support and patience. She has always been quick to offer help and slow to chide us for our tardiness. We also thank Elizabeth Haylett and the Society of Authors for their service, which we recommend highly to other authors.

Richard Jones, Antony Hosking, Eliot Moss

Above all, I am grateful to Robbie. How she has borne the stress of another book, whose writing has yet again stretched well over the planned two years, I will never know. I owe you everything! I also doubt whether this book would have seen the light of day without the inexhaustible enthusiasm of my co-authors. Tony, Eliot, it has been a pleasure and an honour writing with knowledgeable and diligent colleagues. I am also grateful to the Engineering and Physical Sciences Research Council for the support of my work through grants EP/D078342/1, EP/F06523X/1 and EP/H026975/1, and to the Royal Academy of Engineering's Distinguished Visiting Fellowship.

Richard Jones

In the summer of 2002 Richard and I hatched plans to write a follow-up to his 1996 book. There had been lots of new work on GC in those six years, and it seemed there was demand for an update. Little did we know then that it would be another nine years before the current volume would appear. Richard, your patience is much appreciated. As conception turned into concrete planning, Eliot's offer to pitch in was gratefully accepted; without his sharing the load we would still be labouring anxiously. Much of the early planning and writing was carried out while I was on sabbatical with Richard in 2008, with funding from Britain's Engineering and Physical Sciences Research Council through grant EP/F06523X/1 and the United States' National Science Foundation whose support we gratefully acknowledge. Mandi, without your encouragement and willingness to live out our own Canterbury tale this project would not have been possible.

Antony Hosking

Thank you to my co-authors for inviting me into their project, already largely conceived and being proposed for publication. You were a pleasure to work with (as always), and tolerant of my sometimes idiosyncratic writing style. A formal thank you is also due to the Royal Academy of Engineering, who supported my visit to the UK in November 2009 through a Distinguished Visiting Fellowship, which greatly advanced the book. Other funding agencies supported the work indirectly by helping us attend conferences and meetings at which we could gain some face-to-face working time for the book as well. And most of all, many thanks to my "girls," who endured my absences, physical and otherwise. Your support was essential and is deeply appreciated!

Eliot Moss

Material in this book is partly based upon work supported by the National Science Foundation under Grant No. 0811691. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

  1. The IBM 704's legacy to the Lisp world includes the terms car and cdr. The 704's 36-bit words included two 15-bit parts, the address and decrement parts. Lisp's list, or cons, cells stored pointers in these two parts. The head of the list, the car, could be obtained using the 704's 'Contents of the Address part of Register' instruction, and the tail, the cdr, with its 'Contents of the Decrement part of Register' instruction.
  2. And one that we passed on in Jones [1996]!