Preface

This is the second edition of The Garbage Collection Handbook. How is it that, more than sixty years after the introduction of garbage collection in 1958 and exactly thirty years¹ since the first International Workshop on Memory Management (the forerunner of the International Symposium on Memory Management) in 1992, garbage collection in particular and memory management more generally remain such vital topics of research and development?

Garbage collection was born in the Lisp programming language. McCarthy recollects that the first online demonstration was to an MIT Industrial Liaison Symposium. It was important to make a good impression but, unfortunately, midway through the demonstration, the IBM 704² exhausted (all of!) its 32k words of memory — McCarthy's team had omitted to refresh the Lisp core image from a previous rehearsal — and its Flexowriter printed, at ten characters per second,

THE GARBAGE COLLECTOR HAS BEEN CALLED. SOME INTERESTING STATISTICS ARE AS FOLLOWS:

and so on at great length, taking all the time remaining for the demonstration. McCarthy and the audience collapsed in laughter. More than sixty years on, garbage collection is no joke but an essential component of modern programming language implementations. Indeed, Rust (first released in 2012) and Visual Basic (introduced in 1991) are probably the only widely used languages developed since 1990 not to adopt automatic memory management of some kind, yet Rust provides a reference counted smart pointer and VB.NET (2002), a more modern incarnation of Visual Basic, relies on the garbage collector in Microsoft's Common Language Runtime.
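Rust's reference counted smart pointer is `Rc` in its standard library. As a minimal illustrative sketch (ours, not from the languages discussed above), it shows the flavour of automatic reclamation by reference counting: the allocation is freed only when the last owner is dropped.

```rust
use std::rc::Rc;

fn main() {
    // Two handles share ownership of one heap-allocated string.
    let a = Rc::new(String::from("shared"));
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);

    // Dropping one handle merely decrements the count; the string survives.
    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
    assert_eq!(&*a, "shared");
    // The allocation is reclaimed only when the last Rc is dropped,
    // so no dangling pointer can be created through Rc alone.
}
```

Like any pure reference counter, `Rc` cannot reclaim cyclic structures, a limitation taken up in the chapter on reference counting.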

Garbage collection continues to be a vibrant area of research and is far from a solved problem. In Blackburn's words³, “Seismic technology shifts, unprecedented scale, concerns for energy efficiency and security, a diversity of programming languages and above all, ubiquity, have conspired to make memory management more challenging, interesting, and important than ever.” Memory management sits at the crucial interface between the hardware and the programming language. It presents deep problems and challenges, both theoretical and engineering. Garbage collection is probably more important, and affects more people, than ever before. It is in their phones and their browsers; it is in the servers in the cloud on which so much of modern life depends. Despite the growth in memory capacity on every device, from the smallest one in your pocket to the largest systems in enterprise-scale warehouses, memory is not free and efficient access to it is vital. For example, Hunter et al. [2022] recently found (and then recovered) 26% of memory that was wasted due to fragmentation in Google's warehouse-scale computers: a significant cost in both financial and energy terms. The advantages that garbage-collected languages offer to software development are legion. Garbage collection eliminates whole classes of bugs, such as attempting to follow dangling pointers that still refer to memory that has been reclaimed or, worse, reused in another context. It thus helps reduce memory vulnerabilities, by far the most pressing security concern today. It makes it impossible to free memory that has already been freed. It reduces the chances of programs leaking memory, although it cannot cure all errors of this kind. It greatly simplifies the construction and use of concurrent data structures [Herlihy and Shavit, 2008]. Above all, the abstraction offered by garbage collection provides for better software engineering practice. It simplifies user interfaces and leads to code that is easier to understand and to maintain, and hence more reliable. By removing memory management worries from interfaces, it leads to code that is easier to reuse.

The memory management field has developed at an ever-increasing rate in recent years, in terms of both software and hardware. In 1996, a typical Intel Pentium processor had a clock speed of 120 MHz, although high-end workstations based on Digital's Alpha chips could run as fast as 266 MHz! Today's top-end processors run at over 3 GHz; Intel's recently announced Core i9-12900KS can even run at up to 5.5 GHz! Multicore chips are ubiquitous. The size of main memory deployed has similarly increased nearly 1000-fold, from a few megabytes then to multiple gigabytes being common in desktop machines today, and terabytes in servers. Nevertheless, the advances made in the performance of DRAM memory continue to lag well behind those of processors. In his first book, Garbage Collection: Algorithms for Automatic Dynamic Memory Management (Wiley, 1996), Richard did not argue that “garbage collection is a panacea for all memory management problems,” and in particular pointed out that “the problem of garbage collection for hard real-time programming [where deadlines must be met without fail] has yet to be solved.” Yet by the time of the first edition of this book (2012), hard real-time collectors had moved out of the research laboratory and into commercially deployed systems. Although many problems have been solved by modern garbage collector implementations, new hardware, new environments and new applications continue to throw up new research challenges for memory management.

The audience

In this book, we have tried to bring together the wealth of experience gathered by automatic memory management researchers and developers over the past sixty years. The literature is huge: our online bibliography contains over 3,400 entries at the time of writing. We discuss and compare the most important approaches and state-of-the-art techniques in a single, accessible framework. We have taken care to present algorithms and concepts using a consistent style and terminology. These are described in detail, often with pseudocode and illustrations. Where it is critical to performance, we pay attention to low-level details, such as the choice of primitive operations for synchronisation or how hardware components such as caches influence algorithm design.

In particular, we address the new challenges presented to garbage collection by advances in hardware and software over the last couple of decades. The gap in performance between processors and memory has by and large continued to widen. Processor clock speeds have increased, more and more cores are being placed on each die, and configurations with multiple processor modules are common. This book focuses strongly on the consequences of these changes for designers and implementers of high performance garbage collectors. Their algorithms must take locality into account since cache performance is critical. Increasing numbers of application programs are multithreaded and run on multicore processors. Memory managers must be designed to avoid becoming a sequential bottleneck. On the other hand, the garbage collector itself should be designed to take advantage of the parallelism provided by new hardware. Richard's earlier book did not consider at all how to run multiple collector threads in parallel, and devoted but a single chapter to incremental and concurrent collection, which seemed exotic then.

We are sensitive throughout this book to the opportunities and limitations provided by modern hardware. We address locality issues throughout. From the outset, we assume that application programs may be multithreaded. Although we cover many of the simpler and more traditional algorithms, we also devote nearly half of the book to discussing parallel, incremental, concurrent and real-time garbage collection.

We hope that this survey will help postgraduate students, researchers and developers who are interested in the implementation of programming languages. The book should also be useful to undergraduates taking advanced courses in programming languages, compiler construction, software engineering or operating systems. Furthermore, we hope that it will give professional programmers better insight into the issues that the garbage collector faces and how different collectors work, and that, armed with this knowledge, they will be better able to select and configure the garbage collectors that many languages offer. The almost universal adoption of garbage collection by modern programming languages makes a thorough understanding of this topic essential for any programmer.

Structure of the book

Chapter 1 starts by considering why automatic storage reclamation is desirable, and briefly introduces the ways in which different garbage collection strategies can be compared. It ends with a description of the abstractions and pseudocode notation used throughout the rest of the book.

Chapters 2 to 5 discuss the classical garbage collection building blocks in detail. We look at mark-sweep, mark-compact and copying garbage collection, followed by reference counting. These strategies are covered in depth, with particular focus on their implementation on modern hardware. Readers looking for a gentler introduction might also consult Richard's earlier book Garbage Collection: Algorithms for Automatic Dynamic Memory Management. Chapter 6 compares the strategies and algorithms covered in those chapters in depth, assessing their strengths, weaknesses and applicability to different contexts.

How storage is reclaimed depends on how it is allocated. Chapter 7 considers different techniques for allocating memory and examines the extent to which automatic garbage collection leads to allocator policies that are different to those of explicit malloc/free memory management.

The first seven chapters make the implicit assumption that all objects in the heap are managed in the same way. However, there are many reasons why that would be a poor design. Chapters 8 to 10 consider why we might want to partition the heap into different spaces, and how we might manage those spaces. We look at generational garbage collection, one of the most successful strategies for managing objects, at how to handle large objects, and at many other partitioned schemes.

The interface with the rest of the run-time system is one of the trickiest aspects of building a collector.⁴ We devote Chapter 11 to the run-time interface, including finding pointers, safe points at which to collect, and read and write barriers, and Chapter 12 to language-specific concerns such as finalisation and weak references.

Next we turn our attention to concurrency. We set the scene in Chapter 13 by examining what modern hardware presents to the garbage collection implementer, and looking at algorithms for synchronisation, progress, termination and consensus. In Chapter 14, we see how we can execute multiple collector threads in parallel while all the application threads are halted. In Chapters 15 to 18, we consider a wide range of concurrent collectors, in which we relax this ‘stop-the-world’ requirement in order to allow collection to take place with only the briefest, if any, interruptions to the user program. Finally, Chapter 19 takes this to its most challenging extreme, garbage collection for hard real-time systems. Chapters 20 and 21 are completely new to this second edition, addressing energy-aware collection and collection of persistent systems.

At the end of each chapter, we offer a summary of issues to consider. These are intended to provoke the reader into asking what requirements their system has and how they can be met. What questions need to be answered about the behaviour of the client program, their operating system or the underlying hardware? These summaries are not intended as a substitute for reading the chapter. Above all, they are not intended as canned solutions, but we hope that they will provide a focus for further analysis.

Finally, what is missing from the book? We have only considered automatic techniques for memory management embedded in the run-time system. Thus, even when a language specification mandates garbage collection, we have not discussed in much depth other mechanisms for memory management that it may also support. The most obvious example is the use of ‘regions’ [Tofte and Talpin, 1994], most prominently adopted in the Real-Time Specification for Java. We pay only brief attention to questions of region inferencing or stack allocation, and very little at all to other compile-time analyses intended to replace, or at least assist, garbage collection. Neither do we address how best to use techniques such as reference counting in the client program, although this is popular in languages like C++. Finally, the last decade has seen little new research on distributed garbage collection. In many ways, this is a shame since we expect lessons learnt in that field also to be useful to those developing collectors for the next generation of machines with heterogeneous collections of highly non-uniform memory architectures. Nevertheless, we do not discuss distributed garbage collection here.

e-book

The book is available either in print or as an e-book. The e-book has a number of enhancements over the print version. In particular, it is heavily hyperlinked. There are over 37,000 clickable links from every reference to its target (chapter, section, algorithm, figure, table and so on). Clicking on a citation will take the reader to its entry in the bibliography, many of which have a digital object identifier (‘DOI’), in turn a link to the original paper. Each entry in the bibliography also includes a list of pages from which it was cited; these are links back to the referring citations. Similarly, entries in the index contain links to the pages where the entry was mentioned. Finally, technical terms in the text have been linked to their entries in the glossary.

Online resources

The web page accompanying the book can be found at

http://www.gchandbook.org

It includes a number of resources including our comprehensive bibliography. The bibliography at the end of this book contains over 600 references. However, our comprehensive online database contains over 3,400 garbage collection related publications. This database can be searched online, or downloaded as BibTeX, PostScript or PDF. As well as details of the articles, papers, books, theses and so on, the bibliography also contains abstracts for some entries and URLs or DOIs for most of the electronically available ones.

We continually strive to keep this bibliography up to date as a service to the community. Richard (R.E.Jones@kent.ac.uk) would be very grateful to receive further entries (or corrections).

Updates in the second edition

This second edition is testament to the continued progress in garbage collection, both academically and in major deployed systems. We aimed to include significant work within our scope up through the first half of 2022. While all chapters have seen some revision and refinement, the more significant changes include: considerable revision of the discussions of finalisation and weak references, plus managing the changing object layouts needed for dynamic languages, in Chapter 12; collection on GPUs in Chapter 13; expanded discussion of barriers, including load barriers, in Chapter 15; expanded discussion of the Garbage-First (G1) collector in Chapters 16 and 17; discussion of new collectors including Collie, Transactional Sapphire, Platinum, ZGC and Shenandoah, and updated discussion of Pauseless and C4, in Chapter 17; and discussion of the high-performance reference counter, LXR, in Chapter 18. We have added entirely new chapters on energy-aware collection (Chapter 20) and collection of persistent systems based on byte-addressable non-volatile memory (Chapter 21). The glossary and bibliography have expanded substantially, as has the index. While memory management continues to be a dynamic field of work, we believe this new edition captures the most important developments of the last decade, maintaining the utility and relevance of this handbook.

Acknowledgements

We thank our many colleagues for their support for this new book. It is certain that without their encouragement (and pressure), this work would not have got off the ground. In particular, we thank David Bacon, Steve Blackburn, Hans Boehm, Cliff Click, David Detlefs, Daniel Frampton, Robin Garner, Barry Hayes, Laurence Hellyer, Maurice Herlihy, Martin Hirzel, Abhinav Jangda, Tomáš Kalibera, Roman Kennke, Doug Lea, Simon Marlow, Alan Mycroft, Rupesh Nasre, Cosmin Oancea, Erik Österlund, Erez Petrank, Fil Pizlo, Tony Printezis, John Reppy, Thomas Schatzl, Peter Sewell, David Siegwart, Gil Tene, Tomoharu Ugawa and Mario Wolczko, all of whom have answered our many questions or given us excellent feedback on early drafts. We thank others whose keen eyes have spotted errors in the first edition, including Roberto Cometti, Neil Dhar, Raffaello Giulietti, Tsuneyasu Komiya, Atusi Maeda, Hayley Patton and Carl Shapiro. We also pay tribute to the many computer scientists who have worked on automatic memory management since 1958: without them there would be nothing to write about. In particular, we acknowledge Tomoharu Ugawa and Carl Ritson, co-authors with Richard, of the works on Transactional Sapphire, and Albert Yang and Tobias Wrigstad for their deep dive into ZGC on which we draw heavily in Sections 17.4 and 17.5, respectively. We are extremely grateful to Thomas Schatzl, Erik Österlund and Roman Kennke for patiently explaining to us the nitty-gritty details of the Garbage-First (G1) (Sections 10.4 and 16.5), ZGC and Shenandoah (Section 17.5) collectors.

We are very grateful to Randi Cohen (now Slate), our long-suffering editor at Taylor and Francis, for her support and patience. She has always been quick to offer help and slow to chide us for our tardiness. Dare we recall Douglas Adams, “I love deadlines. I like the whooshing sound they make as they fly by”? We also thank Elizabeth Haylett and Kate Pool from the Society of Authors for their advice, which we recommend highly.

Richard Jones, Antony Hosking, Eliot Moss

Above all, I am grateful to Robbie. How she has borne the stress of another edition of this book, whose writing has yet again stretched well over the planned deadline, I will never know. I owe you everything! I also doubt whether this book would have seen the light of day without the inexhaustible enthusiasm of my co-authors. Tony, Eliot, it has been a pleasure and an honour writing with knowledgeable and diligent colleagues. I am also grateful to the Engineering and Physical Sciences Research Council for the support of my work through grants EP/D078342/1, EP/F06523X/1 and EP/H026975/1, and to the Royal Academy of Engineering's Distinguished Visiting Fellowship.

Richard Jones

In the summer of 2002, Richard and I hatched plans to write a follow-up to his 1996 book. There had been lots of new work on GC in those six years, and it seemed there was demand for an update. Little did we know then that it would be another nine years before the current volume would appear. Richard, your patience is much appreciated. As conception turned into concrete planning, Eliot's offer to pitch in was gratefully accepted; without his sharing the load we would still be labouring anxiously. Much of the early planning and writing was carried out while I was on sabbatical with Richard in 2008, with funding from the United Kingdom Engineering and Physical Sciences Research Council through grant EP/F06523X/1 and the United States' National Science Foundation whose support we gratefully acknowledge. Mandi, without your encouragement and willingness to live out our own Canterbury Tale this project would not have been possible.

Antony Hosking

Thank you to my co-authors for inviting me into their project, already largely conceived and being proposed for publication. You were a pleasure to work with (as always) and tolerant of my sometimes idiosyncratic writing style. A formal thank you is also due to the Royal Academy of Engineering, who supported my visit to the UK in November 2009 through a Distinguished Visiting Fellowship, which greatly advanced the book. Other funding agencies supported the work indirectly by helping us attend conferences and meetings at which we could gain some face-to-face working time for the book as well. And most of all, many thanks to my ‘girls,’ who endured my absences, physical and otherwise. Your support was essential and is deeply appreciated!

For the second edition: Thanks are again due to my family, especially for their tolerance of my work on the book during what was supposed to be vacation, and to Robbie Jones for hosting me for three weeks in Canterbury as we kicked off the second edition revisions.

Eliot Moss

Material in this book is partly based upon work supported by the National Science Foundation under Grant No. 0811691. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.


  1. By yet another curious chronological coincidence, we started writing the first edition on the tenth anniversary of the First International Symposium on Memory Management, held in October 1998, itself almost exactly forty years after the implementation of Lisp started in 1958.
  2. The IBM 704's legacy to the Lisp world includes the terms car and cdr. The 704's 36-bit words included two 15-bit parts, the address and decrement parts. Lisp's list or cons cell stored pointers in these two parts. The head of the list, Lisp's car, could be obtained using the 704's ‘Contents of the Address part of Register’ instruction, and the tail, the cdr, with its ‘Contents of the Decrement part of Register’ instruction.
  3. Keynote address, ‘We live in interesting times’, International Symposium on Memory Management, 2022.
  4. And one that Richard passed on in Jones [1996]!