Green Files

What and where are the "stack" and "heap"?

Day: 05/31/2022 - Time: 15:51:50

A stack, in this context, is an optimized way of organizing data in memory: data is allocated in sequence and abandoned (yes, normally there is no real deallocation) in the reverse order in which it entered.

A heap is the most flexible memory organization that allows the use of any available logical area.

Which stack are we talking about?

There are some very widespread stack concepts in computing, to name a few:

There is the execution stack of some architectures, where instructions and data are pushed and, after being executed, popped.

There is the function call stack, which is often confused with memory management: functions are called and stacked, and when their execution ends they come off the stack.

There is the generic data structure that stacks miscellaneous data.

Abstract concept

The two concepts in the question are abstract. Physically there is no specific memory area for the stack (much less is its area physically stacked), and there is no area reserved for the heap; on the contrary, the heap is usually quite fragmented. We use these concepts to better understand how memory works and what its implications are, especially in the case of the stack.

Most modern and popular computer architectures do not offer many facilities for manipulating this memory stack (usually there is only the stack pointer register), and the same goes for the heap, although in that case the instructions that help manipulate virtual memory do, in a way, help organize the heap; but that applies to all memory, not just the heap.

Getting a little more concrete

The operating system, on the other hand, knows these concepts well, and it is essential that it have some way, even if limited, of manipulating application memory, especially in modern, general-purpose systems. Modern systems do complex management through what is conventionally called virtual memory, which is also an abstract and often misunderstood concept.

Where we move directly

In Assembly or C it is very common to have contact with this memory management. In Assembly it is common to manipulate the stack almost directly, and in both languages at least the allocation and deallocation of the heap must be done manually through the operating system API. In C the stack is managed by the compiler, unless some unusual operation is required.
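
A minimal C sketch of that contrast (illustrative code, not taken from the original answer): a local variable lives on the stack and is abandoned automatically, while heap memory is requested and released explicitly through the C library, which in turn talks to the operating system.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int on_stack = 42;                       /* automatic: lives in main's stack frame */

        int *on_heap = malloc(sizeof *on_heap);  /* heap: the C library asks the OS for memory as needed */
        if (on_heap == NULL)
            return 1;
        *on_heap = 42;

        printf("%d %d\n", on_stack, *on_heap);

        free(on_heap);   /* heap memory must be released manually in C */
        return 0;        /* on_stack is simply abandoned when main returns */
    }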

Nothing prevents you from using a library that abstracts this manipulation, but that is only common in higher-level languages. In fact, it is very common for other languages to use the OS API internally for the heavy memory management, while "retail" memory access is done by their own manager, usually called a garbage collector, either through reference counting for an object on the heap (some consider that this is not a garbage collection technique) or by checking later whether there are still references to the object on the heap. Even when using a more abstract library, the concepts remain.

The higher the level, the less they need to manage all of this, but understanding the general workings is important in all languages.

Languages that do not need performance can leave everything on the heap and "facilitate" understanding and access.

Stack

Allocation

Under normal conditions, the stack is allocated at the beginning of the application's execution, more precisely at the beginning of the thread, even if the application only has the main thread.

The stack is a contiguous portion of memory reserved for stacking the necessary data during the execution of code blocks.

Each allocation takes a part of the stack, always used in sequence, determined by a marker: a pointer "moves" to indicate that a new part, in sequence, of this reserved portion is now committed.

When something reserved in a segment is no longer needed, this marker moves in the opposite direction of the data sequence, indicating that that data may be discarded (overwritten by new data).

There is no real allocation of each memory segment on the stack; there is only the movement of this pointer, indicating that the area will be used by some data.
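
A toy sketch of that idea in C, purely illustrative (a real stack is managed by the compiler and the CPU's stack pointer register, not by code like this): "allocating" is only moving a marker forward over a reserved block, and "deallocating" is moving it back.

    #include <stddef.h>

    static unsigned char reserved[1024];  /* the contiguous portion reserved up front */
    static size_t top = 0;                /* the "stack pointer" (here just an index) */

    void *fake_push(size_t size) {        /* "allocate": move the marker forward */
        void *p = &reserved[top];         /* no bounds check: running past the end would be our "overflow" */
        top += size;
        return p;
    }

    void fake_pop(size_t size) {          /* "deallocate": move it back; the bytes are merely abandoned */
        top -= size;
    }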

Roughly speaking, we can say that the application has full control over the stack, except when it runs out of space.

There are features to manually change the stack size, but this is uncommon.

Operation

The stack works in a LIFO (Last In, First Out) manner, also known by the Portuguese acronym UEPS.

The scope of a variable usually defines how long it stays allocated on the stack. Data used as function parameters and return values are also allocated on the stack. That is why the function call stack gets confused with the memory stack.

We can say that parameters are the first variables of a function allocated on the stack. Accessing the data in the stack is usually done directly, but there are indirections as well.
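
A trivial C illustration of parameters and locals living in a function's stack frame (hypothetical function, of course):

    int sum_up_to(int n) {            /* n arrives, conceptually, on the stack for this call */
        int total = 0;                /* total lives in this call's stack frame */
        for (int i = 1; i <= n; i++)  /* i exists only within the loop's scope */
            total += i;
        return total;                 /* the result is returned; the whole frame is then abandoned */
    }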

As you can see, each thread has its own stack, and the stack size of each created thread can be defined before its creation. A default value is often used.
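
For example, with POSIX threads the stack size can be chosen before the thread is created. A sketch (error handling omitted; the 1 MiB figure is arbitrary):

    #include <pthread.h>

    void *worker(void *arg) {
        (void)arg;
        return NULL;
    }

    int main(void) {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setstacksize(&attr, 1024 * 1024);  /* request a 1 MiB stack for the new thread */

        pthread_t t;
        pthread_create(&t, &attr, worker, NULL);        /* the new thread gets its own stack */
        pthread_join(t, NULL);

        pthread_attr_destroy(&attr);
        return 0;
    }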

The stack is considered an automatic form of allocation (often confused with static allocation): automatic allocation happens as execution is carried out. Technically, there is another area of memory that is really static, allocated before execution begins. That effectively static area cannot be manipulated and cannot be written to (at least it shouldn't be). The stack itself is static, although its data is not, after all data is placed on it and abandoned according to use; its management is automatic.

Decision on where to allocate

As with the heap, it is not possible to allocate data on the stack before knowing its size (it does not need to be known at compile time, but it does at the moment the allocation executes, and the stack has some extra restrictions). But if the size is undetermined at compile time, or can be determined to be possibly large (a few tens of bytes, perhaps), then the allocation should most likely occur on the heap.

High-level languages predetermine this. Others give the programmer more control, and the stack can even be abused if that is useful and the programmer knows what they are doing.
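
A hedged C illustration of that decision (the names and sizes are made up):

    #include <stdlib.h>

    void example(size_t n) {
        char small[64];          /* size known at compile time and small: the stack is a natural fit */
        small[0] = 'x';

        char *big = malloc(n);   /* size only known at run time, possibly large: goes to the heap */
        if (big == NULL)
            return;
        big[0] = small[0];
        /* ... use small and big ... */
        free(big);
    }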

Stack overflow

The famous stack overflow occurs when you try to allocate something on the stack and there is no space available. In some cases, if the language provides mechanisms that allow it, data can also overflow on top of other data that comes next on the stack. Uncontrolled recursive execution is a classic cause of stack overflow.
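
For illustration only, a classic way to provoke it in C; do not expect a graceful error, the process is normally just killed:

    /* Unbounded recursion: each call adds a new frame until the stack is exhausted. */
    /* The local buffer forces every call to really consume stack space, since an    */
    /* optimizing compiler could otherwise turn simple recursion into a loop.        */
    int overflow(int n) {
        char frame[1024];
        frame[0] = (char)n;
        return frame[0] + overflow(n + 1);
    }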

Another stack

There is also the call stack, which is where the return addresses are stored, that is, the addresses to which execution should return when a function finishes executing.

Heap

Allocation

The heap, unlike the stack, does not impose a model, a memory allocation pattern. This is not very efficient but it is very flexible.

The heap is considered dynamic. In general you allocate or deallocate small chunks of memory, just for the data needed. This allocation can physically occur on any free portion of memory available to your process.

The operating system's virtual memory management, aided by processor instructions, helps to organize this.
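
On Linux and most Unix-like systems, for instance, an allocator ultimately obtains memory from the kernel's virtual memory system with calls such as mmap (a sketch; one page requested and returned):

    #include <sys/mman.h>

    int main(void) {
        /* ask the kernel for an anonymous, private mapping anywhere in the address space */
        void *block = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (block == MAP_FAILED)
            return 1;

        munmap(block, 4096);  /* give the page back to the system */
        return 0;
    }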

In a way we can say that the stack as a whole is the first object allocated on the heap.

Effectively, these real allocations usually occur in fixed-size blocks called pages. This prevents the application from making dozens or hundreds of small allocations that would fragment memory in an extreme way, and it avoids calls to the operating system, which switch context and are usually much slower. In general, every memory allocation system allocates more than it needs and hands it over to the application as it is needed; in some cases it almost simulates a stack for a while, or it reorganizes memory (through a compacting GC).
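
A very simplified sketch of that idea in C (purely illustrative, with no alignment, growth or freeing): grab one larger block up front and hand out small pieces from it, so most "allocations" never reach the operating system.

    #include <stdlib.h>

    #define ARENA_SIZE 4096                      /* one page-sized block requested once */

    static unsigned char *arena;
    static size_t used;

    void *arena_alloc(size_t size) {
        if (arena == NULL)
            arena = malloc(ARENA_SIZE);          /* the single "expensive" request */
        if (arena == NULL || used + size > ARENA_SIZE)
            return NULL;                         /* out of space in this simple sketch */
        void *p = arena + used;                  /* hand out the next free piece */
        used += size;                            /* almost a stack, as described above */
        return p;
    }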

Deallocation

Deallocation of the heap usually happens:

Manually (at the risk of bugs), although this is not available in some languages; through a garbage collector that identifies when a part of the heap is no longer needed; or when the application terminates.
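
The manual case in C, together with the kind of bug it invites (illustrative):

    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);
        if (p == NULL)
            return 1;
        *p = 1;

        free(p);       /* manual deallocation */
        /* *p = 2; */  /* bug: use after free -- the area was given back but the pointer still points there */
        /* free(p); */ /* bug: double free */
        return 0;
    }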

Depends on implementation

There are even languages that have specialized heaps, which can behave a little differently, but let's keep it simple and stick to the common cases.

Abstract concept

Clearly the heap is not a single area of memory, even as an abstract concept; it is a set of small areas of memory.

Physically it is often fragmented throughout memory. These parts are very flexible in size and lifespan.

For security reasons it is good to know that deallocating is an abstract concept as well. It is often possible to access data from an application even after it has finished. The content is only deleted by manual request or when an available area is written again.

Heap cost

Allocating on the heap is "expensive". Many tasks must be performed by the operating system to ensure the proper allocation of an area, especially in concurrent environments, which are very common nowadays; and even when the OS is not needed, there is still a complex algorithm to allocate. Deallocating, or making an area available again, also has its cost; in some cases, for the allocation to be cheaper, the release becomes very expensive (ironically, it can be controlled by several stacks).

There are even ways to avoid calling the operating system for each allocation needed, but still the "cost" of processing this is considered high. Keeping lists (in some cases linked) of allocated areas or pages is not trivial for the processor, at least compared to the pointer movement that is required on the stack.

The heap is accessed through pointers. Even in languages where there is no concept of pointers available to the programmer, this is done internally in a transparent way.

An object of a class type, class1 say, is allocated on the heap. But there is a reference to this object, which is allocated on the stack (in some cases it might not be).

This allocation is necessary because the size of the object may be too large to fit on the stack (or at least take up a considerable part of it), or because it may survive longer than the function that created it.

If it were on the stack, the "only" way to keep it "alive" would be to copy it to the calling function, and so on for every other function where it is needed. Imagine how "expensive" that gets. The way it is organized, only the reference, which is small, needs to be copied, and that can be done using just registers, which is very fast.
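
A sketch of that difference in C (the type and names are made up):

    #include <stdlib.h>

    typedef struct { int data[1000]; } Big;  /* hypothetical large object */

    Big make_big_by_value(void) {    /* the whole object must be copied (or cleverly elided) on return */
        Big b = {0};
        return b;
    }

    Big *make_big_on_heap(void) {    /* only a small pointer is returned; the object outlives the call */
        Big *b = malloc(sizeof *b);
        return b;                    /* the caller becomes responsible for free(b) */
    }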

Conclusion

So the runtime of a programming language communicates with the OS to manage memory. How much of this runtime is exposed to the programmer depends on the purpose of the language. In so-called "managed" languages all of this still happens, both concepts exist and need to be understood, but you do not have to manipulate the heap manually; it becomes as transparent as the stack is in other lower-level languages (except Assembly).

Both are usually allocated in RAM, but nothing prevents them from being allocated elsewhere. Virtual memory can put all or part of the stack or heap into mass storage, for example.

Reference:

https://pt.stackoverflow.com/questions/3797/o-que-s%c3%a3o-e-onde-est%c3%a3o-a-stack-e-heap

GO UP
GO TO INDEX