Performance-wise: ...it's a slice, so they do not necessarily move the data. I don't know what they do, though, and I wouldn't assume.
The way you're doing it "loses" 2 machine words of memory (the size of a string value). If you popped from the end of the slice instead, those 2 words would be reused the next time you append().
A Go string is a pointer + length on the inside, AFAIU. Hence 2 words.
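For what it's worth, the two pop patterns being compared look roughly like this (the []string queue q is made up for illustration):

    package main

    import "fmt"

    func main() {
        q := []string{"a", "b", "c", "d"}

        // Pop from the front: the slice header advances, but the backing array
        // still holds the old element 0 (a string header: pointer + length,
        // i.e. 2 machine words) until append() reallocates.
        front := q[0]
        q = q[1:]

        // Pop from the back: the vacated slot stays within cap(q) and is simply
        // reused by the next append().
        back := q[len(q)-1]
        q = q[:len(q)-1]

        fmt.Println(front, back, q, len(q), cap(q)) // a d [b c] 2 3
    }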
This memory is not completely lost for the lifetime of the process. It is freed/reclaimed every time the slice gets reallocated by append().
So this leaks 5GB or what? I am not sure how this works in Go.
So it is only "leaked" until the next append() causes a reallocation. "Normally", though, append() does not cause a reallocation. So you don't know for sure when (if ever) the memory is returned to the process.
You keep truncating the buffer on the left... So you might be causing reallocation on every loop iteration? I'm not too sure TBH.
The language standard does not (precisely) specify the reallocation policy. AFAIU, in practice reallocation doubles the capacity. But I am not too sure what the base number is: the original capacity or the original length. So I cannot confidently reason about how many reallocations your code would cause.
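The growth policy can be probed empirically, with the caveat that it is an implementation detail and has changed between Go versions; a quick sketch:

    package main

    import "fmt"

    func main() {
        var s []int
        prev := cap(s)
        for i := 0; i < 10000; i++ {
            s = append(s, i)
            if cap(s) != prev { // a reallocation happened on this append
                fmt.Printf("len=%d: cap grew %d -> %d\n", len(s), prev, cap(s))
                prev = cap(s)
            }
        }
    }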
The way you're doing it in your last example is the naive way people implement a FIFO queue (in both Go and Python). It works but is suboptimal because the memory for the popped elements is not reclaimed until a reallocation.
A better way to do FIFO is a ring-buffer-like structure.
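A minimal sketch of such a ring-buffer queue (the queue type and its methods are invented here, not a standard API):

    package main

    import "fmt"

    type queue struct {
        buf        []string
        head, size int
    }

    func (q *queue) push(s string) {
        if q.size == len(q.buf) { // full (or still nil): grow and re-pack from head
            nb := make([]string, 2*len(q.buf)+1)
            for i := 0; i < q.size; i++ {
                nb[i] = q.buf[(q.head+i)%len(q.buf)]
            }
            q.buf, q.head = nb, 0
        }
        q.buf[(q.head+q.size)%len(q.buf)] = s
        q.size++
    }

    func (q *queue) pop() string { // assumes q.size > 0
        s := q.buf[q.head]
        q.buf[q.head] = "" // drop the reference so the popped string can be collected
        q.head = (q.head + 1) % len(q.buf)
        q.size--
        return s
    }

    func main() {
        var q queue
        q.push("a")
        q.push("b")
        q.push("c")
        fmt.Println(q.pop(), q.pop(), q.pop()) // a b c
    }

Unlike the slice-as-queue version, popped slots are overwritten by later pushes, so memory stays proportional to the number of live elements.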
Yes, almost. The difference really isn't the memory retention, but when reallocation happens and what the footprint requirement is.
I.e., if you pop from the back, you retain just as much memory as when you pop from the front. The difference is that if you pop from the front, the buffer will trim to size upon the next push, but if you pop from the back, you will retain the historically biggest array.
But we can't really tell how that works, because it is a slice, an abstraction. So it may well be a ring buffer underneath, why not?
Of course, the experiment shows that the implementation isn't a ring buffer:
https://tio.run/##bY7NCsIwEITv@xTLnhIoUvFW6Dt4Lz3E/qXUbENNQSM@e4ypCIqngfm@gRnmEKxqJjV0aNTIAKOx8@IEUm8coQToV24SExLvgIgGizIWUyeq@uKWkYcM8wwPcoMlKms7boXJkNpZUwx1aihhTttqX9R/5OvNv60qryMi7z1tq4/H0dNaf3tudQSvIn7eHeMjd2Zh5G/DEh4hPAE
Append behaviour was surprising, though, i.e. that appending to n modified m.
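For reference, a self-contained snippet along the lines of that experiment (the concrete values are invented):

    package main

    import "fmt"

    func main() {
        m := []int{1, 2, 3, 4}
        n := m[:2]        // n shares m's backing array; len(n)=2, cap(n)=4

        n = append(n, 99) // fits within cap(n), so it writes into the shared array
        fmt.Println(m)    // [1 2 99 4]: m was modified through n
        fmt.Println(n)    // [1 2 99]
    }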
I think it's 100% by the spec and not surprising at all.
In Go, re-slicing works against the original buffer. This design choice is well understood and documented, but it is not the only one possible. In Python, re-slicing makes a (shallow) copy of the relevant chunk of the buffer, so unlike in Go, the original and the new slice are reasonably well decoupled.
I am guessing... if this Go behavior is surprising to you, then maybe some of my previous comments weren't quite clear, or even didn't make much sense?
Well, it is surprising to those familiar with Java byte buffers. Slicing a buffer there creates a slice of the underlying buffer, but does not share its limit: you can't go beyond the last byte available in the buffer at the time of slicing, even if the capacity allows for it.
So to speak, cap(n)==len(n) immediately after slicing.
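FWIW, Go's full slice expression gives roughly that Java-like decoupling by capping the capacity at slicing time; a small sketch:

    package main

    import "fmt"

    func main() {
        m := []int{1, 2, 3, 4}
        n := m[:2:2]      // full slice expression: len(n) == cap(n) == 2

        n = append(n, 99) // no spare capacity, so append allocates a fresh array
        fmt.Println(m)    // [1 2 3 4]: m is untouched this time
        fmt.Println(n)    // [1 2 99]
    }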
Go's ways are very similar to C's ways, so old C people find Go easy. In particular, Go slicing resembles C pointer arithmetic when it's used, say, to move a window over a larger array.
In particular, a Go slice is definitely not a ring buffer.
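The window-over-an-array use of slicing mentioned above looks something like this (buffer contents and window width are arbitrary):

    package main

    import "fmt"

    func main() {
        data := []byte("abcdefgh")
        const w = 3 // window width
        for i := 0; i+w <= len(data); i++ {
            window := data[i : i+w] // no copy: just a new (pointer, len, cap) view into data
            fmt.Printf("%s ", window)
        }
        fmt.Println() // prints: abc bcd cde def efg fgh
    }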