no subject
Most frequently accessed?.. Ah, OK, there are no other accesses, just pushes and pops that presumably remove items.
no subject
Say, 42 was pushed 7 times (not necessarily in sequence). It became the most frequent element. Then it was popped twice. Count/frequency went down. And now it is no longer the most frequent element.
no subject
One nit though: you are popping the least recently pushed of the most frequent elements...
It makes a tiny bit more sense to pop the most recently pushed one (the last element of the same slice). More sense from both a semantic and a performance POV.
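For illustration, a minimal sketch of the two pop flavors on a plain []string (the names and values here are made up, not from the original code):

package main

import "fmt"

func main() {
    s := []string{"a", "b", "c"}

    // Pop from the front (what the original code does): the backing-array
    // slot that held "a" is left behind and will never be reused by append.
    front, s := s[0], s[1:]

    // Pop from the back: len shrinks but the vacated slot stays within
    // cap(s), so the very next append to s reuses it.
    back := s[len(s)-1]
    s = s[:len(s)-1]

    fmt.Println(front, back, s) // a c [b]
}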
no subject
Performance-wise... it's a slice; they do not necessarily move the data. I don't know what they do, but I wouldn't assume.
no subject
The way you're doing it "loses" two machine words of memory (the size of a string header). If you popped from the end of the slice instead, those two words would be reused the next time you append().
A Go string is a pointer plus a length internally, AFAIU. Hence two words.
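A quick way to check that on your platform (a sketch; the exact size is an implementation/architecture detail, typically 16 bytes, i.e. two words, on 64-bit):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    var s string
    // A string header is a pointer to the bytes plus a length.
    fmt.Println(unsafe.Sizeof(s)) // typically 16 on 64-bit platforms
}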
no subject
So this leaks 5 GB or what? I am not sure how this works in Go.
no subject
This memory is not completely lost for the lifetime of the process: it is freed/reclaimed every time the slice gets reallocated by append().
So it is only "leaked" until the next append() causes a reallocation. "Normally", though, append() does not cause a reallocation, so you don't know for sure when (if ever) the memory is returned to the process.
You keep truncating the buffer on the left... so you might be causing a reallocation on every loop iteration? I'm not too sure, TBH.
The language spec does not (precisely) specify the reallocation policy. AFAIU, in practice a reallocation doubles the capacity, but I am not too sure whether the base number is the original capacity or the original length. So I cannot confidently reason about how many reallocations your code would cause.
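We can at least observe what the current runtime does; a sketch (the printed growth pattern is a runtime implementation detail that has changed between Go versions, so treat it as an observation rather than a guarantee):

package main

import "fmt"

func main() {
    var s []int
    prev := cap(s)
    for i := 0; i < 1_000_000; i++ {
        s = append(s, i)
        if cap(s) != prev { // append had to reallocate the backing array
            fmt.Printf("len=%d cap=%d\n", len(s), cap(s))
            prev = cap(s)
        }
    }

    // Truncating on the left does not reallocate anything by itself: the new
    // slice keeps pointing into the same backing array, with a smaller cap.
    t := s[1:]
    fmt.Println(cap(s), cap(t)) // cap(t) == cap(s)-1
}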
no subject
The way you're doing it in your last example is the naive way people implement a FIFO queue (in both Go and Python). It works, but it is suboptimal b/c the memory for the popped elements is not reclaimed until a reallocation.
A better way to do a FIFO is a ring-buffer-like structure.
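A minimal sketch of what I mean (the type name, int elements, and growth policy are all made up for illustration; no error handling, no shrinking):

package main

import "fmt"

// queue is a FIFO backed by a ring buffer: slots freed by pop are reused by
// later pushes, so the footprint tracks the number of live elements.
type queue struct {
    buf        []int
    head, size int
}

func (q *queue) push(v int) {
    if q.size == len(q.buf) { // full: grow and re-pack starting at index 0
        nb := make([]int, 2*len(q.buf)+1)
        for i := 0; i < q.size; i++ {
            nb[i] = q.buf[(q.head+i)%len(q.buf)]
        }
        q.buf, q.head = nb, 0
    }
    q.buf[(q.head+q.size)%len(q.buf)] = v
    q.size++
}

func (q *queue) pop() int {
    v := q.buf[q.head]
    q.head = (q.head + 1) % len(q.buf)
    q.size--
    return v
}

func main() {
    var q queue
    for i := 1; i <= 5; i++ {
        q.push(i)
    }
    fmt.Println(q.pop(), q.pop(), q.pop()) // 1 2 3
    for i := 6; i <= 9; i++ {
        q.push(i) // wraps around, reusing the slots freed by the pops above
    }
    for q.size > 0 {
        fmt.Print(q.pop(), " ") // 4 5 6 7 8 9
    }
    fmt.Println()
}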
no subject
Yes, almost. The difference really isn't the memory retention, but when reallocation happens and what the footprint requirement is.
I.e. if you pop from the back, you retain just as much memory as when you pop from the front. The difference is that if you pop from the front, it will trim to size upon the next push, but if you pop from the back, you will retain the historically biggest array.
But we can't really tell how that works, because it is a slice, an abstraction. So it may well be a ring buffer underneath, why not?
Of course, the experiment shows that the implementation isn't a ring buffer:
https://tio.run/##bY7NCsIwEITv@xTLnhIoUvFW6Dt4Lz3E/qXUbENNQSM@e4ypCIqngfm@gRnmEKxqJjV0aNTIAKOx8@IEUm8coQToV24SExLvgIgGizIWUyeq@uKWkYcM8wwPcoMlKms7boXJkNpZUwx1aihhTttqX9R/5OvNv60qryMi7z1tq4/H0dNaf3tudQSvIn7eHeMjd2Zh5G/DEh4hPAE
Append behaviour was surprising, though. I.e. that appending to n modified m.
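The linked snippet isn't reproduced here, but a minimal reconstruction of that kind of experiment (values made up; m and n as in the comment) could look like this:

package main

import "fmt"

func main() {
    m := []int{1, 2, 3, 4, 5}
    n := m[1:3] // n shares m's backing array: len(n) == 2, cap(n) == 4

    n = append(n, 99) // fits within cap(n), so it writes into m's storage

    fmt.Println(m) // [1 2 3 99 5] -- appending to n modified m
    fmt.Println(n) // [2 3 99]
}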
no subject
I think it's 100% by the spec and not surprising at all.
In Go, re-slicing works against the original buffer. This design choice is well understood and documented, but it is not the only one possible. In Python, slicing makes a (shallow) copy of the relevant chunk of the buffer, so unlike in Go, the original and the new slice are reasonably well decoupled.
I am guessing... if this Go behavior is surprising to you, then maybe some of my previous comments weren't quite clear, or even didn't make much sense?
no subject
Well, it is surprising to those familiar with Java byte buffers. Slicing a buffer there creates a slice of the underlying buffer, but it does not share its limit: you can't go beyond the last byte available in the buffer at the time of slicing, even if the capacity allows for it.
So to speak, cap(n) == len(n) immediately after slicing.
no subject
Go's ways are very similar to C's ways, so old C ppl find Go easy. In particular, Go slicing resembles C pointer arithmetic when it's used, say, to move a window over a larger array.
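E.g., a toy sketch (made-up data) of sliding a fixed-size window by re-slicing:

package main

import "fmt"

func main() {
    data := []int{10, 20, 30, 40, 50}

    // Each iteration only moves the slice header forward; nothing is copied,
    // much like advancing a pointer over the same C array.
    for w := data; len(w) >= 3; w = w[1:] {
        fmt.Println(w[:3]) // [10 20 30], [20 30 40], [30 40 50]
    }
}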
no subject
In particular, a Go slice is definitely not a ring buffer.
no subject
I am afraid this:
v, c.L[j] = c.L[j][0], c.L[j][1:]
is a memory leak (sorta). It would not be an issue if the slice elements were, say, int or float, though.
Two ways to lose memory here:
1. The string can "never" be GC-ed. This is the worst one.
2. The zeroth element of the slice is lost until the slice is fully reallocated. I _think_ you cannot just re-slice to reclaim/reuse this memory.
The right way to do what you wanted is this:
v, c.L[j], c.L[j][0] = c.L[j][0], c.L[j][1:], nil
no subject
E.g.
v, c.L[j], c.L[j][0] = c.L[j][0], c.L[j][1:], nil
The c.L[j][0] on the left: does it refer to the c.L[j] from before or after the assignment of c.L[j][1:] to it?
no subject
These tuple assignments happen in two phases, and in that respect they do resemble an SQL update: all expressions on the right, plus the operands and indices of the index expressions on the left (the c.L[j] and the 0 in c.L[j][0]), are evaluated first, using the values from before the statement; only then are the assignments carried out, left to right. So the c.L[j][0] on the left refers to the old c.L[j], i.e. the slot that is being popped gets cleared, not the new head of the slice.
Of course, I could write a quick test and/or even dig up a link to the relevant place in the spec. Please forgive me for being too lazy.
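For reference, a quick test along those lines, with a made-up []string bucket (the element type of c.L[j] isn't shown in the thread; for []string the value to store is "" rather than nil):

package main

import "fmt"

func main() {
    s := []string{"a", "b", "c"}
    old := s // keep the original header so we can inspect the dropped slot

    var v string
    // Per the spec, the index operands on the left (the s and the 0 in s[0])
    // and all expressions on the right are evaluated first, using the old s;
    // only then are the three assignments carried out, left to right.
    v, s, s[0] = s[0], s[1:], ""

    fmt.Printf("%q\n", v)      // "a"
    fmt.Printf("%q\n", s)      // ["b" "c"] -- the new head is untouched
    fmt.Printf("%q\n", old[0]) // ""        -- the popped slot was cleared
}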
no subject
PriorityQueue, I guess.
no subject
When ppl say "priority queue" they typically mean a min- or max-heap. If your guess is the same, then it is not the best guess: a heap gives you O(log N) time for push/pop, and this problem has an O(1) (or very close to that) solution.
no subject
Well, a priority queue is rather an interface than a particular data structure. Any implementation of a priority queue, I mean.
no subject
Then this statement is not specific enough to be useful. Here's what I mean:
Yes, the best solution I know of is technically a priority queue, but it is a very unusual/specialized one.
I solved this problem in a real interview with a conventional priority queue (a max-heap). From the interviewer's POV my solution was not passable, and rightfully so (IMHO).
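For context, here is a sketch of the kind of specialized structure being hinted at, reconstructed along the lines of the c.L[j] buckets discussed above (the names, the string element type, and the details are my assumptions, not necessarily the intended solution): a count per value plus a bucket of values per count gives O(1) push and O(1) pop-the-most-frequent, with ties broken by recency.

package main

import "fmt"

// freqStack pops the most frequent value; ties go to the most recently pushed.
type freqStack struct {
    count   map[string]int // live copies of each value
    buckets [][]string     // buckets[c] = values whose count reached c, in push order
    max     int            // highest count currently present
}

func newFreqStack() *freqStack {
    return &freqStack{count: map[string]int{}, buckets: [][]string{nil}}
}

func (f *freqStack) push(v string) {
    f.count[v]++
    c := f.count[v]
    if c > f.max {
        f.max = c
        f.buckets = append(f.buckets, nil)
    }
    f.buckets[c] = append(f.buckets[c], v)
}

func (f *freqStack) pop() string {
    b := f.buckets[f.max]
    v := b[len(b)-1] // most recently pushed among the most frequent
    b[len(b)-1] = "" // clear the slot so the string can be collected
    f.buckets[f.max] = b[:len(b)-1]
    f.count[v]--
    if len(f.buckets[f.max]) == 0 {
        f.max--
    }
    return v
}

func main() {
    f := newFreqStack()
    for _, v := range []string{"x", "y", "x", "z", "x", "y"} {
        f.push(v)
    }
    fmt.Println(f.pop(), f.pop(), f.pop()) // x y x
}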
no subject
So, it's about interviewers. You've worked in companies; you know what kind of people are there. Random. So... that's what you get. Besides, if you show your expertise, they'll be jealous.
no subject
Still, I believe the problem and the solution are amusing enough regardless of all that.
no subject
I mostly post problems for which all three are true:
1. I failed to solve the problem at some real interview.
2. I learned the solution or solved it later.
3. I see the solution as amusing/useful/interesting enough to share.
no subject
I see. It was a philosophical question. But I was also curious... but not much. :)