Currently, I am doing a project for my university, where I implement well-known data structures (array, linked list, BST, etc.) and have to measure the time some operations on them require. For example, the first one for me was the array. I've measured the time for adding an element in the middle of an array (moving the later values along, of course, so n = n + 1). That of course gave me O(n), where n is the number of elements. Now I have to check adding an element to the beginning of the array and appending an element to it (adding to the end). I've got 2 simple algorithms (I cannot use STL or boost::):

Adding to the beginning:

void array::beginning_append(int value)
{
    int *temp = new int[n + 1];                 // allocate a bigger buffer
    memcpy(temp + 1, table, sizeof(int) * n);   // copy old elements, shifted by one slot
    n = n + 1;
    temp[0] = value;                            // new element goes to the front
    delete[] table;
    table = new int[n];                         // allocate again...
    memcpy(table, temp, sizeof(int) * n);       // ...and copy a second time
    delete[] temp;
}

Appending to the end:

void array::append(int value)
{
    int *temp = new int[n + 1];              // allocate a bigger buffer
    memcpy(temp, table, sizeof(int) * n);    // copy the old elements
    temp[n] = value;                         // new element goes to the end
    n = n + 1;
    delete[] table;
    table = new int[n];                      // second allocation
    memcpy(table, temp, sizeof(int) * n);    // second copy
    delete[] temp;
}

Those are methods of the array class, where table and n are members of this class. The two don't differ that much, so I thought their times would be the same, and they are (I checked with QueryPerformanceCounter for big numbers of elements, like 500000, and it gave me O(n) complexity). But my lecturer said that adding to the beginning will have O(n) computational complexity, while adding or removing an element at the end will have a complexity of O(1), i.e. constant, no matter the number of elements. So I would like to ask you guys whether my algorithms are bad, I mean, whether they do unnecessary stuff and that's why they depend on the number of elements, or whether my lecturer was just wrong.
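For reference, the measurement is roughly along these lines (a simplified sketch, not my exact harness; it assumes the array class above has a default constructor that leaves table = nullptr and n = 0):

#include <windows.h>   // QueryPerformanceCounter / QueryPerformanceFrequency
#include <iostream>

int main()
{
    const int N = 500000;          // elements already in the array before the timed call
    array a;
    for (int i = 0; i < N; ++i)
        a.append(i);               // fill the array first

    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&start);
    a.beginning_append(-1);        // the single operation being timed
    QueryPerformanceCounter(&stop);

    double us = 1e6 * double(stop.QuadPart - start.QuadPart) / double(freq.QuadPart);
    std::cout << "beginning_append at n = " << N << ": " << us << " us\n";
    return 0;
}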

  • prepend instead of begging append? :) Commented Mar 11, 2017 at 11:32
  • I prolly should do that :D When I'm coding, my ability to speak or write proper English decreases, so e.g. I write "beggining" instead of "beginning". Commented Mar 11, 2017 at 11:33
  • The times don't differ because in both cases you are reallocating and copying the entire array. If you want to understand why appending to the end is constant, read up about amortized constant time. Commented Mar 11, 2017 at 11:33
  • Welcome to programming. Commented Mar 11, 2017 at 11:36
  • Try this one stackoverflow.com/questions/13215645/… Commented Mar 11, 2017 at 11:40

2 Answers


Both complexities are O(n), because you are copying the entire array.

See more about complexity of memcpy() here.


Tip: You could easily skip the second copy by freeing the memory table is pointing to and then making it point to temp. That way you will copy only once, not twice. However, the overall time complexity will not change.

Example:

void array::prepend(int value)
{
    int *temp = new int[n + 1];                 // one allocation
    memcpy(temp + 1, table, sizeof(int) * n);   // one copy, shifted by one slot
    n = n + 1;
    temp[0] = value;
    delete[] table;                             // free the old buffer
    table = temp;                               // and simply take over the new one
}

Pro tip: How do you detect/avoid Memory leaks in your (Unmanaged) code?

Hunch: A clever implementation of realloc() should execute faster than a brute-force memcpy() to a new array. The reason is that if realloc() is able to grow/shrink your array without having to copy the whole thing to a new location, it will be faster; the copy can still occur when there is simply not enough contiguous space, so realloc() does not always mean O(1).
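For illustration, a C-style append along those lines could look like this sketch (the struct and function names are made up for the example, and it assumes the buffer is managed with malloc()/realloc()/free() rather than new[]/delete[], since the two must not be mixed):

#include <cstdlib>   // std::realloc, std::free
#include <iostream>

// Illustrative C-style dynamic array, not the asker's class.
struct c_array {
    int *table = nullptr;
    std::size_t n = 0;
};

bool c_append(c_array &a, int value)
{
    // realloc() may grow the block in place (cheap) or move it (an O(n) copy),
    // so a single call is not guaranteed to be O(1).
    int *grown = static_cast<int *>(std::realloc(a.table, (a.n + 1) * sizeof(int)));
    if (!grown)
        return false;              // on failure the original block is still valid
    a.table = grown;
    a.table[a.n++] = value;
    return true;
}

int main()
{
    c_array a;
    for (int i = 0; i < 10; ++i)
        c_append(a, i);
    std::cout << "last element: " << a.table[a.n - 1] << "\n";
    std::free(a.table);
}

In practice you would still combine this with growing the buffer in larger steps, as the other answer here describes, rather than by one element per call.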


8 Comments

There is this line: n = n + 1
Point is: why do you copy the array twice? Just make table point to temp.
Yes, I know, that's right. I haven't used C++ for a long time, and I was curious whether, if I make table point to temp, I then have to delete[] temp or not, but now I think I don't :D Am I correct?
@Frynio Memory that has been dynamically allocated must be deallocated by you; see my updated answer, please.
I know, I did delete[] table and then table = temp, I guess that's OK, right?

Adding to the end can be achieved in O(1) when the buffer is already big enough to hold the new element. Usually, when a reallocation is done, the buffer size is increased by more than what is currently needed, so that reallocations don't have to happen too often.

In that case you get an amortized complexity of O(1), with a worst-case complexity of O(n) for the appends where a reallocation occurs.
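A minimal sketch of that idea (the class name, the doubling growth factor, and the int element type are illustrative choices):

#include <cstring>   // std::memcpy
#include <iostream>

// Growable array: the capacity doubles when it runs out, so the O(n) copy
// happens only rarely and append() is O(1) amortized.
class growable_array {
public:
    growable_array() = default;
    growable_array(const growable_array &) = delete;            // non-copyable for simplicity
    growable_array &operator=(const growable_array &) = delete; // (raw owning pointer)
    ~growable_array() { delete[] table; }

    void append(int value)
    {
        if (n == cap) {                        // out of room: reallocate with room to spare
            std::size_t new_cap = cap ? cap * 2 : 1;
            int *temp = new int[new_cap];
            if (n)
                std::memcpy(temp, table, sizeof(int) * n);
            delete[] table;
            table = temp;
            cap = new_cap;
        }
        table[n++] = value;                    // usual case: just write the element, O(1)
    }

    std::size_t size() const { return n; }

private:
    int *table = nullptr;
    std::size_t n = 0;    // elements in use
    std::size_t cap = 0;  // allocated capacity (>= n)
};

int main()
{
    growable_array a;
    for (int i = 0; i < 500000; ++i)
        a.append(i);
    std::cout << "size: " << a.size() << "\n";
}

Because each reallocation doubles the capacity, an element is copied only a constant number of times on average, which is where the amortized O(1) comes from.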

10 Comments

I do have to allocate the table dynamically, and the size has to be exactly as big as the number of elements.
The STL vector, for example, has a size and a capacity member; size is of course always exact.
If you can use C-style memory management, realloc() can speed up allocation and let the append be O(1).
@paddy (s)he cannot use STL.
