It forces you to have an int that's always accessible via a reference, which may not be possible if the int is part of another object.
It also prevents you from monitoring writes to that int -- to perform boundary checks, catch illegal writes, or make other internal state changes depending on the data written.
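A minimal sketch of that contrast (the class and names here are illustrative, not from the thread):

#include <stdexcept>

class Widget
{
public:
    Widget() : m_size(0) {}

    // Exposing the int by reference: any caller can write anything, unchecked.
    int& size() { return m_size; }

    // A setter can enforce bounds and react to the value being written.
    void setSize(int size)
    {
        if (size < 0 || size > MaxSize)
            throw std::out_of_range("size out of range");
        m_size = size;
        // ...any other internal state that depends on m_size could be updated here...
    }

private:
    static const int MaxSize = 1024;
    int m_size;
};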
If it's illegal writes, then likely the method needs to be set to protected or private.
If it's boundary checking, chances are its modification is part of a bigger function anyway (and thus not merely setting it -- e.g. setting the size of a dynamic array, or accessing an element of an array).
Not sure what you mean by 'other internal state changes'?
Point was, you can merge functions into one where applicable.
Really the major reason for my interjection in this thread was your ill-advised operator abuse.
That's a valid enough point. I am one of those people who likes to make it so you can do most things in a few short commands. I've got the numerous classes working, I just need to (re)build the FileProc/HTMLFileProc/StringTokeniser classes where applicable. Despite the protests over the ++/-- operators (which I can understand), you'll like the assignment operators.
Point is, if vector (as a std) does scary inefficient stuff like that
I assume you're talking about copying on resize.
[vector] Recopying on every resize.
It doesn't. It recopies when you exceed its capacity. The capacity can be preset with a call to reserve.
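For example, a small sketch of how reserve presets the capacity so later push_backs don't trigger a reallocating copy:

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    v.reserve(1000);                 // one allocation up front

    std::size_t cap = v.capacity();
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);              // size never exceeds capacity, so no recopy

    std::cout << (v.capacity() == cap) << '\n';   // prints 1: capacity unchanged
    return 0;
}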
Exceeding its capacity (or changing its size) is a resize. It has to copy when it resizes or the data will be lost.
vector guarantees contiguous memory allocation. That's the whole point. It's fast random access. There's no way to keep it all contiguous in memory if you don't move everything when you run out of space.
It's not inefficient if you use it right.
Aside from a hat-tip to realloc, the first statement is true, but my point is it needs to copy less frequently rather than not at all. The latter statement is true, but classes shouldn't have the capacity to be used wrongly.
It's a kinda bizarro universe dynamic array.
They opt for speed over safety.
It's possible to have both. For example, in TemplateList, CurrRight or CurrLeft will return false if you've reached the start/end (or the list hasn't been initialised yet), but they won't go out of bounds. If you access an item with an uninitialised list, you obviously get a crash (it's avoidable by supplying a static hardcopy, but I get the impression this would be frowned upon), but it's down to the user to ensure there's something there.
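A hypothetical sketch of that cursor behaviour (TemplateList's real interface isn't shown in the thread, so the names and layout here are guesses):

// Doubly linked list cursor: stepping past either end returns false
// instead of walking off into invalid memory.
template <typename T>
class TemplateList
{
public:
    TemplateList() : m_head(0), m_tail(0), m_curr(0) {}

    bool CurrRight()
    {
        if (m_curr == 0 || m_curr->next == 0)
            return false;            // end of list, or list not initialised
        m_curr = m_curr->next;
        return true;
    }

    bool CurrLeft()
    {
        if (m_curr == 0 || m_curr->prev == 0)
            return false;            // start of list, or list not initialised
        m_curr = m_curr->prev;
        return true;
    }

    // Accessing the current item of an uninitialised list is the caller's problem.
    T& Curr() { return m_curr->data; }

private:
    struct Node { T data; Node* prev; Node* next; };
    Node* m_head;
    Node* m_tail;
    Node* m_curr;
};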
I am not sure exactly how walking a list could even go out of bounds.
I've been there before. It's easy to find fault in existing implementations and think you can do better. But when you consider how robust and flexible the standard container classes are, the performance and elegance they offer is surprisingly high. They're very reusable.
I am sure they are very reusable. In normal circumstances, I try to use the C standard library's functions as much as possible rather than create my own. My problem is, the std and STL classes usually require you to know precisely what they do, how they do it, and what functions are available. By the time I've researched however many classes, it's probably quicker for me to construct my own class that I understand inside out, and can extend its functions if the need arises.
The problem I find is the 'just one function...' problem, where you reach 50% of your code and realise the class or function set you're using hasn't got that 'just one function...' you need (like case insensitivity with strstr, for example, or dynamic memory allocation with sprintf on Windows). At which point you either have to draft your own or implement a hacky workaround.
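For instance, a minimal sketch of the case-insensitive strstr the standard library doesn't give you (stristr is a made-up name, not a standard function):

#include <cctype>
#include <cstddef>

// Case-insensitive substring search: returns a pointer to the first match
// in haystack, or a null pointer if needle does not occur.
const char* stristr(const char* haystack, const char* needle)
{
    if (*needle == '\0')
        return haystack;

    for (; *haystack != '\0'; ++haystack)
    {
        std::size_t i = 0;
        while (needle[i] != '\0' && haystack[i] != '\0' &&
               std::tolower((unsigned char)haystack[i]) ==
               std::tolower((unsigned char)needle[i]))
        {
            ++i;
        }
        if (needle[i] == '\0')
            return haystack;         // whole needle matched at this position
    }
    return 0;
}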
I know the hassle of reinventing the wheel. I've been there numerous times. But I've also run into the hassle of not reinventing certain parts of the wheel.
Optimizing one area of the program won't matter if the bottleneck is elsewhere. Get it working first, optimize later.
Optimisation usually requires you to re-write the foundation code in the first place. Which usually means it's better to scrap and redo the code (another code iteration) than to waste hours re-writing a function with a 'jumper string' pulling effect (pull one function, another comes undone).
I find it better to optimise first. More specifically: write short example code to test assumptions; if the assumptions are wrong, update; if they're correct, expand, clarify, and incorporate. Test the incorporated function. Repeat. Doing this keeps bugs to such a minimum you'd not believe it.
And if it turns out that std::list works fine, you just saved yourself a good month of wasted time and effort.
It's been 6 days since the first post. I have 6 main classes (TemplateList, TemplateArray, CharList, CharArray, TemplateListAdv, CharListAdv). The latter two are the list/dynamic-array hybrids. All the classes can assign to each other (fully tested, no apparent issues). There are 3 additional helper classes. I spent 2 days on Stack Overflow asking questions to clarify assumptions about inheritance, templates and assignment operators.
Bear in mind I type like a zoned out monkey when it comes to code.
From TemplateList you could easily derive the ImageQueue class (subclass, define as sf::Image or custom class), AnimatedSprite (subclass, single sf::Sprite variable, ImageQueue internal, sf::Clock), etc etc.
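A rough sketch of the AnimatedSprite idea, assuming SFML 2's texture-based sprite API (sf::Texture / sf::Sprite::setTexture) and a plain std::vector standing in for the ImageQueue described above:

#include <SFML/Graphics.hpp>
#include <cstddef>
#include <vector>

// Cycles a sprite through a list of frames, advancing on a timer.
class AnimatedSprite
{
public:
    explicit AnimatedSprite(float frameTime) : m_frameTime(frameTime), m_frame(0) {}

    // Caller keeps the texture alive; only a pointer to it is stored.
    void addFrame(const sf::Texture& texture) { m_frames.push_back(&texture); }

    void update()
    {
        if (m_frames.empty())
            return;
        if (m_clock.getElapsedTime().asSeconds() >= m_frameTime)
        {
            m_frame = (m_frame + 1) % m_frames.size();
            m_sprite.setTexture(*m_frames[m_frame]);
            m_clock.restart();
        }
    }

    const sf::Sprite& sprite() const { return m_sprite; }

private:
    std::vector<const sf::Texture*> m_frames;
    sf::Sprite m_sprite;
    sf::Clock m_clock;
    float m_frameTime;
    std::size_t m_frame;
};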
Just mentally designing FileProc before I move on.
What's the point of having the list if you have a vector? They're contradictory storage containers.
vector has slow insertion and fast random access;
list has fast insertion and no random access.
combining both gives you the worst of both worlds. You can't access the list randomly and you can't insert/remove items in the vector quickly.
Maybe I'm missing the point here.
Actually, combining both gives you the best of both worlds: fast insertion with random access. The downside would be memory usage (but given it contains pointers rather than copies, the overhead wouldn't be that much).
Naturally, if you alternate between adding a single item and then a single random access, you could force it to be inefficient. However, it's more intended that you add a load of items, access them at random, do some work, access them at random again, then add more items, and repeat.
It'd be good, for example, for loading a stack of images, then letting you modify/add images to it later on whilst also letting you rotate which image is selected (using sf::Clock, for example) via the [] operator.
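A minimal sketch of one way such a hybrid could work -- a list owning the items plus a vector of pointers as a random-access index (the names are invented for illustration, not the actual TemplateListAdv):

#include <cstddef>
#include <list>
#include <vector>

// std::list owns the elements (stable addresses, cheap insertion) while a
// std::vector of pointers provides O(1) random access via operator[].
template <typename T>
class HybridList
{
public:
    void add(const T& value)
    {
        m_items.push_back(value);           // no existing element ever moves
        m_index.push_back(&m_items.back());
    }

    T& operator[](std::size_t i)             { return *m_index[i]; }
    const T& operator[](std::size_t i) const { return *m_index[i]; }

    std::size_t size() const { return m_index.size(); }

private:
    std::list<T>    m_items;   // owns the data
    std::vector<T*> m_index;   // one pointer per item; only this ever reallocates
};

Inserting or erasing in the middle would still mean shuffling the pointer index, but moving pointers is far cheaper than copying whole images.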