Code pasting, filtering of lines, or search/replace should also be considered. That said, moving blocks is probably easy if you know the size of the edits.
I agree, but as you said, in those cases the final size is known, so it is not a series of one-character operations (which would have quadratic complexity and be definitely noticeable).
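The difference between the two cases can be sketched in a few lines. This is an illustrative example, not code from the thread; the function names are made up. Inserting characters one at a time into a flat buffer shifts the tail on every insertion, while a block insert of known size shifts it once:

```javascript
// One-character inserts: each splice shifts everything after `pos`,
// so inserting k chars near the front of an n-char buffer costs
// on the order of k * n element moves in total.
function flatInsertChars(buffer, pos, text) {
  for (const ch of text) {
    buffer.splice(pos, 0, ch); // shifts the whole tail each time
    pos += 1;
  }
  return buffer;
}

// Known-size block insert: one splice, so the tail is shifted once.
function flatInsertBlock(buffer, pos, text) {
  buffer.splice(pos, 0, ...text);
  return buffer;
}
```

Both produce the same buffer; only the number of element moves differs, which is why a paste or search/replace with a known final size avoids the quadratic behaviour.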
Based on what profiler shows me rendering takes much more time and that's what I have to optimize.
That's been my experience playing around with text editing too; the time taken to modify the buffer is tiny in comparison to rendering the text itself. It is here that, e.g., updating only the regions that changed will make a noticeable improvement in responsiveness.
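A minimal sketch of that idea, assuming a line-based view: track the range of lines touched by an edit and repaint only those on the next frame. `renderLine` here is a hypothetical per-line drawing callback, not any real editor's API:

```javascript
// Tracks the inclusive range of dirty (changed) line indexes.
class DirtyRegion {
  constructor() { this.first = Infinity; this.last = -1; }
  markLine(i) {
    this.first = Math.min(this.first, i);
    this.last = Math.max(this.last, i);
  }
  get empty() { return this.last < this.first; }
  clear() { this.first = Infinity; this.last = -1; }
}

// Redraws only the dirty lines and returns how many were painted;
// an unchanged buffer costs nothing per frame.
function repaint(lines, dirty, renderLine) {
  if (dirty.empty) return 0;
  let painted = 0;
  const last = Math.min(dirty.last, lines.length - 1);
  for (let i = dirty.first; i <= last; i++) {
    renderLine(i, lines[i]); // redraw only lines in the dirty range
    painted++;
  }
  dirty.clear();
  return painted;
}
```

Real editors also have to handle scrolling and edits that change line counts, but the core win is the same: rendering cost becomes proportional to the size of the change, not the size of the file.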
I wonder what data structure the Atom text editor uses --- it's famously slow on large files, but I doubt that's where the bottleneck is; it's more like an IDE so parsing and rendering are taking the bulk of the time. It is written in JavaScript and browser-based, but having seen JS run a PC emulator and boot a usable Linux kernel, I don't think that is the bottleneck either.
> I wonder what data structure the Atom text editor uses --- it's famously slow on large files, but I doubt that's where the bottleneck is; it's more like an IDE so parsing and rendering are taking the bulk of the time. It is written in JavaScript and browser-based, but having seen JS run a PC emulator and boot a usable Linux kernel, I don't think that is the bottleneck either.
There was a quite nice post [1], discussed here [2] recently, on text management in web-based text editors.
Quote:
> Every time text is inserted, it's inserted as one "chunk", then split up by its line endings. This is done by invoking a regular expression engine. Personally I think this is overkill, but it certainly lets Atom continue to be easily modifiable. I can imagine the same thought is running through a few people reading this. It pushes all the new lines to a stack (or more technically: a regular JavaScript array). Already I don't want to find myself opening a large file. It then uses "spliceArray" to replace a range of lines.
> So what is the actual data structure of the great Atom text buffer?...
> @lines = [''];
> A regular JavaScript array. Ooof.
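The scheme the quote describes can be sketched in miniature. This is an illustrative reconstruction, not Atom's actual code: the buffer is a plain array of line strings, an insert splits the incoming chunk on line endings with a regex, and a splice replaces a range of lines:

```javascript
// Minimal array-of-lines buffer in the style the quote describes.
class ArrayOfLinesBuffer {
  constructor() { this.lines = ['']; }

  // Insert `chunk` at (row, col). The chunk is split by its line
  // endings via a regular expression, then spliced into the array.
  insert(row, col, chunk) {
    const parts = chunk.split(/\r\n|\r|\n/);
    const line = this.lines[row];
    parts[0] = line.slice(0, col) + parts[0];          // prepend head of line
    parts[parts.length - 1] += line.slice(col);        // append tail of line
    this.lines.splice(row, 1, ...parts);               // replace a range of lines
  }

  getText() { return this.lines.join('\n'); }
}
```

Every multi-line insert reallocates and shifts the tail of the array, which is why this structure degrades on large files compared to, say, a rope or a piece table.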
I think that in JS, simple operations on arrays of strings have much more impact than in C. A few things off the top of my head: the additional metadata that has to be managed behind the scenes, and garbage collection. But I don't really know how it all adds up in overall performance. Certainly performance would look different if the text were rendered by a dedicated library instead of the advanced layout engine that lives inside a modern browser. It could be an interesting project to write an editor in JS but use, for example, Pango [3] bindings to render the text.