I think vibe coding would greatly struggle with large open source projects unless your planning and your guidance on optimal coding style were exceptional. However, for those small open source tools that many of us use daily and find invaluable, I actually think vibe coding is ideal. It can make a functional version quickly, you can iterate and improve it, and you feel no loss in making it free to use.
I was very sceptical, but I will admit vibe coding has a place; exactly what that place is remains to be determined. It won't help most people, but it can help some, in some situations.
> For those small open source tools that many of us use daily and find invaluable, I actually think vibe coding is ideal.
If they don't exist, AND the author is committed to maintaining them instead of just putting them online, sure. But one issue I see is that a lot of the tools you describe already exist, so creating another one (using code-assist tools or otherwise) just adds noise IMO.
The better choice is to research and plan (as you say in your first sentence) before committing resources. The barrier to "NIH" is lowered by code assistants, which risks reducing collaboration in open source land in favor of "I'll just write my own".
Granted, "I'll write my own" has always felt like it has a lower barrier to entry than "I'm going to search for this tool and learn to use it".
There are three or four projects I've always wanted to do, but they were front-loaded with a lot of complexity and drudgery.
Maybe the best feature of vibe coding is that it makes the regret factor of poor early choices much lower. It's kind of magic to go "you know what, I was wrong, let's try this approach instead" and not have to spend huge amounts of time fixing things or rewriting 80% of the project.
It's made it a lot more fun to try building big projects on my own, where I would previously go into decision paralysis or prematurely optimize and never get to the meat of the project or the core learning.
It's also been nice to have agents review my projects for major issues, so I feel more confident sharing them.
> go into decision paralysis or prematurely optimize
Setting out to implement a feature only to immediately get bogged down in details that I could probably get away with glossing over. LLMs short-circuit that by just spitting something out immediately. Of course it's of questionable quality, but once you get something working you can always come back and improve it.
The authors try to study the effect of people not engaging directly with OSS projects because they replace that engagement with a gang of chatbots, and they draw the conclusion that this lack of contact with actual people means they'll be less likely to help finance OSS development.
I think one of the things that will need to be embraced is carefully curating .md context files to give the prompts better/shared direction to contributors. Things like any new feature or fix should include a test case (in the right place), functions should re-use existing library code wherever possible, function signatures should never change in a backwards-incompatible way, any code changes should pass the linter, etc etc. And ideally ensure the agent writes code that's going to be to the maintainer's "taste".
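For what it's worth, a rough sketch of what such a context file might contain (the file name, paths, and commands below are invented; adapt them to whatever your agent tooling actually reads, e.g. an AGENTS.md or CLAUDE.md at the repo root):

    # Guidelines for contributors and coding agents

    - Every new feature or bug fix must come with a test case under tests/,
      mirroring the module it touches.
    - Reuse existing helpers from src/lib/ before writing new ones.
    - Never change a public function signature in a backwards-incompatible way;
      add an optional parameter or a new function instead.
    - All changes must pass "make lint" and "make test" before opening a PR.
    - Match the surrounding code style; keep diffs small and focused.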
I haven't worked out how to do this for my own projects.
Once you've set that up, it's not too hard to imagine an AI giving an initial PR assessment... discarding the worst AI slop, offering some stylistic feedback, or flagging performance concerns.
I'm delighted it's now available on ARM, as I can have it on my lab computer (a Pi) and the Pocket Reform that I use day to day. I still need to come up with an optimal folder system for it, though.
I was a FileMaker developer and I wouldn't say that. The way to develop on FileMaker is to be an expert in workarounds that would stun an outsider.
It was a good data processing tool in earlier versions. I always liked the v4 splash screen picture: a file cabinet the size of an office building. It was a very good metaphor. But even then there were always rather hard and strange limitations on what you could do, and the main skill you developed was inventing disgustingly clever ways to overcome them.
(Example: sometimes people want to sort the entries in a pop-up menu in a custom way, but there is no built-in capability to do that: the entries are always sorted alphabetically. So one of the "solutions" is this: you give each entry a prefix that consists of zero-width spaces. The first entry gets one space, the second two spaces, and so on. As a result, when the entries are sorted, they are sorted by that prefix, but since the prefix is invisible, the user only sees the entries themselves. If you are careful and lucky and the users aren't too inventive, these spaces won't contaminate the data, although generally contaminated data is the normal state of most data in FileMaker.)
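Roughly the same trick, sketched in Python rather than FileMaker's own sort/collation (which may treat these characters differently); the point is just that the invisible prefix ends up controlling the order:

    ZWSP = "\u200b"  # zero-width space: invisible, but still part of the string

    desired_order = ["Urgent", "Normal", "Low"]  # custom, non-alphabetical order

    # Prefix the first entry with one ZWSP, the second with two, and so on
    entries = [ZWSP * (i + 1) + label for i, label in enumerate(desired_order)]

    # A plain "alphabetical" sort now effectively orders by the prefix,
    # while the visible text the user sees is unchanged
    for entry in sorted(entries):
        print(entry)  # Urgent, Normal, Low -- in the custom order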
> If you are careful and lucky and the users aren't too inventive these spaces won't contaminate the data, although generally contaminated data is the normal state of most data in FileMaker.
Thanks for the example. I have a copy of v19 I've been meaning to work with. Have such workarounds become less necessary?
There is still a lot. E.g. that thing about sorting a pop-up has been there since at least v4 and, of course, there were requests to improve it, but in v19 it is still the same. FileMaker has always been mostly concerned with adding more features it can highlight, and bugfixes are not that exciting. They do fix some, but there is a lot of old baggage.
It is not outright bad, of course. It is very easy to create a relational-like database with several tables and give them decent layouts that do a good job of displaying the data and jumping between tables. These layouts are directly searchable; you can sort, select and, as they say, "massage" the data. All this doesn't even require much scripting; you'll get rather far with simple single-action buttons. If you use it as a personal tool you can get very good mileage out of it.
But as you go deeper you'll see a rather strange mix of things that do not really work well together. There are calculated fields; not a bad concept in general, but in FileMaker this is the only way to search or sort by a calculated criterion: you have to turn it into a field. As a result, a nice-looking table starts to get populated by calculated fields for various purposes. Those that span related tables are marked as "unstored". These are going to be slower, especially over a network. In fact they can be so slow that some developers go out of their way to imitate such calculations: they just add a normal field and try to make sure it always stays updated as the data changes. Which is not that simple when there is a network and multiple users.
Then there are scripts. Again, a good thing, but the original idea was that they would simply resemble user actions: go to that layout, enter that field, etc. This means there is some context, similar to what the user sees, and what is not visible from that context won't be visible to the script. E.g. you cannot just read a field in some table even if you know the ID of the record: you have to go there and find that record (enter Find mode, set fields, perform the find), or go somewhere else that can see that record over a relationship.
There is SQL, right. But that SQL is given to you only in the form of the 'ExecuteSQL()' function, and it can only read data. (There is also a script step, but that one is about connecting to external data sources.) If you want to change the data, you must use a plug-in: plug-ins have access to SQL without this limitation. (But they do not have access to anything else in FileMaker; they are basically just additional functions or script steps you can use in your expressions or scripts: they receive data, send back a result, can evaluate an expression or do SQL, and that's it.) You can use placeholders for data ('WHERE "abc" = ?'), but there are no placeholders for table and field names. FileMaker is rather smart when it comes to names: it tracks them by internal IDs, very much like the original Mac tracked files, so when you rename a field, nothing breaks. But not in SQL expressions; there they are entered as text, and if you rename a field, the SQL will fail. So if you want robust SQL, you need to come up with a solution that supplies the field names dynamically. It is doable, but of course it complicates things and it will be your own house rule.
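That limitation isn't unique to FileMaker: parameter placeholders generally work for values but not for table or field names. A small illustration using Python's sqlite3 (not FileMaker's ExecuteSQL(), whose argument list differs), including the kind of "supply the names from one central place" workaround described above:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (id INTEGER, status TEXT)")
    conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                     [(1, "open"), (2, "paid")])

    # Placeholders work fine for data values...
    open_ids = conn.execute(
        "SELECT id FROM invoices WHERE status = ?", ("open",)).fetchall()

    # ...but not for identifiers, so a renamed field silently breaks the query
    # text. Workaround: keep field names in one place and splice them in.
    STATUS_FIELD = "status"  # the single spot that knows the real field name
    query = 'SELECT id FROM invoices WHERE "{}" = ?'.format(STATUS_FIELD)
    paid_ids = conn.execute(query, ("paid",)).fetchall()

    print(open_ids, paid_ids)  # [(1,)] [(2,)]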
Then there are "custom functions". You see, you can create your own scripts, you can write expressions that are part of script steps or fields, but you cannot call a script as a part of such an expression. So there are "custom functions" that fill this niche. Basically they are expressions with names and parameters that you can create. Again, they could be useful, but they are linked to a single file and in a multi-file app you have to copy them between the files yourself. There is no automation. Of course, you'll soon end up with several subtly different versions in different files.
And this continues. There is a JSON parser, but JSON is passed around as text and the parser works by evaluating a single path-like expression, which means it parses that JSON each time. It is not fast.

The layout features are relatively limited, but there is a web view; some people write whole widgets in that web view (e.g. a calendar). There is some integration: you can call a script in the web viewer and a web viewer can call a FileMaker script. But now your database contains HTML and JavaScript for what you're going to display.

There is no drag-and-drop; but there are "container" (binary) fields, you can drag content between them, and there is an auto-enter feature, and (I already forgot exactly how it was done) some people built limited drag-and-drop this way.

There are variables, of two kinds: some are script-wide, some are application-wide (global), and they can work like arrays. Related data over a network gets very slow, so people invented such things as "virtual tables": you create a single table that has no data, only unstored calculations that refer to cells in those global variables, and when you display such a table, nothing gets transferred over the network; everything is read from the variables in memory, so it works pretty fast. But now you have to devise a strategy to fill those variables yourself, and the data you see is not real anymore (it cannot be changed directly), so you need to add another way to do that.

Their network layer, by the way, was created long ago and, judging from the files in the app directory, it uses CORBA, so apparently they just work with remote tables as with local tables, without trying to account for latency and such. So complex interactions (like sorting) over a network tend to get very slow. To sort of solve that, there is a way to run a script on the server. But it is not integrated too well, and I remember building a system for myself to be able to specify this as a general preference and not hardcode it at the script level as is normally done.
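The "virtual table" idea is really just a client-side cache: pull the related rows over the network once, keep them in memory, and let the display read from that copy. A loose Python analogy (none of this is FileMaker syntax; fetch_related_rows stands in for whatever actually hits the server):

    # Loose sketch of the "virtual table" pattern: one network fetch, then
    # every display/sort operation works on the in-memory copy.

    def fetch_related_rows(customer_id):
        """Placeholder for the one expensive call that goes over the network."""
        return [{"item": "Widget", "qty": 3}, {"item": "Gadget", "qty": 1}]

    class VirtualTable:
        def __init__(self, customer_id):
            # Filled once; afterwards nothing else touches the network
            self._rows = fetch_related_rows(customer_id)

        def sorted_by(self, field):
            # Sorting happens locally, so it stays fast even on a slow link
            return sorted(self._rows, key=lambda row: row[field])

    table = VirtualTable(customer_id=42)
    print(table.sorted_by("qty"))
    # The trade-off mentioned above: this copy is read-only; edits still need
    # a separate path back to the real records.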
Their expression engine has a somewhat strange but convenient way to group certain things into an array-like structure; e.g. if you need to substitute multiple strings, you can do this with a single call to 'Substitute()' and pass it an array of substitutions. But this is not available to plug-ins; there you can only receive a one-dimensional array of parameters. On the other hand, a plug-in can declare that it processes any number of parameters. With custom functions you cannot: there the number of parameters is fixed.

Scripts too can get parameters, but here the situation is even worse: they get a single text string, and if you want to pass multiple bits of data, you have to devise your own mechanism for that. Nowadays most people pass JSON; but the JSON parser appeared only recently, does not understand FileMaker data types too well, and JSON is rather tedious to build in FileMaker, as you have to specify the JSON type for each property or write literal JSON as a string and escape every quote.

The limitation is somewhat lifted if you call a script via the Web API; there you can, e.g., set script variables for the target script. But that Web API is only available on the server, I think; there is some Data API you can call from inside, but I'm not sure what it does; by that time I had mostly switched to other things. And the Web API is limited: you subscribe to a certain number of calls. There is an older, unlimited XML-based API, but it cannot do this for scripts.
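The "pass JSON in the single script parameter" convention looks roughly like this (sketched in Python; in FileMaker the packing and unpacking would be done with its JSON functions, and all the names here are made up):

    import json

    # Caller side: squeeze several values into the one text parameter a script gets
    script_parameter = json.dumps({
        "customer_id": 42,
        "mode": "export",
        "include_archived": False,
    })

    # Script side: unpack the single string back into named values
    def run_report_script(parameter_text):
        args = json.loads(parameter_text)
        print(args["customer_id"], args["mode"], args["include_archived"])

    run_report_script(script_parameter)  # 42 export False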
And so on. There is a lot of utility: they have servers for three platforms, desktop and mobile clients, access to specific Apple APIs, a Web API, etc., but inside, all of this is like Frankenstein's monster, sewn together from totally different parts.
It's not specified how long the apprenticeships last or what qualifications you need.
But even the full-time job doesn't pay that well: $60K/year, after bonuses. And the company recently had layoffs and moved the plant an hour west, from North Adams, Mass., to Albany, New York. And you have to stand eight hours a day, and half of the people drop out of the apprenticeship for whatever reason.
The fact is that an artistically skilled person in the Northeast could find better-paying, less physically taxing, and probably more stable work.
More absurdity: I apprenticed with old-time masters in more than one trade, have worked in and around many other trades, and have taken on apprentices (officially) in trades that I am unpapered in. I have requests from trade schools to take on more apprentices, but, BUT, I can't work officially in those trades, because I am unpapered and can't get insurance without going to trade school and doing an apprenticeship first.
This is potentially my only opportunity to say thank you for your efforts in creating your textbook, so thank you. It helped me get to the position I am in today as an academic researcher in QC, although I focus on photonics.