Kodezi - Autocorrect for Programmers (kodezi.com)
62 points by thatxliner on Jan 23, 2023 | hide | past | favorite | 37 comments


Here is the relevant part that will answer everyone's first question:

"Your code is fully encrypted, never stored, and never leaves your machine."

However, it also mentions "Your data is completely secured at rest and in transit with encryption during analyzing and debugging", which leads me to believe they don't know what the prior statement means.

https://kodezi.com/security


The second statement gives me the impression they're analysing and debugging... encrypted data? I can't really make sense of it


I wonder if their VS Code extension works when you disable networking?


The code and the data might be different things.

But why encrypt it if it doesn’t leave the machine?

(Also, your code is never stored?)


I've been trying Copilot for the last week and am severely underwhelmed. Many of the suggestions don't make any sense, and it takes me more time to analyze and disregard the wrong ones than it saves. Most egregious, though, is that it suggests invalid code, i.e. code that can't be compiled. For example, it suggests a setter that doesn't exist. I don't know if I'm using it wrong, but right now it's not even worth the free trial for me.

I'm not sure if Kodezi is any better. But, as so often, the page doesn't provide me with enough information without having to sign up and try it myself. Wanna know what "Debug bugs" actually looks like? Sure, watch this short, styled-up video which actually doesn't show anything.

I found some information in the docs https://docs.kodezi.com/feature-guides/debugging but that's a manual, not a proper feature presentation.


If you expect AI to generate a complete, bug-free project from a one-line comment, yes, it will need some time before that happens. GitHub Copilot has been a game changer for me just because it allows me to code 20%-30% faster. As SWEs, our only way to scale is by improving our productivity and spending less time writing casual snippets of code.


Coding speed is absolutely not the problem that needs solving. What is needed is how to do more with less code. Long term maintenance of massive code-bases is the hard bit that eats away at productivity.


Writing unit tests faster absolutely encourages me to write more thorough tests. It’s less about speed to me and more about painlessness. It’s good at the most boring snippets, which I’m happiest to offload.

Treat it as a mildly better autocomplete, and you’ll be happy with it.


Well we have many ways to do it by abstraction — libraries, recipes, patterns, and so on.

I think that this sort of thing will handicap a dev in the same way that learning to write with a spell checker can (does!) handicap writers.


This is how I use it. I ignore anything specific to the logic of what I am writing, but it's great for filling out shit like structs, and it gets those right. I start typing `type ResponseHandler...` and it immediately knows what I want.


Same issue I faced with Tabnine. I'm coding in many languages simultaneously, and I noticed that Tabnine often produced invalid code, suggesting functions that don't exist at all.


For me, the Copilot killer feature is its ability to learn from context. If I write a function and then use that function later, Copilot learns how I use it and suggests the correct parameters, and even knows to change some of the parameters depending on the context. In this way it is jaw-dropping and a serious multiplier to productivity.


Did anyone notice that most (all?) of the logos underneath "Supporting the top languages" are in fact the logos of frontend/JS frameworks, not languages?

Maybe this is far too pedantic, but I do wonder if the community's over-indexing on frameworks to the point that they almost become languages is one of the deeper problems.


I see the list here has fewer frameworks and more programming languages: https://docs.kodezi.com/basics/language-support


It's good that the documentation is correct, but this was the second thing on the landing page after the demo video, and it didn't make a great first impression.


Obviously a lot of people have created/will create wrappers for ChatGPT or DaVinci (or non-OpenAI alternatives).

It's hard to see how the integration layer adds any long term value. Although this is nicely presented etc. and I don't mean to detract - just the entire class of tool.

I think the paradigm might eventually look like this:

- you purchase your token directly from OpenAI

- you do an OAuth flow to delegate access to many different wrapper apps (who maybe get a small referral kickback from OpenAI)

I can certainly see myself subscribing at the source if I could take this token to the many wrapper-apps with trivial/interchangeable commitment.

I don't see myself individually subscribing to wrapper-layer apps.


With that model, you need to ensure wrappers won't send requests with tons of tokens, as you will be the one who ends up paying for GPU bandwidth... meaning your spending is based on usage.
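Back-of-envelope, with an illustrative price (not OpenAI's actual rate), the pass-through cost problem looks like this:

```go
package main

import "fmt"

// Hypothetical per-1K-token price for illustration only;
// real pricing varies by model and changes over time.
const pricePer1K = 0.02 // USD

// costUSD estimates what the token holder pays for a single request
// that a wrapper app makes on their behalf.
func costUSD(promptTokens, completionTokens int) float64 {
	total := promptTokens + completionTokens
	return float64(total) / 1000.0 * pricePer1K
}

func main() {
	// A wrapper that stuffs 4,000 tokens of context into every request
	// quietly costs the user 8x what a lean 500-token request would.
	fmt.Printf("lean:   $%.4f per request\n", costUSD(400, 100))
	fmt.Printf("greedy: $%.4f per request\n", costUSD(3500, 500))
}
```

So under that delegation model, you'd want per-app quotas or spending caps, not just an all-or-nothing OAuth grant.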


I’m from the creed which does everything in Emacs and doesn’t even use a spell checker, and at the risk of being the old man who shouts at clouds: why would you want to compromise your ability to reason in this way?

Don’t do it! And don’t use a spell checker either! Get better and internalise more; you won’t get there with a crutch.


But you still use a keyboard like a scrub? If you aren't manually punching cards, you'll never truly understand anything.


No, I used a magnetised needle and a steady hand actually. Obligatory:

https://xkcd.com/378/


Do you disable type checking too? It’s good to internalize things, but eventually you can internalize and delegate.


no. there's no way I'm disabling my spell checker.

there's nothing wrong with static analysis


Founder of Kodezi here,

Answering some of the questions that were raised.

Our goal with Kodezi is to become a centralized development platform for developers to use regardless of their experience or skill level. We realized early on how many autocomplete clones there were, and our goal from day 1 has been to create an autocorrect/Grammarly-like approach to code debugging.

Original Post from 2021: https://news.ycombinator.com/item?id=29635354

In the next couple of months, we plan to release automated code debugging, merging, and automated PRs for Git, along with cloud integrations for the enterprises we are working with, a CLI for Kodezi, and an LLM for code debugging at scale.

Kodezi does not store your code; the file-name data we gather for our inference models is encrypted.

Kodezi supports and debugs bugs of all kinds: functional, syntax, logic, and more. We will publish an overview of all the types of bugs/errors Kodezi has been trained on to give users a better idea of how it solves issues.

Kodezi uses large-parameter language models trained on a collection of natural and programming languages from various open-source sources, along with open-source LLMs like BLOOM and CodeGen. Kodezi has been trained on data from open-source repositories and Stack Overflow since 2019.

Code productivity and learning have been our goals since day 1, not building an auto-completion tool that does not allow new programmers to learn.


Why doesn't it run optimize on the code it generates?


What does it mean to “run optimize”?


I don't know. But in the presentation video they ask the tool to generate some code, then they click "optimize" to clean up the code.

Seems like it would make sense to run both at the same time.


Since the results of each operation can be manually annotated (helpful, not helpful), maybe the intention of this project is to use the data as a human feedback loop? In that case, such fine-grained data is of course better.

Other than that, it might also be useful for the user when either of the two operations is not yet good enough, and error propagation might be worse than human correction between the two.


I wonder if clicking "Optimize" will use Duff's Device in some cases ;)


Does anyone know if this uses OpenAI, or which model? OpenAI's code-davinci-002 model is amazing and would be perfect for my startup, but it only allows 10-20 requests per minute since it's in beta.

Or if anyone has inside info on when that might come out of beta or a comparable alternative.

I wonder if it's actually possible to fine tune text-davinci-003 to be better at programming.


The FAQ mentions that the main competitor, GitHub Copilot, is only about completion, but it also has a few more features in common via the Copilot Labs extension.

https://marketplace.visualstudio.com/items?itemName=GitHub.c...


I understand that someone has to be paid to build AI, but one of the promises of programming over the last couple of years was that if you could afford a laptop and were smart enough, you could contribute to software development by contributing to GitHub and so forth. I don't know how much of that holds up in practice (see AWS/cloud/hosting and so forth), but I don't like this future where the only people able to program the machines are those who can pay for an IDE with a monthly or per-usage cost structure.

The only people who get rich during a gold rush are the people selling the shovels. I think people are tired of rewarding shovel sellers.


I don't get your point. It's up to you to spend money to boost your productivity; nobody will force developers to use AI-powered tools.


Let me explain the point:

* There are hundreds of millions of people living on under $2/day. Coding was a pathway out of poverty.

* There are billions of people with family incomes under $30/day. For those families, kids learning to code was a pathway to wealth.

If tools to be productive cost money, it will go the same way as other domains of engineering, where you can't be productive without a significant investment.

EDA tools to design electronics run $5k-$100k, and making an IC might have NREs of $100k. That places it squarely out of reach as a career option for many.


It’s the shovels vs spoons parable all over again.


It's a different parable. Everyone can afford a shovel.

At one point in time, computers were limited to very wealthy folks, like Bill Gates. He learned to program by going to an exclusive private high school which had a computer, and later, in Harvard. He had a unique competitive advantage which allowed him to start Microsoft.

A better parable is GNU/Linux/BSD/etc. versus Unix/VAX/etc. Unix cost thousands of dollars. The free alternatives were free. This allowed a whole generation of kids to learn, who didn't have Gates' resources.

I wasn't in a private high school, and without Debian, I wouldn't have the life I have.


Perhaps when the standard of software development shifts towards using AI for everything?


Don't forget the, er, “recreation workers”.



