A few years ago, a buyer cheated me on eBay. They purchased my Mac and asked me to ship it to their son's address. They paid through PayPal, then reversed the payment after I shipped it. eBay didn't help or reimburse me because I hadn't shipped to the buyer's registered address. There is no accountability on eBay's end. I lost a $1.5K laptop. An expensive lesson. I haven't used eBay since.
I shipped with signature verification to the buyer's address. The buyer claimed they didn't receive the item, and eBay still sided with them and refunded their money. I will never sell anything on eBay again.
Usually languages are not the issue; it is the code that we write. As long as languages help us find and debug problems caused by crappy code, we should be good. Coding is a kind of creative work, and there is no standard for measuring creativity or the pitfalls of using the wrong patterns. Incidents and RCAs usually surface these, but by then it is often too late to fix the core problem.
Not sure that I agree. Some of the worst and most problematic AI code I've had to deal with has been Java or C#. I've found TS/JS relatively nice, and Rust in particular has been very good at producing output that "works," as long as function and testing are well defined in advance.
I just tried a portion of the URL and it took me to a Bangladesh university site - http://182.160.97.198:8080/xmlui/bitstream/handle/ . Interesting. When I go to the root of this URL, the error messages list the software the site is powered by. Generally that is not considered a secure way of running a site.
This is good. Has anyone tried building a large-scale application entirely with Claude and maintaining it for a while with paying users? I'm looking for real-life examples for inspiration.
With AI we can make thousands of commits per day, so this metric becomes even more pointless. Increased sales, new subscription counts, reduced bug counts, fewer incidents - those would be real metrics. I'm sure I'm preaching to the choir.
I have coworkers committing tens or hundreds of thousands of "lines of code" a week, because they'll push whatever the AI gives them, including dependencies and virtualenvs, without any review.
Of course, at the same time we're getting dozens of alerts a week about services deployed open to the Internet without authentication and full of outdated, vulnerable libraries (LLMs will happily add two- or three-year-old dependencies to your lockfiles).