
Code/PR reviews (by other devs and by people who specialize in application security), static analysis, secure coding practices training, constant vulnerability assessments and penetration testing, and a strong security team who understands the need to balance productivity and security.

Really, above all else it's just hiring or contracting (ideally hiring) good security people. Good appsec people to review code and look for vulns, people who just do education and training, people who just do penetration testing, people who just do security architecture and tooling. And of course a good security operations and incident response team to deal with the day-to-day "frontline" defense, investigation, and response. If they are full-time employees who are integrated with the company culture and are experienced enough with agile development, then odds are they're not going to set up more red tape than is necessary.

If security is mostly offloaded to devs or operations and no independent security org exists at your company, you're in trouble.



I may not be caffeinated enough to participate in these threads articulately, but this question made me realize something that hadn't quite clicked before. As I have experienced it, the existing (waterfall?) model for deploying new features and bug fixes in large organizations requires going through QA and security review, where (hopefully) the on-site security team has some sort of checklist or guideline for whatever analyses they do. But as those organizations move over to agile and continuous integration/delivery, is there still time to deploy a complete staging build and wait? We have been working with small companies for ~2 years now, and I haven't rigged up an environment to test code in for at least a year. Mostly I just stare at it until I've figured out whether I can exploit it.

This is where large companies might trip and fall if they expect their existing security analysts/teams to do this (or, as you say, if they try to offload it onto developers or ops people, even ones with an interest in security). I don't like how what I'm about to type sounds, but the majority of security people -- particularly the ones hired at the level where they run QA-type audits -- do not know how to review code. It really does make me uncomfortable to say this, but if that's the rock, the hard place is how many times I've had to tell someone they really need to learn to code if they want to "work in security."

So when you say "good security people," what you mean is people who can read and write code, and who also have enough experience to know when that code can be used in a way the original author did not intend. It's almost as if there's a misconception that reading and writing code is only for devs and devops. That mindset is odd to me in so many ways. It's like saying, "I don't need to know how to use a screwdriver or a hammer because I'm not a carpenter." Even better is someone who understands what it feels like to be under pressure to roll out changes without much time to consider malicious behavior. Someone who has your back.
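To make that concrete, here's a made-up Python sketch of the kind of thing I mean (the table and column names are invented, not from any real codebase). The first function looks fine to someone who only checks that it works; a reviewer who can actually read code sees that input can rewrite the query itself:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'a@example.com')")

    def get_user(username):
        # Vulnerable: input is spliced into the SQL string, so a
        # "username" like ' OR '1'='1 changes what the query means.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = '%s'" % username
        ).fetchall()

    def get_user_fixed(username):
        # Parameterized query: input stays data, never becomes SQL.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

    print(get_user("' OR '1'='1"))        # dumps every row in the table
    print(get_user_fixed("' OR '1'='1"))  # returns nothing, as intended

Nothing about this code is broken in the QA sense; it only breaks when someone uses it the way the author didn't intend.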

Just yesterday or the day before, I had this inexplicable crisis where I was concerned about the value I am providing. It's gone now. Thanks!


Yes, a lot of companies are definitely missing good application security personnel. Many companies don't even have a real infosec department or team at all; many have an infosec department but no dedicated appsec team or process; and many have appsec people, but to them "application security" means a team of 2-3 people who run IBM AppScan once per week and just attach the computer-generated report of findings to an email sent to a distribution list, with almost no other input - often without even reviewing the code flagged by the tool or eliminating false positives from the results, let alone performing manual, self-driven code reviews.

For others, the appsec team is constantly building security libraries, frameworks, and tooling, and scanning code for security issues daily, with both automated software and manual review.
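To give a flavor of the tooling side, here's a minimal sketch of an automated scanning gate, assuming a Python shop using Bandit (a real open-source security linter); the source path and severity threshold here are arbitrary choices for illustration, not anyone's actual setup:

    import subprocess
    import sys

    def scan(path="src/"):
        # -r: recurse into the tree; -ll: report only medium severity
        # and above, so the build doesn't fail on informational noise.
        result = subprocess.run(["bandit", "-r", path, "-ll"])
        return result.returncode  # non-zero when Bandit finds issues

    if __name__ == "__main__":
        # Wire this into CI so a finding blocks the merge/deploy.
        sys.exit(scan())

The point is that the scan runs on every change as part of the pipeline, with humans triaging the findings, instead of a weekly report nobody reads.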

Information security is still just a checkbox for a lot of companies. This is gradually changing with more and more breaches in the news every few days and executives who are finally starting to appreciate that the consequences of a breach can be very bad, but it's still pretty common. I really don't think many companies have solid appsec teams that are doing the things you and I would hope they would do, and I agree that probably a scarily high percentage of "application security analysts/engineers" do not and cannot review code effectively.

You are absolutely right that knowing how to read and write code is a crucial skill for many aspects of infosec and that a lot of people neglect it, and it's disheartening that many companies don't really have people like that on their security teams - even companies that claim to have an appsec program. But application security is only one aspect of a solid information security program, and some other aspects don't necessarily require development knowledge beyond the basics.

From what I know and have experienced to a small extent, FAANG (and others in or near that tier) invest a ton into application security and really are doing it right - or at least doing it way better than 99% of other companies out there.


My money is on a direct correlation between companies that adopted internet technologies early as crucial to the business and "doing it right," versus companies that view IT and security as "cost centers."


Absolutely. I recently moved from a company that viewed IT and information security as cost centers to one that views them as core business components, and the culture (and competence) difference is very refreshing.


Thanks for this. Do product owners and developers get a sense of the product-level threat model, e.g. "competitors reversing our code," "spammers will harvest our user identities," "a grad student is going to break this as their thesis project," or "a state-level actor will try to establish persistence for intel collection"?


From what I've personally seen, it's typically the security organization's responsibility to construct threat models and risk profiles, which then flow downstream as more general security guidance for product owners and devs. The people working on the products don't necessarily need to know or care about particular adversaries or scenarios or threats; they just need to understand application/network/data security concepts, be constantly vigilant, and not get too annoyed when a security team tells them they need to switch to a new way of doing something. A web developer should know what XSS and CSRF are and should know that an S3 bucket full of customer data with a wide-open ACL is bad, but they don't need to know what an APT or the cyber kill chain is.
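For the S3 example, here's a rough sketch of the kind of check a security team might automate using boto3 (it assumes AWS credentials are already configured, and it only inspects bucket ACLs, not bucket policies or public-access-block settings, so it's far from a complete audit):

    import boto3

    # ACL grantee URIs that mean "everyone" or "any AWS account"
    PUBLIC_GROUPS = {
        "http://acs.amazonaws.com/groups/global/AllUsers",
        "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
    }

    def find_public_buckets():
        s3 = boto3.client("s3")
        flagged = []
        for bucket in s3.list_buckets()["Buckets"]:
            acl = s3.get_bucket_acl(Bucket=bucket["Name"])
            for grant in acl["Grants"]:
                if grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS:
                    flagged.append((bucket["Name"], grant["Permission"]))
        return flagged

    for name, permission in find_public_buckets():
        print("%s grants %s to everyone" % (name, permission))

A developer doesn't need the threat model to understand why that check exists; "don't leave customer data world-readable" is enough.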

Any vulnerability, misconfiguration, or exposure could be exploited by many different kinds of entities for many reasons. A company with a good security department will have a dedicated threat intelligence and research team to consider specific risks and adversaries, which it will share with adjacent security teams; that information probably isn't relevant to most people just working on a product. But sure, some of it may reach a product owner if there's a particular kind of risk inherent to the product (like if it offers secure end-to-end encrypted communications as a selling point, or if the company is in a sector that's been heavily targeted by state actors).

Every organization and product will differ a lot, but for any company with a not-insignificant presence in an industry, state actors (APTs) are pretty much always going to be a risk. The odds of being hit by an APT are pretty low for most small companies, but the risk is always there.

In terms of what's most likely to regularly affect the average company, financially motivated cybercrime has pretty much always been, and remains, the biggest issue. Much like regular daily crime, I suppose - it's people just looking to make a buck. Commodity malware like ransomware and banking info stealers, vuln-scanning bots that pop a shell on a web server so access can be sold, and phishing emails spreading malware or impersonating executives to request wire transfers are probably still the main risks for companies of all sizes, whether they're entirely tech/software-focused or just a retail outlet or restaurant.

Sometimes you'll get more organized and sophisticated cybercrime actors targeting specific organizations in an APT-style manner, but even then it's almost always the same tried-and-true basic tactics. And believe it or not, many state-sponsored APTs often go with generic tactics, too. Most influential countries' intelligence services do seem to have at least one "A Team" conducting cyber warfare/espionage, but some also seem to have a lot of grunts just looking for low-hanging fruit, and they're usually not much harder to detect or stop than the more typical script kiddies you might otherwise encounter. The "A Teams" will often intentionally stick to the most generic techniques that work, too, to avoid tipping their hand: it keeps their non-public tools and exploits from being burned, and it keeps them from looking like an APT at all. The most sophisticated groups will only pull out the big guns if all of the generic techniques fail.

I'm not really well-versed in industrial espionage and competitors reverse engineering software, but I think it's not really something most companies need to care about. If your competitive advantage is actually at risk just because someone reverse engineered one of your binaries, there's probably something very wrong with your business model. The companies who need to care about that kind of stuff are usually already well-aware.


Important view. It implies the gap is that we make developers responsible for security without giving them accountability for the consequences of a breach - while the people who are accountable in the event of one are shielded from responsibility for mitigation, because they've pushed it down the stack to people who can't actually carry millions of dollars of accountability.

The obvious threat scenarios above are examples of things product managers typically don't think about, but developers assume someone must have understood and decided on. I'm trying to ascertain whether this gap is a broad problem, or just a dark pattern in organizations where it could be by design. :)



