Hacker News | lasdfas's comments

How does a good instructor, syllabus, and/or academic institution not need this? I went to an Ivy League school with hopefully above average teachers and saw a lot of plagiarism.


The problems are societal in nature. I don't believe either the parent commenter or the original author fully appreciates this.

Adversarial teaching environments are a symptom, not the problem. And the fundamental issues don't have a clear solution, though it is clear that the situation is far from optimal.


In principle, you remove the incentive for plagiarism by making the students' reward for their work be what they learn as a result, rather than some kind of certificate.


Oh cool, so we just need to fix a massive social problem that's generated by long-running structural trends within our society and economy. Then we can stop using plagiarism detection software.


... So we shouldn't try to fix the problem?


Of course we should, but we shouldn't be naive about the challenge doing so represents.


They could have set work that is difficult to lift word for word, or assessed it in any number of ways that don't allow googling.


I saw multiple instances of my classmates plagiarizing. It's rampant in colleges. Removing that tech would be a mistake in my opinion.


This looks very interesting. Very similar to Cloudflare Access. How does it handle auditing? Is there an ability to log SSH or HTTP traffic?


This is our very first public launch, and auditing/analytics are next up on our roadmap! Just to be clear, we do not currently intercept any traffic, and we don't plan to: any client application connections remain encrypted inside our TLS transport tunnel. What we will be providing is connection-level analytics for Twingate traffic flowing through connectors you deploy on your network. For example: connection start and end times, which authorized user accessed the destination resource, and how much data was transferred. How would that fit with your needs?

By the way, this is similar to Cloudflare Access, but we’re protocol agnostic (any TCP or UDP traffic can be proxied) and we don’t require you to change or create any public DNS entries. In fact, our best practice recommendation is that you exclusively use private DNS to cloak your private network.


I agree. I would like to see more details on how they determined it was only crypto mining. Finding only mining scripts in your logs doesn't mean they weren't running other code once they had root.


It seems bizarre to me that a crypto miner got in. It wouldn't make much money on regular CPUs, and the high processor usage would immediately draw attention. So it looks like a low-effort botnet, which is embarrassing to get pwned by.

(The coin mining could be a cover like you mention, but it seems unlikely since it naturally draws attention.)


It’s easier to sell Monero for cash than... some random data from some random company.


I once worked at a place where a minor piece of cloud infra got exploited. All the attacker did was run a monero miner on it.


Heh, in a way it makes a good bug bounty. Like if popping calc got you a trickle of income.


> It wouldn't make much money on regular CPUs

Not true; some proof-of-work algorithms, such as RandomX, are designed to be most efficient on CPUs.


Running the virus code in a container/VM and checking what gets modified.
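A quick sketch of that idea (function names are hypothetical, not from any particular tool): hash every file before and after running the sample, then diff the two snapshots. Real sandboxes do this more robustly, e.g. `docker diff` reports filesystem changes at the container-layer level, but the principle is the same.

```python
import hashlib
import os
import subprocess


def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state


def modified_files(root, command):
    """Run an untrusted command inside root and report created,
    deleted, or changed files by diffing before/after snapshots."""
    before = snapshot(root)
    subprocess.run(command, cwd=root, check=False)
    after = snapshot(root)
    return sorted(
        p for p in set(before) | set(after)
        if before.get(p) != after.get(p)
    )
```

Note this only catches file modifications; it misses network activity and in-memory-only payloads, so it's a starting point rather than a full analysis.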


How is this even possible for $35?

50 staging servers per seat ($1,450 value)
11 GB RAM per server
8 CPUs per server


We spent six months developing a hypervisor that lets us treat any Linux VM as a lambda; we pass the efficiency savings on to our users as a value add.


Why are you trying to tackle such a small market as staging servers for the handful of companies that actually care about this?

Take your hypervisor and sell it to companies that want to efficiently lift-and-shift legacy systems into the cloud, and make your investors super happy.


You should probably be selling that if it works well. That is something I have not seen.


<mostly sarcastic>

Bake VM with tooling > create user-data script for cloud-init to copy payload to server > create systemd oneshot service to run payload && shutdown
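Mostly sarcastic because it really is about that short. A minimal cloud-init user-data sketch of those steps (paths, unit names, and the payload are illustrative, not from the comment):

```yaml
#cloud-config
write_files:
  - path: /opt/payload.sh
    permissions: "0755"
    content: |
      #!/bin/sh
      echo "running payload"
  - path: /etc/systemd/system/payload.service
    content: |
      [Unit]
      Description=Run payload once, then power off

      [Service]
      Type=oneshot
      ExecStart=/opt/payload.sh
      ExecStartPost=/sbin/poweroff

      [Install]
      WantedBy=multi-user.target
runcmd:
  - systemctl enable --now payload.service
```

What you don't get from this, of course, is the fast snapshot/resume semantics the product is describing; every run still pays the full boot and provision cost.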


You should try it! Our baking interface looks more like Docker than a traditional VM provision (à la Packer).

Also, we keep the baked images around for a week and trigger them whenever they are requested (with, say, HTTP, or terminal access)


I may be misunderstanding the product, but it seems you can run a webserver on each "server".


More info on that would be cool. Something like AWS Firecracker?


Sort of. Firecracker is less suited for testing use cases because it still cold-boots. Instead, we use the same kernel features that let laptops resume when the lid is opened; they're less mature for production but much faster for staging and testing.


Are you using socket activation or something similar? I think there are really interesting use cases around that paradigm for packing services onto one host.

Some blog posts I remember from when I was exploring this idea a few years ago:

http://0pointer.de/blog/projects/socket-activated-containers...

https://www.ctl.io/developers/blog/post/running-drupal-in-li...
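For reference, the core of systemd socket activation is just a `.socket`/`.service` unit pair. A minimal hypothetical example (two separate unit files; names and port are illustrative):

```ini
# /etc/systemd/system/demo.socket -- systemd owns the listening
# socket from boot onward
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# /etc/systemd/system/demo.service -- started lazily on the first
# connection; the process inherits the already-listening socket
# via file-descriptor passing
[Service]
ExecStart=/usr/local/bin/demo-server
```

systemd keeps port 8080 open the whole time and only starts `demo-server` when the first connection arrives, which is what makes densely packing mostly idle services onto one host practical.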


Sort of, you can think of it as an entire socket-activated OS.

Our hypervisor does the logic though, we don't rely on the OS of the staging server itself for anything (think PXE)


Ah. VMware does something similar for hot provisioning VMs from a point in time memory snapshot.

So you guys boot the image in the background, then capture a memory dump of it so you can quickly launch VMs from that snapshot? Or you boot the VMs the traditional way the first time, and then just suspend them when not in use?


We do even more, we take snapshots as the "Layerfile" runs - this basically lets you merge the memory dump semantics you mention with the interface of Docker.

Also, we monitor which files are read by any step in the process, and let you skip setup chunks that don't need to run (e.g., installing libraries) without needing to micromanage what files you copy into the staging server.


Man, that sounds cool. I don't have any use for something like that, but I would love to see it open-sourced to play with (which I assume you guys won't be doing anytime soon).


This is great. My only worry is that people shouldn't be allowed to self-diagnose Covid-19. It will invite trolls and cause tons of people to self-isolate unnecessarily, and eventually to stop using the app once they realize the abuse. Maybe have the server validate the diagnosis, or require the person to enter a code signed by the server's private key.


My understanding is that to mark yourself as Covid+, you’ll need a code from a health care provider.

I agree that allowing self diagnosis would ruin the entire system.


A quick scan of the linked project suggests no such healthcare provider code is required. The source[1] suggests the flow is literally: "I Have COVID-19" -> "Are You Sure?" -> "Click OK", and that's that.

Anyway, I take this project to be a proof of concept. One would hope that governments will have healthcare professionals replacing the self-diagnosis step. *hope*

[1] https://github.com/CrunchyBagel/TracePrivately/blob/master/T...


> A quick scan of the linked project suggests no such healthcare provider code is required.

That's why this is a sample app and not the actual application that public health authorities will be using.

See below:

"A representative from Apple and Google's joint contact-tracing project said that their system similarly envisions that patients can't declare themselves infected without the help of a health care professional, who would likely confirm with a QR code." [1]

[1]: https://www.wired.com/story/apple-google-contact-tracing-str...


Sure, you can report any arbitrary key as positive. I can even do it right here on HN; my positive key that I just made up is "0d d8 cb 25 8a 88 aa df 6a 33 17 5f 59 ad fd bf"!

... now what? Someone has to aggregate that key (along with all the other flagged ones) somewhere, and then end users have to voluntarily choose to download the keys from that source and check for themselves if they came into contact with it. So you would have to get someone to accept your self reported positive key, and then convince a bunch of end users to trust that (apparently untrustworthy) data source.

I expect that most databases will require some sort of authentication from a healthcare provider or known laboratory before they will accept a key from an end user.


Yes, just a concept. Developer tweeted about issues with government’s plan and then spent part of a day building a concept to show how it should be done, then fleshed it out to share code publicly.


If the Rolling Proximity Key is not recorded by other users (e.g., the abuser hasn't put their device in a high-traffic location and intentionally broadcast your RPK), an attacker uploading Diagnosis Keys will have no effect.

If the abuser took the effort to place a device, broadcast the RPK for a while, and then upload the Diagnosis Keys, I'm hoping Apple or Google have a way to validate that the request is from a legit device, so an abuser would need a lot of devices to game the system.


Or possibly they could take a picture of their diagnosis sheet?


Because it can be abused. The easier you make it to upload, the easier it is for bad actors to upload invalid data and send people into quarantine unnecessarily. It only works well if you can trust the data, so I think it should err on the side of validating the data rather than openness.


In an open model, the extent to which abuse is possible would be determined entirely by the authentication requirements (or lack thereof) imposed by the entity operating the server the user selected. That being said, another commenter linked to a set of specifications (https://news.ycombinator.com/item?id=22836871) which seem to indicate that (at least on iOS) the data source is determined entirely by an app that the user chooses to run on top of the framework.


It's normally not a fixed amount; it's generally a percentage above costs (cost-plus pricing). If all costs increase, their revenue increases too: a 10% margin on $100 of cost yields $10, but on $120 of cost it yields $12. They get the same percentage of a bigger pie.


Having YouTube tutorials on something does not make you an expert on the subject. He appears not to have the skills to do basic JS debugging. It's great to ask for help; everyone needs help at some point. The issue I see is starting with "The dark side of GraphQL". If you haven't found the actual issue, how do you even know what the cause is? Just because your SQL query is fast doesn't mean there's some inherent problem in GraphQL or Apollo; that argument doesn't follow. It could be user error.


I don't mean to sound like a blind witness fanboy, but I think teaching something effectively does require some mastery of the subject.

I also don't think it's hyperbolic to say he doesn't have basic debugging skills. What qualifies as basic debugging skills? That he isn't capable of using a debugger and introspecting code? That he can't use a print statement and read the output? Debugging an E2E bottleneck is not trivial.


Not sure why you are getting downvoted. The person actually states that they don't know how to debug: "honestly, I'm not 100% sure the best way to debug from here." They are just looking at Datadog stats and not finding the root cause. They could do some basic JS debugging of the open-source library to figure out the issue. Blaming Apollo would be a stretch (and it may not even be the issue, since they haven't done any debugging), but blaming the GraphQL protocol itself goes way too far.


> Not sure why you are getting downvoted.

I don't know why anyone downvotes as they do, but the previous post is an irrelevant argument about semantics, so in my opinion it deserves to be downvoted.

Actually, now that I think of it, it's a little worse than that. The OP is being criticized for not understanding how to debug or improve the performance of their dependency while actively engaged in figuring out how to debug and improve the performance of their dependency. (People respond with questions, OP provides substantive answers, there's a back-and-forth, OP forms the idea that it's related to a deeply nested schema, and so on.)


> The OP is being criticized for not understanding how to debug or improve the performance of their dependency…

Nothing I said is a critique of Ben Awad, the Twitter OP. I assume we've all used dependencies that we don't completely understand, no?

To rephrase my point: The dark side of semi-opaque dependencies like ORMs, application frameworks, etc. is that when the magic doesn't work, one may not be able to easily determine where to start in order to address issues.

EDIT: Removed unnecessary paragraph to simplify response.


The person who wrote the “creative” title here on HN for the Twitter url they submitted is not the same person having the issue with performance and posting for help on Twitter.


