jcarreiro's comments | Hacker News

There are many, many more tax returns filed by people earning under 200k adjusted gross income than those earning more, I assume. So if there's a uniform chance that a return is audited, we would expect most audits to be done on returns under that threshold.

Of course, it may not make sense to select returns uniformly at random for audits...


Also, if tax cheating is uniform across the population, then the statement "there are more tax cheats earning under 200k" is true but wildly misleading, since "there are more taxpayers earning under 200k" is also true.
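A toy back-of-the-envelope (the filer counts and audit rate here are made up, purely to illustrate the base-rate point):

    # Made-up numbers purely for illustration: with a uniform audit probability,
    # audit counts simply mirror filer counts.
    filers_under_200k = 140_000_000   # hypothetical returns under $200k AGI
    filers_over_200k  = 10_000_000    # hypothetical returns over $200k AGI
    audit_rate = 0.005                # the same 0.5% chance for every return

    print(filers_under_200k * audit_rate)  # 700,000 audits under $200k
    print(filers_over_200k * audit_rate)   #  50,000 audits over $200k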


Nowhere near 48% of the population earns enough wages to qualify for the EITC while still staying under $25k. That group is wildly overrepresented in audits: nearly half of all audits are aimed at the poorest workers.

------- re: the comment below (replying here due to throttling) -------

... they were audits, according to the IRS. This is from the FOIA'd audit numbers obtained from the IRS via TRAC.


They are not audits. They are automated notices sent to people trying to claim the same child tax credit on multiple returns, or hiding income (not reporting their W-2) to claim the EITC.


In other words, an understaffed agency goes for the low-hanging fruit.


Weird way to frame it. The computer does it automatically. They would do it whether they were well staffed or not.


The computer does not check all the loopholes in the tax code automatically. If it did, that would solve a lot of these problems. But we still need audits for the cases where companies or people lie, exaggerate, forget, etc.


The paper says that:

> In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution, acceptable for many AI applications.

i.e., the claim is that this method reproduces the results of conventional attention, up to roughly float16 numerical precision.
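As a very rough illustration of what that claim amounts to (my own numpy sketch, not the paper's code): here's the elementwise error of a truncated Taylor series for exp over a hypothetical range of attention scores, on the same log10 scale, next to float16 resolution. How close the error gets to float16 depends heavily on how big the scores actually are.

    import numpy as np
    from math import factorial

    def taylor_exp(x, P):
        """Degree-P Taylor polynomial of exp around 0: sum_{p=0..P} x^p / p!."""
        return sum(x**p / factorial(p) for p in range(P + 1))

    # Hypothetical attention scores; the achievable error depends strongly on this range.
    scores = np.linspace(-1.0, 1.0, 1001)

    for P in (2, 3, 4, 8):
        err = np.abs(taylor_exp(scores, P) - np.exp(scores))
        print(f"P={P}: max log10 elementwise error = {np.log10(err.max()):.2f}")

    # Float16 resolution, for comparison with the "same magnitude as Float16" claim.
    print(f"log10(float16 eps) = {np.log10(np.finfo(np.float16).eps):.2f}")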


> approximately the same magnitude

and they really do mean that: their results show ±1 on log10 plots.


I don't think this is an accurate characterization of the error magnitude? Their error plots (from appendix 3) all show `log_10(|Y - \dot{Y}|)` with a median of ~-3 (a difference of 0.001) and a max of ~-1.5 (a difference of roughly 0.03), and this is with only 3 Taylor terms.
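For reference, converting those log10 values back to linear differences, and float16's resolution, with numpy:

    import numpy as np

    print(10 ** -3.0)                # 0.001   -- the reported median elementwise error
    print(10 ** -1.5)                # ~0.032  -- roughly the reported max
    print(np.finfo(np.float16).eps)  # ~0.000977 -- float16 resolution near 1.0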


Oh, you're right, that is a misread on my part; the appendix charts don't say that. I think they're just not very useful then, though? Since they're reporting absolute error (on a log10 scale), we can't assess the relative error needed to check the 'within an order of magnitude' claim in the text.


It converges on conventional attention as P goes up


The method is more general. The GitHub repository's first example uses eight Taylor terms (P = 8).


I'm clueless about this whole thing, but from my EE education I remember that in general:

Taylor approximations converge slowly in terms of error if the function they're representing is discontinuous (the error vanishes quadratically if the function is continuous, only linearly if not), and they tend to create highly energetic swings near discontinuities (similar to the Gibbs oscillations you get with Fourier series).

Moreover, Taylor series are inherently nonlinear, while much of the mathematical toolset around AI assumes general linearity (cue linear algebra), sigmoids being the exception; and going beyond cubic approximations tends to make errors worse (as expressed in SNR).


This must be facetious? A GFCI breaker in a house's electrical panel and any breakers present in the transformer on the utility pole are protecting against _very_ different scenarios -- the breaker in the house is to stop someone from accidentally electrocuting themselves, but the breaker on the pole won't even notice an amount of current that could easily kill someone.


> The problem with that is then you need a mechanism that creates non-uniformly distributed mass.

The mechanism is gravity; and we have good observational evidence that the mass distribution of the universe is not uniform, at least at the scales we can observe (we can see galaxy clusters and voids).


Sadly, this is not possible, at least AFAIK. The basic problem is that a gravitational wave won't push against you, like a water wave would; the wave will pass through you.

See https://worldbuilding.stackexchange.com/questions/36113/woul... for a more detailed explanation.


Because they are researching inertial confinement fusion, not trying to build a working power plant. The efficiency of the lasers doesn't matter, since it doesn't affect their research.


Is energy on the order of 300MJ so cheap? You’d think that cutting it down to 150MJ would allow them to do more experiments.


300 megajoules is 83 kilowatt hours

a typical power price at trading hubs is US$40 per megawatt hour, though this varies considerably depending on many factors and is sometimes actually negative

a typical retail price is US$120 per megawatt hour

so this is about US$10 worth of electrical energy
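The arithmetic, as a quick sanity check using the rough prices above:

    energy_joules = 300e6                # 300 MJ per shot
    kwh = energy_joules / 3.6e6          # 1 kWh = 3.6 MJ -> ~83.3 kWh

    wholesale_per_kwh = 0.040            # ~US$40 per MWh at a trading hub
    retail_per_kwh = 0.120               # ~US$120 per MWh retail

    print(kwh)                           # ~83.3 kWh
    print(kwh * wholesale_per_kwh)       # ~US$3.30 wholesale
    print(kwh * retail_per_kwh)          # ~US$10 retail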


300MJ ~= 83kWh which is like, $2000 in CA


I think you're out by some orders of magnitude. With the current energy issues in the UK it'd be under £100. Other things suggest in California it's more like 20 cents per kWh so were you thinking ~$20?


You're right of course, messing up my units again.


You can download a version without the watermarks from arXiv: https://arxiv.org/abs/1804.03719.


So if HN added advertising to support the site, you would totally stop posting here, right?


Not unless I could find a place that would pay better for my posts.


This is a good observation, but gravitational lensing surveys have all but eliminated massive compact objects as a potential source of dark matter; see for example https://www.ncbi.nlm.nih.gov/pubmed/17359015.


Unrelated, but why is a paper on astrophysics on the NIH website?


The NIH (specifically PubMed) just indexes most scientific journals. This paper is actually in Phys. Rev. Letters (which the NIH site links to, if you click the DOI link below the abstract):

http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98....


Gravitational lensing surveys' coverage is pretty darn small so far. It also might not be apparent on the scales we are talking about: there would need to be a pretty lucky alignment of a galaxy and a black hole, given the space between them, the relative size of both objects, and the noticeable lens diameter. I'm not sure that intergalactic lensing would be as detectable as intragalactic lensing, which is also pretty rare.


Have you done the math to back up this position? Because I'm pretty sure the people who wrote the paper did the math for theirs.


"The period for which reasonably reliable instrumental records of near-surface temperature exist with quasi-global coverage is generally considered to begin around 1850. Earlier records exist, but with sparser coverage and less standardized instrumentation.

The temperature data for the record come from measurements from land stations and ships. On land, temperature sensors are kept in a Stevenson screen or a maximum minimum temperature system (MMTS). The sea record consists of surface ships taking sea temperature measurements from engine inlets or buckets. The land and marine records can be compared.[13] Land and sea measurement and instrument calibration is the responsibility of national meteorological services. Standardization of methods is organized through the World Meteorological Organization and its predecessor, the International Meteorological Organization.[14]"

Source: https://en.wikipedia.org/wiki/Instrumental_temperature_recor...

