> Source - I'm a videographer that also works as a cinematographer / director on smaller budget projects.
Tangential - any helpful advice you could give to budding videographers? I'd love to make those nice B-roll images you see in YouTube videos (Engineering Explained comes to mind).
Most advice is either for folks videoing people, or generally for photography. Funny thing is I'd say I'm already a very solid photographer... but my videos (admittedly shot on my phone) never look as good.
Learn to shoot static first. The biggest mistake I see people make when they move from photo to video is moving the camera without intention. Master the basic shot sizes - wide, mid, close-up - with a variety of stills lenses on a tripod (or handheld with good in-camera stabilisation).
Then learn the basic moves - ped, pan, track etc. If you're moving, think about how you're stabilising your camera - gimbal, shoulder rig etc. Most DSLRs do not have good enough stabilisation to allow movement without artifacts.
Make sure you understand your camera. For photos you have much more leeway in post. For video I'd recommend always shooting at the camera's native ISO, at 24/25/30 fps, with the shutter speed at double the frame rate (i.e. a 180-degree shutter angle).
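To make the 180-degree rule concrete, here's a minimal sketch (the helper name and the printed values are mine, not from any camera API):

```python
# 180-degree shutter rule: exposure time = (angle / 360) / frame rate.
def shutter_speed(frame_rate_fps: float, shutter_angle_deg: float = 180.0) -> float:
    """Return exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle_deg / 360.0) / frame_rate_fps

for fps in (24, 25, 30):
    t = shutter_speed(fps)
    print(f"{fps} fps at 180 degrees -> 1/{round(1 / t)} s")
# Prints 1/48, 1/50, 1/60 - the classic shutter speeds for 24/25/30 fps.
```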
Don't change settings during a shot (other than focus). Set everything to manual, get your ISO, white balance, and shutter speed or angle right, and leave it at that for the duration of the shot. If the lighting changes during the shot, choose settings that work across the full range of lighting in that shot.
Think about each shot as an image. i.e.: Don't try to catch everything, but focus on a detail, or framing, just as you would with a photo. If you're filming people, how they sit in the frame in relation to the background and other people (how large they are in frame, how they're blocked, whether they're enclosed by foreground detail etc) determines how we see them.
Just focus on all the basic photography stuff - rule of thirds, colour theory, bokeh etc. People just get overwhelmed when they switch to video, but the same rules apply. It's really just moving photographs after all.
Movement happens in time: think about a nice frame of a railway line in a landscape - then a train enters and passes through it. Movement is everywhere - water, reflections, shadows, animals. Find a strong frame in nature or the built environment that has movement, or will have movement passing through it, and shoot that.
Then start thinking about how shots connect together. Even B-roll tells a story and has a rhythm. Wide to close-up, big object to small object, matching motion between shots, directing the viewer's eye as it moves across the frame. You're always telling a story, so when you get 'coverage' try to have the story you'll tell in the edit in mind. If you're capturing a place, what's a wide or ultra-wide that gives us an emotional impression of the place? What are some details that colour it in? What's a change that's occurring that adds movement, life, and purpose?
Basically it's about intentionality and choice. What's the feeling you're trying to convey, and which shots convey it best? A good exercise is trying to shoot a happy event in a threatening or disturbing style, or vice versa. Here's an example where I shot and edited a St Patrick's Day parade in a nightmarish style - https://www.youtube.com/watch?v=lpj-fK8obPI
Think in terms of the final video or film rather than individual shots. That's the equivalent of the finished photo.
> Man, paying Google/Apple $5/mo is surely a much better solution for her. And are you really doing 3-2-1 on that?
Just some days back someone on reddit posted how their 14yo son (via a family/linked Google account) used Gemini Live to, err, enjoy himself with the camera on.
All his accounts are now permanently locked for CSAM.
So, yes, not being beholden to a megacorp absolutely has its uses.
That Reddit post was thoroughly debunked as untrue. It had some obvious plot holes and inconsistencies.
Google even came out and said that's not how account suspensions work: they don't cascade bans across other accounts just because those accounts shared a device with a banned account, as many pointed out.
I’m surprised how many people fell for that obvious piece of Reddit creative fiction. I think we’ll be hearing about it as an urban legend for years.
Reddit has become a place for posting fiction on advice subs. It started on the relationship advice subs but has spread to all of the advice subs now, like the legal advice post you saw. You have to read Reddit with a lot of skepticism.
Thanks, it's good to know this thing wasn't true. I wasn't aware of it at all.
Unfortunately I have seen other horror stories (dad takes a picture to send to the doctor, it uploads to iCloud/Google Photos, account gets banned), enough to make me wary of trusting any such large corp.
Partly tangential, but just yesterday there was a post about someone with a Czech password who got locked out of their iPhone. Now of course an iCloud backup might have actually helped them there, but the reliance on "it's Apple, it'll work" is very common (understandably!), and unfortunately not always justified.
Oh, by the way - this was the account he used for his business (I don't remember if it was a custom domain). He's pretty much lost his only way of communicating with customers. This isn't just a "whoops, let me make a new email" situation.
(You can go to the legal advice UK subreddit if you want to see the post.)
> However on android the sampling rate of the acceleration sensor is limited to 50/s. At least if you install through the official app store.
My understanding is that it's the same even on iOS (or at least on my iPhone SE 2020). More specifically, the output only goes up to 50 Hz, but the sensor sampling rate is actually 100 Hz - Nyquist: you need a sampling frequency at least double the highest frequency you want to measure, yada yada.
I get 100/s on an iPhone SE2, and 50/s on a Samsung Galaxy A16 (released in 2024 or 2025), but that is due to an API restriction. You can export from phyphox (.xlsx or .csv) and you get timestamps in the first column. Phyphox refers to the raw data rate, not the Nyquist frequency.
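If you want to verify the effective rate of your own phone, here's a minimal sketch over a phyphox .csv export (the file name is hypothetical, and the column layout and delimiter can vary by experiment and locale - check your export first):

```python
import csv

# Estimate the effective sampling rate from a phyphox CSV export,
# assuming timestamps in seconds in the first column after a header row.
with open("accelerometer.csv", newline="") as f:  # hypothetical file name
    reader = csv.reader(f)
    next(reader)  # skip the header row
    times = [float(row[0]) for row in reader if row]

duration = times[-1] - times[0]
print(f"{(len(times) - 1) / duration:.1f} samples/s over {duration:.1f} s")
```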
The sensors have analog lowpass filters that can be adjusted in order to avoid aliasing.
In general, with more bandwidth you can do more intrusive things. But if you want to tell whether two people ride in the same car, 50 Hz should be sufficient anyway.
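For anyone curious why the double-rate rule matters, here's a minimal aliasing sketch (frequencies picked arbitrarily for illustration): a 30 Hz vibration sampled at 100 Hz is recovered correctly, but sampled at 50 Hz it folds down and shows up as a 20 Hz peak.

```python
import numpy as np

def dominant_freq(signal_hz: float, sample_rate_hz: float, seconds: float = 10.0) -> float:
    """Sample a pure sine and return the strongest frequency in its spectrum."""
    t = np.arange(0, seconds, 1.0 / sample_rate_hz)
    x = np.sin(2 * np.pi * signal_hz * t)
    freqs = np.fft.rfftfreq(len(x), 1.0 / sample_rate_hz)
    return freqs[np.argmax(np.abs(np.fft.rfft(x)))]

print(dominant_freq(30, 100))  # ~30.0 Hz - below Nyquist (50 Hz), recovered correctly
print(dominant_freq(30, 50))   # ~20.0 Hz - aliased: 50 - 30 = 20 Hz
```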
By the way, it's important to note that measuring vibrating things can permanently damage the OIS voice coils in the camera. (See: Apple's warning against motorcycle mounts.) My iPhone already had a broken OIS, so I didn't mind as much.
…I'm a bit afraid to ask, but are folks from Greenpeace supposed to be rich or something? (I'm not from the US so idk if it's a cultural thing I'm missing.)
Unless you come from a privileged background, you don't exactly have the free time to go and protest against the destruction of toad habitat. And even if you do have the time, you probably don't care.
> I love that we're still learning the emergent properties of LLMs!
TBH, this is (very much my opinion, btw) the least surprising thing. LLMs (and especially their emergent properties) are still black boxes. Humans have been studying the human brain for millennia, and we are barely better at predicting how humans work (or, e.g., to what extent free will is a thing). Hell, the emergent properties of traffic were not understood or given proper attention, even when a researcher, as a driver, knows what a driver does. Right now, on the front page, is this post:
> 14. Claude Code Found a Linux Vulnerability Hidden for 23 Years (mtlynch.io)
So it's pretty cool we're learning new things about LLMs, sure, but it's barely surprising that we're still learning it.
(Sorry, mini grumpy man rant over. I just wish we knew more of the world but I know that's not realistic.)
I'm a psychiatry resident who finds LLM research fascinating because of how strongly it reminds me of our efforts to understand the human brain/mind.
I dare say that in some ways, we understand LLMs better than humans, or at least the interpretability tools are now superior. Awkward place to be, but an interesting one.
LLMs are orders of magnitude simpler than brains, and we literally designed them from scratch. Also, we have full control over their operation and we can trace every signal.
Are you surprised we understand them better than brains?
We've been studying brains a lot longer. LLMs are grown, not built. The part that is designed is the low-level architecture - but what it builds from that is incomprehensible and unplanned.
LLMs draw their origins from both n-gram language models (ca. 1990s) and neural networks and deep learning (ca. 2000). So we've only had really good ones for maybe 6-8 years or so, but the roots of the study go back 30 years at least.
Psychiatry, psychology, and neurology on the other hand, are really only roughly 150 years old. Before that, there wasn't enough information about the human body to be able to study it, let alone the resources or biochemical knowledge necessary to be able to understand it or do much of anything with it.
So, sure, we've studied it longer. But only 5 times longer. And, I mean, we've studied language, geometry, and reasoning for literally thousands of years. Markov chains are like 120 years old, so older than computer science, and you need those to make an LLM.
And if you think we went down some dead-end directions with language models in the last 30 years, boy, have I got some bad news for you about how badly we botched psychiatry, psychology, and neurology!
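For anyone who hasn't played with one: here's a minimal bigram Markov chain text model, just to illustrate the lineage (the toy corpus is made up):

```python
import random
from collections import defaultdict

# Toy bigram Markov chain - the n-gram ancestor of modern language models.
corpus = "the cat sat on the mat the dog sat on the rug".split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

word, output = "the", ["the"]
for _ in range(8):
    nexts = transitions.get(word)
    if not nexts:
        break  # dead end: no observed successor
    word = random.choice(nexts)
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the mat the cat sat"
```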
Embedding "meaning" in vector spaces goes back to 1950s structuralist linguistics and early information retrieval research; there is a nice overview in the draft for the 3rd edition of Speech and Language Processing: https://web.stanford.edu/~jurafsky/slp3/5.pdf
You are still talking about low level infrastructure. This is like studying neurons only from a cellular biology perspective and then trying to understand language acquisition in children. It is very clear from recent literature that the emergent structure and behavior of LLMs is absolutely a new research field.
"Designed" is a bit strong. We "literally" couldn't design programs to do the interesting things LLMs can do. So we gave a giant for loop a bunch of data and a bunch of parameterized math functions and just kept updating the parameters until we got something we liked.... even on the architecture (ie, what math functions) people are just trying stuff and seeing if it works.
> We "literally" couldn't design programs to do the interesting things LLMs can do.
That's a bit of an overstatement.
The entire field of ML is aimed at problems where deterministic code would work just fine, but the number of cases it would need to cover is too large to be practical (note, this has nothing to do with the impossibility of its design) AND there's a sufficient corpus of data that allows plausible enough models to be trained. So we accept the occasionally questionable precision of ML models over the huge time and money costs of engineering these kinds of systems the traditional way. LLMs are no different.
Saying ML is a field where deterministic code would work just fine conveniently leaves out the difficult part - writing the actual code... which we haven't been able to do for most of the tasks at hand.
It is impossible to design even in a theoretical sense, if the functional requirements include matters such as performance and energy consumption. If you have to write petabytes of code, you also have to store and execute it.
I'm a psychiatry resident who has been into ML since... at least 2017. I even contemplated leaving medicine for it in 2022 and studied for that, before realizing that I'd never become employable (because I could already tell the models were improving faster than I was).
You would be sorely mistaken to think I'm utterly uninformed about LLM-research, even if I would never dare to claim to be a domain expert.
I'm still impressed by the progress in interpretability. I remember being quite pessimistic that we'd achieve even what we have today (and I recall that being the consensus among ML researchers at the time). In other words, while capabilities have advanced at about the pace I expected from the GPT-2/3 days, mechanistic interpretability has advanced even faster than I'd hoped for (even though, in some ways, we are still very far from completely understanding how LLMs work).
Learning about the emergent properties of these black boxes is not surprising, but it's also not daily. I think every new insight is worth celebrating.
Oh I very much agree that it's great to see more research and findings and improvements in this field. I'm just a little puzzled by GP's tone (which suggested that it isn't completely expected to find new things about LLMs, a few years in).
Sorry lol, to me it felt like you were (pleasantly) surprised by this research. IMO I'd hardly be surprised to see breakthroughs in LLM understanding years or even decades from now. I guess I misunderstood your tone.
Indeed. For me, it's also a good reminder that AI is here to stay as a technology, and that the hype and investment bubble don't actually matter (well, except to those who care about AI as an investment vehicle, of which I'm not one). Even if all funding dried up today, even if all AI companies shut down tomorrow and no more models were trained - we've barely begun exploring how to properly use the ones we have.
We have tons of low-hanging fruit to be picked across all fields of science and engineering, in the form of different ways to apply and chain the models we have, different ways to interact with them, etc. - enough to fuel a good decade of continued progress in everything.
I mean... You could? AI comes in all kinds of forms. It's been around practically since Eliza. What is (not) here to stay are the techbros who think every problem can be solved with LLMs. I imagine that once the bubble bursts and the LLM hype is gone, AI will go back to exactly what it was before ChatGPT came along. After all, IMO it's quite true that the AIs nobody talks about are the AIs that are actually doing good or interesting things. All of those AIs have been pushed to the backseat because LLMs have taken the driver and passenger seats, but the AIs working on cures for cancer (assuming we don't already have said cure and it just isn't profitable enough to talk about/market) for example are still being advanced.
I agree on that part as well, but saying that AI will go back to what it was before ChatGPT came along is false. LLMs will still be a standalone product and will be taken for granted. People will (maybe? hopefully?) eventually learn to use them properly and not generate tons of slop for the sake of using AI. Many "AI companies" will disappear from the face of the Earth. But our reality has changed.
LLMs will not be just a standalone product. The models will continue to get embedded deep into software stacks, as they already are today. For example, if you're using a relatively modern smartphone, you have a bunch of transformer models powering local inference for things like image recognition and classification, segmentation, autocomplete, typing suggestions, search suggestions, etc. If you're using Firefox and opted into it, you have local models used to e.g. summarize the contents of a page when you long-click on a link. Etc.
LLMs are "little people on a chip", a new kind of component, capable of general problem-solving. They can be tuned and trimmed to specialize in specific classes of problems, at great reduction of size and compute requirements. The big models will be around as part of user interface, but small models are going to be increasingly showing up everywhere in computational paths, as we test out and try new use cases. There's so many low-hanging fruits to pick, we're still going to be seeing massive transformations in our computing experience, even if new model R&D stalled today.
I hate to "umm, akshually", but apparently we have been studying the brain for thousands of years. I wasn't talking about purely modern neuroscience (which, ironically for our topic of emergence, often treated the brain - until recently, and still in most places - as the sum of its parts, be they neurons or neurotransmitters).
> The earliest reference to the brain occurs in the Edwin Smith Surgical Papyrus, written in the 17th century BC.
I was actually thinking of the ancient Greeks when writing my comment, but I suppose the Egyptians have even older records than them.
None of that counts as studying the brain. It's like saying rubbing sticks together to make fire counts as studying atomic energy. Those early "researchers" were hopelessly far away from even the most tangential understanding of the workings of the brain.
But fundamentally speaking, they were trying to understand the brain, right? IMO that counts as science/study in my books. They understood the basics of intracranial pressure that long ago.
And if we say it's not science if it's not correct, well, (modern) physics isn't a science then, right? ;) As we haven't unified relativity with quantum mechanics?
Interestingly enough, for a while physics used to be studied by philosophers (and used to be put in the natural philosophy basket, together with biology and most other hard sciences).
The intersection of physics isn't psychology, it's philosophy, and the same is true (at present) with LLMs.
Much as Diogenes mocked Plato's definition of a man with a plucked chicken, LLMs revealed what "real" AI would require: continuous learning. That isn't to diminish the power of LLMs (they are useful), but that limitation is a fairly hard one to overcome if true AGI is your goal.
See Sir Roger Penrose on quantum consciousness (and there is some regret on his part here) - or Jacob Barandes for much more current thinking on this sort of intersectional exploration.
I thought it was determined (slight pun) that free will is not a thing. I'm referring to Sapolsky's book "Determined: A Science of Life Without Free Will" as an example.
Hate to say it and sound so "conspiracy-like", but I no longer can trust what the current US administration is saying. Ever since the path of a hurricane was redrawn with a sharpie, it's been... unusual.
I think the problem is that previous administrations at least had some skill in lying in ways that didn't constantly contradict one another.
Regardless of whether it's a "perfect setup" or not, the facts speak for themselves.
Most competent governments don't say things that are outright wrong. They may use doublespeak, or not comment on a topic. But this government (and unfortunately it's this specific administration/president) has acted time and again in a way that both of us know very well.
Not really. Just that trust ain't binary and the govt is made of people. I don't like this admin but this too shall pass. Cultivate your garden. Electing bad people has consequences.
None of what's happening today could have happened without everything that came before it.
The blue team carries plenty of blame for not fielding better candidates. If nobody is buying your bullshit, it's a little weak to blame the customer.
And all of the US electorate carries plenty of blame for letting our government get so massive and out of control over time. We've let this beast metastasize and grow, and now we're stuck with it.
The American people are ultimately to blame for it, they've got the government they deserve, which is actively dismantling the US empire day by day. The American people voted for Trump instead of Kamala, and that is rather damning of the state of the American people, far more so than however damning it may also be for the Democratic party.
As we all know, in this day and age, you need to REALLY sell your story, and have the media behind you. Competence is tertiary.
> Approval of Trump among Republicans has slipped to a second-term low of 84%, down from 92% last March. At the same time, an all-time high 16% of Republicans disapprove. This shift can be attributed, at least in part, to declining support among non-MAGA Republicans, as approval dropped 11 points in the last year among this group (70% in March 2025 to 59% today). Virtually all MAGA Republicans continue to approve of Trump, with 98% approving a year ago and 97% now.
Or the bootlicker olympics for those who want everyone else to ignore the constant lies because they think bigger, more powerful government is utopian.
I wouldn't be so pleased with myself over such "You will get wet in a rainstorm." style predictions.
Truths from different angles that are at odds with one another produce mistrust and thoughts of conspiracy. We have more of that now than we have ever had, ever. It doesn't take Nostradamus to point to the trend.
tl;dr : Gee, where did this mistrust in the current government come from? I'd point but I don't have that many hands.
I desperately hope so too. It would be absolutely terrible if there were an issue, and more so if people could say "We knew about it beforehand but still went ahead".
Only if you're a software-only startup. If you have hardware, the entire article is still valid.