For what it's worth, I'm not sure sensitivity/specificity is the limiting factor here (for doing broad population surveys with this test); it's the low base rate of Alzheimer's interacting with the specificity. At 90% sensitivity/specificity and a 10% base rate, I think? (I suck at math) a positive test has like a coin flip chance of being right?
We're in the realm of probability, which is mysterious sometimes even to people who have a math background. I didn't fully grok it until I studied intensely for my first actuarial exams. (:
This kind of thing is almost always a weighted coin toss: with sensitivity or specificity alone, you only have two possible outcomes (present/relevant, absent/irrelevant), and the thing that changes is the probability distribution of those outcomes.
Combining the two gets you the full four: present and relevant; present and irrelevant; absent and relevant; and absent and irrelevant. Under a uniform distribution they're all 25% likely, but the idea is to find a probability distribution that makes the "present and relevant" (true positive) and "absent and irrelevant" (true negative) outcomes more likely.
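To make that concrete, here's a quick sketch of how the four joint outcomes fall out of prevalence, sensitivity, and specificity (the numbers are the thread's illustrative 10%/90%/90%, not from any real test):

```python
# Illustrative numbers only: 10% prevalence, 90% sensitivity, 90% specificity.
prevalence = 0.10   # P(disease present)
sensitivity = 0.90  # P(test positive | disease present)
specificity = 0.90  # P(test negative | disease absent)

# The four joint outcomes; these always sum to 1.
joint = {
    "present & flagged (true positive)":   prevalence * sensitivity,
    "present & missed (false negative)":   prevalence * (1 - sensitivity),
    "absent  & flagged (false positive)":  (1 - prevalence) * (1 - specificity),
    "absent  & cleared (true negative)":   (1 - prevalence) * specificity,
}
for outcome, p in joint.items():
    print(f"{outcome}: {p:.2f}")
```

Note the true-positive and false-positive cells come out equal (0.09 each) at these numbers, which is where the coin-flip intuition comes from.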
Since I myself don't work in a clinical setting, I simply hadn't considered that the clinician would want to exercise discretion in pre-screening patients (raising the pre-test probability) before ordering the test in order to get there. Oops.
I think you may be close but likely for the wrong reasons. I had to sit down with this for a moment to feel comfortable with it.
If we take your 10% base rate to be disease prevalence, that gives us 100 sick and 900 well.
Of the 100 sick, something with 90% sensitivity should get me 90 true positive tests and 10 false negative tests.
Of the 900 well, for a test with 90% specificity I should expect, what, 90 false positives and 810 true negatives? if I did my arithmetic right?