This situation (lack of schematics) is powering the open source test instrument surge.
Test instruments are an interesting space in the technology sector, mostly because they are never "obsolete." If they are operating to their original specifications and tolerances, they do the job just as well in 2020 as they did in 1960. As a result, test equipment holds its value remarkably well.
I recently paid about $700 to have my HP spectrum analyzer from the early '90s[1] calibrated. I bought the instrument for $1,100 at auction; its working range is 9 kHz to 12 GHz. So for $1,800 I have an instrument that would cost me nearly $18,000, roughly 10x that price, if I were to buy the current-generation model. Even a "cheap" Rigol that only goes to 6.5 GHz would cost $10,000.
So what to do when your sales are "one and done" ? It is a hard problem from a business model perspective.
[1] EDIT: Turns out it is only 30+ years old, not 50+. It's an HP8596E "portable" (weighs quite a bit :-).
Here's where I have to chime in with a reality that most people have yet to experience and don't know about: the unintended negative consequences of RoHS.
I have over a dozen instruments and several tools that are, in some cases, over 30 years old. I think all of my scopes, signal generators, DVM's, logic analyzers, DSO's, probes, lab power supplies, etc. are pre-RoHS and, in most cases, significantly so.
The transition to RoHS, while well-intentioned in principle, is likely to prove to have been a massive mistake.
Lead-free solder has one major problem: Tin whiskers.
One way to think about this is that all RoHS electronics have a stochastic failure rate. I have devoted more time than I care to admit to studying tin whiskers in the context of my work in aerospace. I have, in that process, consulted with NASA scientists who were at the forefront of long-term research on the subject. The most salient take-away was that we had no way to predict or truly mitigate tin whiskers. The only mitigation in aerospace is to, quite literally, send chips to services that remove all lead-free solder from the pins (or remove balls from BGAs) and replace it with conventional lead solder.
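To make the "stochastic failure rate" point concrete, here's a toy Monte Carlo sketch. The Weibull lifetime parameters are made up for illustration, NOT measured whisker data -- which is exactly the problem, since nobody has reliable predictive numbers:

```python
import math
import random

def simulate_whisker_failures(n_units=100_000, years=20,
                              shape=2.0, scale_years=40.0, seed=1):
    """Toy Monte Carlo: draw a Weibull time-to-short for each unit and
    count how many fail within the observation window.
    shape/scale are illustrative assumptions, not empirical values."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_units):
        # Inverse-CDF sampling of a Weibull(shape, scale) lifetime
        u = rng.random()
        t = scale_years * (-math.log(1.0 - u)) ** (1.0 / shape)
        if t <= years:
            failures += 1
    return failures / n_units

rate = simulate_whisker_failures()  # fraction of the fleet dead by year 20
```

The point of the sketch is the shape of the problem, not the numbers: you can only describe the fleet statistically, never say which individual unit will short, or when.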
If this plays out as it could, landfills are going to be piled sky-high with broken electronics. From phones to laptops, TV's, ovens, clocks...anything really. And cars, yes, cars!
Lead-free RoHS solder is a ticking time bomb and, in my opinion, one of the most misplaced decisions made in the name of protecting the environment.
I have HP-41 calculators I bought in the early '80s that still work as new today. That's to say they are nearly 40 years old. There is no way a RoHS-compliant calculator will survive 40 years. That is nearly impossible. And so, millions of them will end up in landfills. Well done, European Union, you really helped the planet with that one!
For those not familiar with RoHS issues (a deep and wide topic), here's a starting point:
Ideally, we would be building devices that don't suffer from tin whiskers in the long term. Pragmatically, the vast majority of e-waste is being generated from simpler failures than tin whiskers (degraded batteries, simple mechanical breakage, changing standards and performance requirements, etc). In that case, it is better that the supply chains for those products are lead-free, reducing harm in production and in disposal.
We need to make our consumer products longer lived and circular, and that should be a prerequisite for RoHS exemptions (as it is currently for certain classes of long lived non-consumer devices).
Funny, the simplest thing to do there, and relatively cheap, is to lacquer the whole device after it's soldered. This is done on military grade hardware - or even tougher conformal coating.
Sadly, no, this does not avoid or stop tin whiskers at all. The general technique you are referring to is called "conformal coating". It can help a bit, but it does not stop growth. Also, at that scale the whiskers are super strong and easily puncture right through the coating.
This is a deceptively complex topic (as I learned over the years). For example, a whisker that does not penetrate the coating will either buckle and curl-up under the film or grow laterally under it. With fine pitch components having pin-to-pin gaps in the range of 0.25 mm, a short between pins can happen in just a few weeks.
Conformal coating is useful for other reasons, but, ultimately, it isn’t a solution for tin whiskers.
Back in the day I even looked into using specially formulated epoxy encasement. What starts happening there is that the coefficient-of-thermal-expansion differential between the epoxy, board and components can cause all kinds of failures...and it still does not stop tin whisker growth or buckling.
This is fascinating, I'd never heard of tin whiskers before today. From a short look at the literature, nickel-palladium-gold alloys seem to be an alternative lead-free finish that doesn't suffer from whiskers. Have you worked with that at all? Does it have other shortcomings?
Well, being that most of my work during the last few years has been in aerospace we just use conventional lead-containing solder. When you can't afford for things to fall out of the sky you go with what you know works. We've known about tin whiskers since, I think, the 1940's. In fact, if I remember the history correctly, we added lead to solder precisely for this reason way back in the prehistoric era.
I might be diving into some industrial work soon that does not benefit from RoHS exemptions in this domain. This is one of the important cards in my kanban board...likely a rabbit hole that will not be pleasant to navigate.
I took a quick look myself. Here's a note from Maxim with interesting data:
I've used pretty much all the conformal coatings they list in this article. What they don't go into is buckling under the coating, which can short adjacent pins in fine pitch chips.
I truly hate this problem. It has wasted more of my time over the years than anything else. RoHS is, in my opinion, a very misguided directive. It's a perfect example of when politics gets ahead of reason and fear-mongering wins over science. If we are not careful we are on track to do something similar with climate change.
> Funny, the simplest thing to do there, and relatively cheap, is to lacquer the whole device after it's soldered. This is done on military grade hardware - or even tougher conformal coating.
That helps avoid tin whiskers, but those aren't the only problem with lead free solder. It doesn't flow as easily as tin-lead, so dry joints are more common. It's relatively brittle, so it's more susceptible to vibration and thermal fatigue. Melting point is higher so components are more likely to be damaged during soldering.
Military hardware is exempt from RoHS for good reason.
> the vast majority of e-waste is being generated from simpler failures than tin whiskers
I can't assert or challenge this conclusion because we simply do not have the data. This is yet another reality of this issue: Tin whisker failures in the consumer domain are nearly 100% unreported.
Consumers are certainly not in a position to conduct the forensic work required in order to determine failure modes. My guess is that the large majority of devices are thrown out upon failure, which means we know nothing about why they failed.
Companies, on the other hand, frankly, have little incentive to conduct that research other than to use best practices during design. If a product survives a couple of years they are good. This isn't due to some dark profit motive, it's just a reality of business. You simply can't ask every single customer to send you their failed hardware for expensive --very expensive-- forensic analysis no matter their age or condition. This would mean having to manage a process involving forensics of millions of devices per month or year, depending on scale.
When I was investigating tin whisker mitigation options I explicitly sought out data from the consumer electronics domain. I quickly discovered there was none, at least nothing useful. This is why I don't think a statement such as the one you made is either right or wrong; it simply has virtually no supporting data in either direction.
I should rephrase. From my experience in consumer electronics, the vast majority of instances of consumers stopping using a device are not from unknown failures. They are from visible failures (including reduced battery life), software/content incompatibility or obsolescence, or the consumer just generally losing interest in the device. I should caveat though that this is in PC and mobile categories. It’s plausible that things like appliances don’t fall into those failure or obsolescence modes and could be facing unknown failures that are solder related.
Yeah, it's hard to quantify in any meaningful way. That's why I said I am not in a position to confirm or challenge what you said, and all I know is that, for the most part, we can't really make scientifically supportable statements about the consumer world (by this I mean with actual published data from an authoritative source). We are blind.
In 1992 I got a Swatch "Earth Day" edition self winder. Made from the best modern earth friendly materials. By 95 it had cracked and yellowed to the point it was illegible.
I started wearing my grandfather's '68 Caravelle self winder. I still am.
I had heard about tin whiskers, but always in relation to NASA. I had assumed it was only a problem in space, somehow related to lack of gravity and/or atmosphere.
Apparently it's just that NASA cares about reliability more than most, and hence takes the time to do more detailed failure analyses.
NASA has done tons of research on this over decades. I have read most of the papers they published and worked with a couple of their researchers. While the context was aerospace, the effect happens on earth just as well. Pretty much all of their tests are done on the ground.
The reason lead was added to solder in the ~30s-40s was tin whiskers. Think about how huge components and pitches were back then and they still noticed tin whiskers and changed the solder to avoid them.
I agree, and my belief is that RoHS is a convenient excuse for companies to continue creating e-waste, as long as it's "environmentally friendly" e-waste.
I suspect a similar story with "biodegradable" materials too; they just haven't become common enough to work their way into products which people expect to last longer.
If I remember history, it was France who pushed for RoHS. Manufacturers wanted nothing to do with it precisely due to reliability issues. When the EU made it a requirement the entire world was forced to follow.
I remember having to retool most of my products at the time. It was nothing less than a nightmare.
Also the fact that some critical categories are exempt (military, aerospace, automotive, active implantable medical devices etc.) speaks volumes about that.
Just thought of this while reading your post: Could the tin whisker problem be solved by simply reflowing circuit boards every 5-10 years, before they actually short out? I mean, it's labour intensive but for specialized or hobby equipment, might be worth doing.
True. I wonder if you could do something with IR and a mask to selectively heat the bits of the board that’d take it?
Then again, later in the slides they talk about simply giving things a scrub with a wire brush to clean off the whiskers. Apparently they don’t regrow after the first few years.
> Could the tin whisker problem be solved by simply reflowing circuit boards every 5-10 years
Apart from logistics, many components have a hard time surviving the heat of soldering during original manufacture and may fail on subsequent reflowing.
Lead-free solder typically has a melting point around 35 °C higher than tin-lead, which exacerbates the problem.
Yes, lead-free solder brings a whole lot of trouble to the table. Perhaps someday someone will come up with a formula that doesn't -- but until then, I continue to use lead-based solder.
> So what to do when your sales are "one and done" ? It is a hard problem from a business model perspective.
The vendors' response since the ~'90s has pretty much been to do trade-ins and crush the old gear. That is one of the reasons (the other being outsourcing) why test instruments made in the last ~20-25 years are much rarer in the EU/US compared to older gear.
Sort of a 'make on demand' model. It might work. These days it is hard to find parts that don't get end-of-lifed after 5 years, so you're re-doing schematics and thus your instrument's PCB. You could perhaps make a modular structure such that the instrument was constructed from modules but I'm not sure how well that would work.
The open source trend is to put the instrument in its box and connect via USB to a computer that provides the UX. That works but has the issue of computer operating system compatibility. I've got a USB scope in my junk drawer that has a UI that only works on Windows 98. Not too useful these days, and the company is out of business.
The instrument control protocol standards help here.
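Indeed -- most modern gear speaks SCPI (layered on IEEE 488.2), so command sets learned on one vendor's box mostly transfer to another's, regardless of whether the transport is GPIB, USB, or LAN. A minimal sketch of what those strings look like (the `FREQ:CENT` node and the `*IDN?` reply below are illustrative examples, not any specific instrument's documented command set):

```python
def scpi_set(node: str, value, unit: str = "") -> str:
    """Format a SCPI 'set' command string, e.g. ':FREQ:CENT 1.0GHZ'."""
    return f":{node} {value}{unit}"

def parse_idn(response: str) -> dict:
    """Split a standard IEEE 488.2 *IDN? reply:
    manufacturer,model,serial,firmware."""
    maker, model, serial, fw = (f.strip() for f in response.split(",", 3))
    return {"maker": maker, "model": model, "serial": serial, "firmware": fw}

# Hypothetical exchange with a spectrum analyzer:
cmd = scpi_set("FREQ:CENT", 1.0, "GHZ")
idn = parse_idn("Hewlett-Packard,8596E,3710A00000,B.05.00")
```

In practice you'd send these over a library like PyVISA rather than hand-rolling the transport, but the command grammar is the portable part: the same `*IDN?` works on nearly everything built in the last few decades.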
To some extent standardized assemblies have been used for a very long time. HP System 2 (the cute 1/3 rack units) is from the '60s, System 3 from the late '70s (I think), though they actually walked back on that one and later replaced the standardized cast aluminium parts with unit-specific sheet-metal bending (3478A and others). Later they re-used the display/input modules; both HP and Keithley did this for a lot of their later DMMs and SMUs.
-Perhaps a stupid question, but wouldn't that UI live and thrive inside a Win98 VM?
(I am cursed by the instrumentation gods - I have to keep all sorts of late-paleolithic configuration software running to support the ditto hardware we still rely on for specific tasks in our service department.
I've found that just about anything can be coaxed into working - at least as long as it doesn't involve some copy protection scheme relying on slightly iffy RS232 timing and a dongle.
You would be correct. Of course it is just a USB 2.1 box and enumerates as a HID device and a bulk data endpoint so presumably spending some time with a USB protocol analyzer and running it through its paces would illuminate some aspects of its operation.
This is why it's still in my junk box and not in an e-waste skip somewhere :-)
> This situation (lack of schematics) is powering the open source test instrument surge.
Actually, I would argue that the expense of tech support is what is powering the open source test instrument surge.
Unless I can sell you a $20,000 instrument and cut you off after 3 years, my support costs are probably unsustainable.
This means that anything worth less than $20,000 is unprofitable (case in point: the Tektronix USB Real Time Spectrum Analyzers). So, the only way around that is to make it worth even LESS and make it open source.
Now, your tech support is the internet at large. Good luck with that.
Sure when you can't find connectors, power cables, comparable software or accessories. You got your instrument compared to a traceable reference but good luck getting an adjustment or repair performed.
This only applies to dumb instruments (power supplies and DC electronic loads), but remember the technology you are measuring is eventually going to have performance that exceeds your ability to test.
Seriously, I strongly prefer test equipment from the pre-software era. Test equipment shouldn't have to boot, and it shouldn't be dependent for its operation on the shitty non-realtime junk we call operating systems nowadays.
As do I. That said, there is something to be said for the economics of using software to compensate for manufacturing tolerances in hardware. 3D printers are a good example of taking a crappy platform and making it perform reasonably precise things using clever software.
Of course sometimes that software is in an FPGA design :-). I expect that once the Xilinx RFSOC starts ending up in test gear you'll see some more cost effective GHz gear but very unhackable due to the FPGA nature of things.
Of course no amount of software is going to make the front end conditioning circuits quieter (well I suppose simulating them and designing from that might but not on the instrument itself). That will always need a certain expertise to design and implement.
I dunno, I've been doing a lot of RF analysis lately as my company has been re-upping our capabilities in that regard. We went from an 80's spectrum analyzer (the Rohde and Schwarz rep laughed at us when we called to ask for calibration data... "that machine was made before you were born") hooked up to an ink plotter to a mid-level Rigol analyzer. It really made me appreciate being able to recall the settings for the scan I ran last week, dump the data for the new scan, baselines and a few variations I wanted to try, and compare them all in Excel back at my desk (alongside the results from the external compliance lab).
Fair point. Sending data back and forth is certainly useful. My main gripe is with "instruments" that are themselves just applications on a PC with a little bit of external A/D interface hardware. I prefer standalone boxes, perhaps with a data interface but not dependent on it.
The newest Keysight spectrum analyzers all run Windows 10. Even an older Tektronix scope I use daily runs XP. The XP install has crapped out a few times but it has a recovery partition built in so it's trivial to restore it.
Good guess, but my date was wrong. It wasn't 1974 it was in the early 90's apparently (8596E btw). I suspect there is a decoder ring for the serial number which would tell me exactly. I've updated the date in my original comment.
Can’t beat 20th century HP test equipment. The pinnacle of quality.
I have an 8566B at home. These things are built like tanks, and were made to operate in high temperature environments. We had about 10 of them at work, and the only things that would fail (slowly) are the CRT supplies. I’ll get the LCD retrofit when that happens.
Broadly speaking test instruments often do similar things that some consumer devices do as well, but 1) with greater precision, accuracy and stability 2) over a much wider range of parameters.
For example, a spectrum analyzer is similar to a radio receiver, but the SA usually covers a much larger range (typically 9 kHz - 1.8, 5, or 12 GHz), the SA can handle a huge range of input powers (typically something like -1xx to +20 dBm), and it can accurately measure power levels at all frequencies, has a variety of filters and analysis functions, and so on.
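Since dBm comes up constantly here: it's just decibels referenced to 1 mW, which makes that enormous input range easy to put in perspective. A quick sketch (reading the "-1xx" above as -120 dBm purely for illustration):

```python
import math

def dbm_to_watts(dbm: float) -> float:
    """dBm is decibels referenced to 1 mW: P = 1 mW * 10^(dBm/10)."""
    return 1e-3 * 10 ** (dbm / 10)

def watts_to_dbm(watts: float) -> float:
    """Inverse conversion: dBm = 10 * log10(P / 1 mW)."""
    return 10 * math.log10(watts / 1e-3)

# Illustrative span: -120 dBm (1 femtowatt) up to +20 dBm (100 mW)
ratio = dbm_to_watts(20) / dbm_to_watts(-120)  # 14 orders of magnitude
```

That 140 dB span is 14 orders of magnitude in power, which is a big part of why the attenuators, mixers, and shielding in a real SA cost what they do.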
The RF section of some of these is actually mostly made of standard COTS parts - input assembly (these use very high grade connectors because of the repeatability requirements), programmable precision attenuator, mixer module, YIG LO etc. -- except all of these are high precision, RF devices, which simply cost a lot of money.
The 80/20 rule applies as well. Well, maybe more like 60/40. An SDR can get you most of the stuff a proper SA can do for a fraction of the price (and can do some things a used SA from the '80s or '90s cannot do). A very good audio interface can serve as an audio analyzer, given the right software. A common pattern is that you can get the capabilities to do something, but not the accuracy of a real test instrument.
One aspect that drives cost is that high precision circuits often need a few parts that have tighter tolerances than the typical 5% or 1% that are usually good enough for 99% of use cases. And these parts are a lot more expensive.
It's worth mentioning that often parts aren't manufactured to be high precision variants. That would be extremely expensive. What happens instead is the parts get measured and "binned" by how close to spec they are. In a large enough sample, statistically a few parts (even with an inaccurate process) will be perfect. The best ones are sold as A grade, etc.
This is pretty common for things like resistors, but even CPUs and RAM are binned by their overclock performance, as there's some randomness in production.
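A toy sketch of the binning idea: measure each part and drop it into the tightest tolerance grade it fits. The 3% process sigma is an assumption for illustration, not any real fab's number:

```python
import random

def bin_resistors(nominal=1000.0, n=10_000, sigma_frac=0.03, seed=7):
    """Simulate tolerance binning: parts come off an imprecise process
    (Gaussian spread), then get sorted by measured error from nominal.
    sigma_frac=3% is an illustrative assumption."""
    rng = random.Random(seed)
    bins = {"0.1%": 0, "1%": 0, "5%": 0, "reject": 0}
    for _ in range(n):
        value = rng.gauss(nominal, sigma_frac * nominal)
        err = abs(value - nominal) / nominal
        if err <= 0.001:
            bins["0.1%"] += 1
        elif err <= 0.01:
            bins["1%"] += 1
        elif err <= 0.05:
            bins["5%"] += 1
        else:
            bins["reject"] += 1
    return bins

bins = bin_resistors()
```

The economics fall out directly: the tight grades are a small fraction of the run, so each 0.1% part has to carry a much bigger share of the production cost, which is why precision parts are disproportionately expensive.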
When I were a wee lad, we used to machine passive components to value (sometimes the "whole thing", sometimes a companion trim piece). There's no economy of scale to be had there, and binning won't help all that much because it's the installed value that matters, not the loose part value.
Company A treats its customers with respect, lets them maximise the return on their investment by giving them all the details and helping them keep the instrument going as long as possible.
Company B releases a product that works pretty much as well as Company A's but is cheaper (because it isn't investing in the aftersales support). Since support is an expensive business, by cutting it Company B makes more profit.
In the long run, Company B wins. Quite a few companies are smart enough to see how this works, and so morph from being an A type company to being a B. Evolution in action, the fitter (most profitable) company survives (shame about the prey..ah, "clients").
Indeed. I think this is a textbook example of market failure: Something where the most profitable course of action very clearly does not align with the course of action overall best for society (which would be to maximize the lifetime of those devices if possible - but in any way not deliberately shorten it).
Hence this is a prime example where regulation would be needed.
"The market works perfectly" isn't something you can say without defining what the market is supposed to do. If you consider "the market" to only have to care about itself and not any external effects it causes then sure, but that's like saying car owners don't have to worry about air pollution because it's "outside the car".
The market is intrinsically linked to its effects on the environment it's in, and I would certainly consider its failure to account for that a failure. To put it in CS terms, if you build a "world peace" general AI with a reward function that doesn't include "not killing humans" and it starts killing humans, the argument of "it's working perfectly" isn't really a useful one if you want to stay alive. Adding a negative term to the reward function is equivalent to market regulation, yet while we have entire fields of study dedicated to the first, the latter is frowned upon.
It could also be argued that there is an asymmetry of information. Consider trying to figure out how reliable a product is. By eliminating third-party servicing, it is very easy to conceal a product's failure rate. There is a strong conflict of interest in obtaining that information from vendors. Anecdotal information from consumers is going to be highly biased (one way or the other). While obtaining that information from people who repair equipment has its problems, they will have a better perspective.
As the market data shows, most customers value a very low price over anything else. So the market delivers.
On the other hand, if the noble ethical goals that you list were more important for the customer, he would prefer devices that satisfy these goals (as much as possible) over low prices in his buying decision(s).
Have you considered for a moment that the customer knows very little about what she is buying, especially nothing at all about the long term value of the product? When the only thing you know about various alternatives is the price, how are you supposed to choose what's best for you?
Information scarcity is a defining feature of modern markets, let's not forget that.
That's what makes this conversation interesting. People who are in the market for a $20K piece of test equipment are likely to be experts who are well equipped to evaluate the product's specifications.
Probably they are just as susceptible to hyperbolic discounting as others. So they undervalue the problem of maintenance in the future compared to saving some money now, even if they are thinking about it.
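A toy illustration of how hyperbolic discounting (perceived value ≈ amount / (1 + k·delay)) can flip the decision. The prices and the discount parameter k are made up for the example:

```python
def hyperbolic_value(amount: float, delay_years: float, k: float = 0.5) -> float:
    """Perceived present value under hyperbolic discounting.
    k is an illustrative discount parameter, not an empirical fit."""
    return amount / (1 + k * delay_years)

# Hypothetical choice: a cheaper instrument that will need an $8,000
# repair in 5 years vs. a sturdier one that costs more up front.
cheap_now    = -10_000
repair_later = hyperbolic_value(-8_000, delay_years=5)  # feels much smaller
sturdy_now   = -14_000

perceived_cheap  = cheap_now + repair_later  # feels like the better deal
perceived_sturdy = sturdy_now
actual_cheap     = cheap_now - 8_000         # actually the worse deal
```

The distant repair bill is discounted down to a fraction of its face value, so the cheap option "wins" in the buyer's head even though its total cost is clearly higher.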
How does the market data show that? This is not an A/B test, there is usually not an equivalent alternative to compare against.
See e.g. the right-to-repair debate. The function is important enough for farmers to demand regulation and drive the prices of old, non-locked-down tractors through the roof. And yet, it's still more profitable for John Deere to lock down their tractors.
When deciding between the packages "modern, feature-rich tractors that are locked down" or "old, reparable but overpriced tractors", the former probably win. That still doesn't imply that farmers "want" locked down tractors or that reparability is of no importance to them.
Also yes, we do need ethics in business. Last thing I've read, even some MBAs have heard of that by now.
> See e.g. the right-to-repair debate. The function is important enough for farmers to demand regulation and drive the prices of old, non-locked-down tractors through the roof.
This is in fact a textbook example of how markets work. By the insane price increase of old, non-locked-down tractors, every sane market participant can see how much higher a price farmers are willing to pay for non-locked-down tractors.
I believe every manufacturer of tractors now does a careful market analysis and is probably (secretly) developing a more open, but more expensively priced, tractor.
My bet: In a few years there will be new tractors on the market that are more repairable, but higher priced - exactly what an economist would predict. It's just that we are currently in a transformation period.
Markets can be in pathological states, what people usually call market failures. Basically, when the transition cost from one equilibrium to a "better" one is too high (barriers to entry, e.g. due to regulatory capture; not enough capital available; cooperation problems among the many small agents, for example because the powerful agent can punish those who try to organize, which is what happens when some workers try to unionize), the market stays put. Waste and rents are generated. (The union organizers get fired; the cost of labor is basically frozen.)
In case of tractors it's not at all obvious how fast the market will provide the tractors with lower TCO, that farmers wish for.
The notion that the market is always optimal is one of the central articles of faith in free-market fundamentalism. It's essentially tautological, a commerce-filtered spin on the just world fallacy.
What you ignore is that there are a bunch of information and experience asymmetries here, some of them intentionally created by vendors. There are also big problems with short-term biases and principal-agent problems. These lead to demonstrably sub-optimal global outcomes.
As an example, look at the recent wave of Softbank-funded idiocy. There's no way WeWork was an optimal deployment of capital. You could have lit $10 billion on fire and done better. But it worked out very well for Adam Neumann, who not only got lots of short-term adulation and wealth, but somehow ended up becoming a billionaire because they paid him to go away. And this is from the supposedly hyper-rational experts in the world of finance. If they can't get it right, there's little reason to think that random purchasers and Tektronix's middle managers will.
> As the market data shows, most customers value a very low price over anything else. So the market delivers.
Right, and it delivers that by pushing costs onto the greater proportion of people who are not involved. That is neither ethical nor sustainable -- it's more like a kind of theft.
What you are citing is a clear example of how the "free market" can fail to operate in the best interests of society.
In my view, it is successful more often than not. Laws against stealing, murder, etc., are a form of regulating against human nature, and they appear to be pretty successful.
Regulation is mandatory in order to have a society. The question isn't whether or not it should exist, but whether or not the particular regulation is good.
Regulation is just a furthering of the cultural decay that is the root cause of these issues in the first place. Be careful suggesting violence as an answer to society becoming crummy as it always leads to worse crumminess.
This isn't "evolution in action" in any inevitable sense. This only happens if the consumer values up-front price over TCO. Admittedly, with emphasis on quarterly earnings, this is the path many companies have chosen, but it's not universal.
Time value of money. A lot of those financing plans are interest-free and surprisingly straightforward, and add up to the actual retail value of the device over the time of the plan, not one penny more. These proliferated as years-long contracts became unpopular.
In the case of Verizon, signing up with such a plan also allows you to return and upgrade your phone after it's halfway paid off.
With phones getting more and more ridiculously expensive, there's little reason not to do this unless you'd rather be out $1000+ immediately when you have the option to be out $40/month for a couple years with no downsides instead.
My guess is because that’s what they’re sold on and offered. Also perverse incentives come into play. With companies it’s quarterly results, with consumers it’s consumerism culture and the insane financing dependency people are hooked on.
Valuing up-front over TCO only makes sense for very large purchases and for business investments.
Exactly, the same happens in other markets like consumer electronics or cars: people complain all the time that "things aren't built to last anymore," but when faced with the decision they will just buy the shiniest one for the price. So who is to blame? Us, the consumers.
I remain amazed at how long some of these old products last. My last 2 phones (Pixels) have died right around the 2 year mark. Meanwhile, I have a Tektronix DMM 252 from the early 1990s (which I now realize is a rebranded APPA 105), and a Fluke 8060A meter (labeled IBM) from the early 80s (1).
For the Fluke, you can, today, get the schematics online in the user manual (2). For the Tek, not so much (3). Regardless, they have outlasted my phones by 15-20x.
Oh, and while I'm on my old electronics worship soapbox, let's mention the HP 48G, one of the greatest calculators ever made, which runs Reverse Polish Lisp (4) and still has an active community (5), 17 years after production ended and 27 years after release, with emulator apps for iPhone (6) and Android (7).
> My last 2 phones (Pixels) have died right around the 2 year mark.
See my post on this thread about the "mutually assured destruction" (as I sometimes call it) brought to us by those who pushed for a transition to lead-free solder. This is a real and massive problem that might come home to roost in a major way within the next ten to twenty years. Here's the link:
If it's any consolation, all of my Android gear (every Nexus/Pixel) has stayed in working order except the Nexus 7 tablet. I think the issue with that one was something like no TRIM support ruining the SSD.
That said, I doubt anything with a battery will last anywhere close to old tech.
Not the submitter, but just for context, this is a post by the admin for the groups.io community for users of Tektronix oscilloscopes and related test equipment, answering a query from a user looking for service documentation on an instrument from the 1990s era, somewhat newer than most of those discussed on the group.
Service manuals from Tektronix, like those from Hewlett-Packard, used to include what amounted to a BSEE lab syllabus in their service manuals. They would include full schematics and operational theory that often went well beyond what was needed by a technician who was tasked with maintaining the gear. Often the documentation was written by the design engineers themselves, who were the Michael Jordans and Kobe Bryants of their field. The educational value of this material was (and is) immense. Not only could you get your scope up and running after spending some time with these manuals, you understood a lot about how it worked and what motivated the underlying design decisions.
Dennis is right in that this wonderful literature went away well before its time, but I think he undersells the other side of the argument, which is that these manuals would inevitably become less useful over time as well. It was true that management became less engineering-driven in the 1980s-1990s period, which was both unfortunate and avoidable, but it's also true that over the same timeframe he's referring to, companies like Tektronix and HP had to migrate to custom ASICs and software-heavy architectures that were inherently less user-serviceable.
These aren't like products from manufacturers like Apple, Tesla, or John Deere whose overt market abuses have spawned the right-to-repair movement -- they're the tools used by the people who design those products. The instrumentation companies had to stay well ahead of the technology curve, just as they do now, and increasing integration often has the unfortunate side effect of making the inner workings of the product less accessible to the user. Personally, I'm not sure things could have evolved much differently than they did, regardless of the ratio of MBAs to EEs in management.
Bummer, too, because those manuals were AWESOME. One thing that is indisputable is that both Tektronix and HP stopped publishing component-level service manuals about 5-10 years before the technology justified it. Because their technology had to stay ahead of the market, a lot of gear that is still very useful in many applications today is now almost impossible to keep running.
> They would include full schematics and operational theory that often went well beyond what was needed by a technician who was tasked with maintaining the gear.
I see this trend with automotive manuals too; in particular, from the 40s to 60s when automatic transmissions were still a relatively new and complex part, the service manuals contained a huge amount of information on their theory of operation. Now, although they're even more complex (and computer-controlled), the most you get is how to take it apart and put it back together.
This is one of the downsides of "software is eating the world." A 1960s Ford C4 or Chevrolet TH350 is mechanical and applied physics, whereas a modern automotive transmission is mechanical and applied physics combined with a black box of software-controlled electronics. While it's awfully hard to obfuscate mechanisms, it's nearly effortless to do so with software.
Because mechanisms, once instantiated, are by their very nature "published," any repairs are documented: they require intervention by a mechanic, or could be documented by anyone with enough knowledge and training to examine the mechanism.
Patents and other intellectual property methods are the only barrier to a competitor reproducing the mechanical product, but software allows additional barriers to reproduction as well as a means to prevent modification of the operation from the manufacturer's desired behaviors.
I'd also note that the era in which those mechanisms were created and documented included an expectation that owners often did their own automotive maintenance and repair and were far more mechanically-minded (it was an age defined by mechanisms) than current owners of autos.
While the hardware it runs on may require servicing, well-designed and well-implemented code should indeed be free of such a need.
Alas, my experience maintaining many codebases over the years suggests that most code needs far more servicing, itself, than the hardware it runs on. ;)
That "trend" has been in progress for a very long time, as electronics aren't repaired so much as replaced - or rather, modularized, so that instead of focusing on single components you focus on a whole section. It either works or it doesn't - and when it's faulty, you replace the whole module. We've done that with computers since the IBM PC.
The reason/motivation was cost. It is very expensive for the consumer to have a subject-matter expert in electronics or mechanics do a full diagnosis. It's like hiring a programmer to fix your off-the-shelf software - it isn't cheap, and now that consumers have a choice, why would they pay hundreds of dollars to replace a cheap cap? After all, it's not the cost of the component they're being charged for. The same applies to cars - modern cars are computers on wheels more than they are mechanical. The mechanic only needs to be taught how to use a computer to diagnose the car and be told what part/module needs to be replaced. It's faster, and because it's lower-skilled, it's cheaper per hour too.
This is speaking as someone in the IT field - I've been there since the beginning of the PC era. We don't really expect code, or write code, to be analyzed line by line. It's modular, full of dependencies, and very fluid. To improve code, we often replace whole modules instead of rewriting: new libraries and new frameworks are a much faster and cheaper way to improve code than rewriting it from scratch or trying to optimize tens of thousands of lines. Only the factory has the experts who design those modules. With computers, the stuff we create has a very short lifespan - it's often about "good enough" vs. "best/perfect," since tomorrow will bring the fixes "automagically."
As a consequence, the field is much larger than it would be if only the real experts could take part. Imagine the skill set someone programming mainframes in the 1940s and 1950s had to have compared with a programmer today - it's worlds apart. It would be more surprising if other fields hadn't moved toward the same idea. The vast majority of programmers today don't even know how the computer works internally.
I agree with the OP - generally, quality is way down. Does anyone really expect their TV, Blu-ray player, or computer to last for 10 years? You've probably already noticed that Blu-ray isn't really in demand these days - it lasted less than 10 years! The consumer electronics market moves far too fast to bet on longevity.
Now, speaking as an engineer who likes to tinker with things and understand how they work, I do regret this shift. But I have to ask myself whether we would have had the progress we've seen in electronics if we hadn't focused on making it affordable to the masses. Ask yourself how many TVs would sell at $8,000 instead of Walmart's $200 made-in-China model - undocumented, and often you can't find the exact same model after just a few months. But it's cheap - which is why there's a market for it.
So I do not foresee this changing back - not unless the general public gains massively increased purchasing power. I'm not surprised the commuter earning $50k/year would prefer a car-shop maintenance bill in the low $100s rather than the $1000s, and even that is considered high by many. Recall that in the US more than 50% of people can't scrape together $1,000 in savings for an emergency - so nobody is buying now to save money five years from now. It's all about the now (even though we all know it's more expensive in the long run). And to lower cost, everything is modular and "automated," without the need for expensive experts spending hours or days diagnosing the exact part that failed.
> It was true that management became less engineering-driven in the 1980s-1990s period, which was both unfortunate and avoidable, but it's also true that over the same timeframe he's referring to, companies like Tektronix and HP had to migrate to custom ASICs and software-heavy architectures that were inherently less user-serviceable.
Why do you think that? What about ASICs and software-heavy architectures makes devices less user-serviceable? I mean, software has schematics too - we usually call those "source code," but it's the same thing - so the assumption would be that devices would come with the source code for the software. With ASICs, I can see a problem when the ASIC itself is broken and you can't buy it anymore, but I'd think that's probably not the most common failure mode?
Code doesn't break, overheat, or wear out, so it's not something you'd expect to see in a service manual.
ASICs can (and do) break, unfortunately, and schematics continue to be helpful in diagnosing and repairing those sorts of problems. Often an elaborate recalibration process is needed after doing so, requiring all kinds of specialized equipment. As a result, both the supply of and the demand for those schematics started ramping down in the 1990s.
Early on in that cycle, some major customers like Boeing continued to insist that schematics be provided, which is the only reason we have some of those documents at all.
> Code doesn't break, overheat, or wear out, so it's not something you'd expect to see in a service manual.
But the purpose of a schematic is not to identify a broken component, it's to enable you to understand the system and thus diagnose malfunction, which is exactly what you would need the code for as well. When it's obvious what component is broken (like, a blown cap or a burned resistor or whatever), you don't need a schematic to figure out the fix. You need a schematic to understand all the stuff around the actual defect in order to be able to locate it--and code would be equally useful for that. The fact that there is no wear to repair in the code itself is secondary--there is no (problematic) wear in most of the circuit that you study to find the fault either.
Also, even if there is no wear to repair, there could still be opportunities to modify things either for repair or for diagnostic purposes. Like, make the software produce test signals that you can trace through the circuit. Or "rewire" I/O pins in the software to work around broken hardware. Exactly what you would use schematics for as well: It's not always about replacing a broken part, sometimes you also just modify some other parts of the circuit to restore function or to diagnose a problem, and the same could potentially be done in the software as well.
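As a hypothetical illustration of that last point, firmware that drives its I/O through a mapping table can be rerouted around a dead pin without touching the hardware. The signal names and pin numbers below are invented for the example; real firmware would of course also have to reconfigure the pin's electrical function.

```python
# Hypothetical signal-to-pin map for a device whose firmware we can modify.
SIGNAL_TO_PIN = {
    "DATA_READY": 4,
    "TRIGGER_OUT": 7,  # suppose pin 7's output driver has failed
    "SPARE_GPIO": 9,   # unused spare pin broken out on the board
}

def reroute(signal_map, broken_signal, spare_signal):
    """Reassign a signal from its broken pin to an unused spare pin."""
    remapped = dict(signal_map)                       # leave the original intact
    remapped[broken_signal] = remapped.pop(spare_signal)
    return remapped

fixed = reroute(SIGNAL_TO_PIN, "TRIGGER_OUT", "SPARE_GPIO")
print(fixed["TRIGGER_OUT"])  # → 9: TRIGGER_OUT is now driven on the spare pin
```

With the schematic (or the source) in hand, a repair like this is a few minutes' work; without either, you don't even know a spare pin exists.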
> ASICS can (and do) break, unfortunately, and schematics continue to be helpful in diagnosing and repairing those sorts of problems.
Well, yeah, sure. But my point is that many defects are in other parts of the circuit, and knowing the internals of the ASIC could still be helpful in diagnosing and fixing those problems.
This is equivalent to what Apple has done. For a while they had their own in-house repair service through the Genius Bar, and it coexisted with independent repair shops. Now they try to prevent this with software lockouts on repaired equipment and designs that are increasingly difficult or impossible to repair. It’s interesting to see similar patterns in other industries.
I find it a little frustrating but since I’m still not willing to switch to Windows, Linux, or Android I keep using their products. I buy fewer electronics than I used to since they seem to be usable for much longer so it ends up not being much of an issue, but it feels anti-consumer to me.
Of course, troubleshooting without a schematic is also an important skill. It is often done for laptops (which often don't have schematics, and the only ones available are leaks) as well as other consumer electronics.
The idea that it somehow stops competitors from copying the product is very weak; it's easy enough to reverse-engineer a design, and there are entire companies whose existence is solely to provide such services.
Louis Rossmann (independent MacBook repair shop owner and YouTuber) has recently been flying all over the US fighting for right-to-repair (r2r), from Maine to New York to Washington state to possibly Hawaii now. He's even learning about the plight of farmers unable to get their equipment running again after repairing it themselves at harvest time, because a factory-authorized tech has to use a factory magic box to flip a magic bit somewhere in the tractor to clear an error - while 20 other farmers are waiting to have their self-disabled tractors re-enabled as well.
You used to open any consumer TV or Radio and there would be schematics inside. When I bought an Apple ][ computer in 1978, the manual had full schematics in it. And I used it to fix things.
Intel had schematics for all their motherboards. Unfortunately they've left the motherboard business.
Interesting, I didn't know that. Was that true all the way up until they left the business? What was the most recent generation of motherboards for which they published the schematics?
I don't recall ever seeing schematics for my Ivy Bridge era Intel desktop motherboard, but its manual was by far the most detailed I've ever seen for a consumer motherboard. It included details like the location of each temperature sensor, the current ratings for each fan header, the model number of the Super IO chip, and a full memory map.
For many years my purchasing policy for test equipment was "no schematics, no sale". I still try to stick with that as far as possible, but it means I don't buy much new equipment.
I don't remember where exactly I saw this, but some boards even stack components, so that e.g. a tiny SMD part sits under a bigger component that has some empty volume under it (e.g. an SMD resistor under a socket or capacitor). Good luck finding that without taking the board apart completely.
To be honest, I think layouts like this are done for practical reasons like space constraints rather than to annoy reverse engineers. If you are serious and can disassemble a board, you will find these components immediately.
Yes. It took two attempts. The second attempt worked, electrically, but didn't have a measurable effect on the noise reduction, so I did not end up keeping it.
An x-ray machine will also easily reveal such hidden features. It's not something everyone has, but then again, as I noted in another comment here, there are entire companies specialising in this sort of RE work.
I've wondered this in the past. If you buy something from the 1970s it seems like everything had a schematic in the box. As if to imply many customers could try to fix problems themselves. Now, people will bring their car to the dealer to get windshield wipers changed.
It's not just TI. The lack of schematics for electronic equipment across the board has been a serious problem for decades, and just gets worse over time.
Schematics can't be copyrighted, so anyone could take them and build the instrument themselves. I think schematics disappeared around the rise of China; companies didn't want their products copied.
I think that is backwards. The only thing you may lose is R&D time, and there are ways to recoup that.
Let's make schematics available again.