Considering how prevalent it is, converting analog data (i.e., the real world) into digital form is a piece of our technological portfolio that is still surprisingly difficult to get right.
As much as some CS majors may like to ignore it, computers that interact with the real world need to do this conversion. It's a difficult problem, partly because digital devices mostly have no way of knowing whether the data they're getting is 'good enough' to act on, which leads to some annoying situations. Another problem is that once data is digitized, it looks far cleaner and more official than it may actually be.
These musings come courtesy of a day filled with events tied together by this theme:
The undergraduates working with me on the Eddy Current project are constantly running into problems with digitized signals from an analog accelerometer.
I spent a long time striving to get the voice recognizer on my phone to work consistently – it’s good about 80% of the time, which is just enough to inspire hope, but bad enough to be untrustworthy.
One of the major hurdles of the microgravity test apparatus we’re working on is getting good, accurate data from the analog sensors to the digital feedback loops.
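To make the "digitized data looks cleaner than it is" point concrete, here is a minimal sketch (not from any of the projects above; the sensor values are simulated) of a 10-bit ADC reading a noisy analog voltage. A single digitized reading prints with confident-looking decimal places while still carrying the full sensor noise; averaging many samples, a common mitigation, gets much closer to the true value.

```python
import random

def quantize(voltage, v_ref=3.3, bits=10):
    """Map an analog voltage onto one of 2**bits discrete codes,
    the way a simple ADC would."""
    levels = 2 ** bits
    code = round(voltage / v_ref * (levels - 1))
    return max(0, min(levels - 1, code))  # clamp to the ADC's range

def code_to_voltage(code, v_ref=3.3, bits=10):
    """Convert a raw ADC code back into volts."""
    return code / (2 ** bits - 1) * v_ref

# Simulate a noisy analog sensor: a true value buried in Gaussian noise.
true_voltage = 1.650
random.seed(42)
samples = [quantize(true_voltage + random.gauss(0, 0.05)) for _ in range(64)]

# A single reading prints with four decimal places and looks precise,
# but it carries the full sensor noise.
single = code_to_voltage(samples[0])

# Averaging many samples (oversampling) recovers a far better estimate.
averaged = code_to_voltage(sum(samples) / len(samples))

print(f"single reading: {single:.4f} V")
print(f"64-sample mean: {averaged:.4f} V (true value {true_voltage:.4f} V)")
```

Nothing in the digital side distinguishes the lone noisy reading from the averaged one; both come out as tidy numbers, which is exactly what makes the conversion so easy to trust more than it deserves.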
It seems like every time data needs to be digitized, there is a different system to do it, leading to a lot of redundant effort. Luckily, plenty of smart people recognize both the problem and the money to be made in solving it; in my mind, their improvements can't come fast enough.