Hardware Notifications For ISS Flybys
By Bryan Cockfield
Since Sputnik launched in the 1950s, it’s been possible to look outside at night and spot artificial satellites orbiting with the naked eye. While Sputnik isn’t up there anymore, a larger, more modern satellite is readily located: the International Space Station. In fact, NASA has a program which will alert anyone who signs up when the ISS is about to fly overhead. A better alert, though, is this ISS notifier, a dedicated piece of hardware that guarantees you won’t miss the next flyby.
This notifier is built around the Tokymaker, a platform aimed at making electronics projects almost painfully easy to learn. Connections to various modules can be made without soldering, and programming is done via a graphical interface reminiscent of Scratch. Using these tools, [jaime_lc98] designed a tool which flips up a tiny paper astronaut whenever the ISS is nearby. The software side takes advantage of IFTTT to easily and reliably control the servo on the Tokymaker.
The project page goes into detail about how to set up IFTTT and also how to use the block-style language to program the Tokymaker. It’s pretty straightforward to get up and running, relatively inexpensive as well, and might be a great way to get the younger folks excited about space while also teaching them about programming. It might also be a good stepping stone on the way to other ISS-related hacks.
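If you’d rather script the notifier’s brains than build them in blocks, the core decision is easy to sketch. The Python below is not [jaime_lc98]’s Tokymaker program, just an illustration of the logic, and it assumes you already have predicted pass start times (as epoch seconds) from whatever flyby-prediction service you prefer:

```python
import time

def astronaut_should_be_up(pass_starts, now=None, lead_s=600, pass_len_s=600):
    """True if an ISS pass starts within `lead_s` seconds, or is assumed
    to still be underway (passes treated as `pass_len_s` seconds long)."""
    now = time.time() if now is None else now
    return any(start - lead_s <= now <= start + pass_len_s
               for start in pass_starts)

# A pass predicted five minutes from "now" should flip the astronaut up:
now = 1_700_000_000
print(astronaut_should_be_up([now + 300, now + 86_400], now=now))  # True
```

On the real device the True/False result would drive the servo; in the Tokymaker build that glue is handled by IFTTT instead.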
Neural Network In Glass Requires No Power, Recognizes Numbers
By Al Williams
We’ve all come to terms with a neural network doing jobs such as handwriting recognition. The basics have been in place for years and the recent increase in computing power and parallel processing has made it a very practical technology. However, at the core level it is still a digital computer moving bits around just like any other program. That isn’t the case with a new neural network fielded by researchers from the University of Wisconsin, MIT, and Columbia. This panel of special glass requires no electrical power, and is able to recognize gray-scale handwritten numbers.
The glass contains precisely controlled inclusions, such as air holes or impurities like graphene. When light strikes the glass, complex wave patterns occur and the light becomes more intense in one of ten areas on the glass. Each of those areas corresponds to a digit. For example, here are two examples of the pattern of light recognizing a two on the glass:
With a training set of 5,000 images, the network was able to correctly identify 79% of 1,000 input images. The team thinks they could do better if they allowed looser constraints on the glass manufacturing. They started with very strict design rules to assist in getting a working device, but they will evaluate ways to improve the recognition percentage without making the glass too difficult to produce. The team also has plans to create a network in 3D.
Hacking This Smart Bulb Is Almost Too Easy
By Tom Nardi
The regular Hackaday reader no longer needs to be reminded about how popular the ESP8266 is; they see the evidence of that several times a day. But what might not be quite so obvious is that it isn’t just us hacker types that are in love with the inexpensive IoT microcontroller, it’s also popping up more and more frequently in commercial products.
As [Majenko] demonstrates, one of those ESP-powered devices is the LOHAS Smart LED Bulb. Upon cracking one open, he found that these relatively low-cost bulbs are little more than a standard ESP8266 chip and a couple of LED drivers. He wanted to see how hard it would be to get his own code running on the bulb, and by the looks of it, it took longer to get the thing open than it did to load it up with a custom firmware.
The bulb’s PCB features the aforementioned ESP8266, a 1 MB 25Q80 flash chip, and MY9231 LED drivers. Whoever put the board together was nice enough to label the RX, TX, and GPIO test points, though [Majenko] notes that what’s labeled as 3.3 V appears dead. With an ESP-01 programmer wired up to the board and the appropriate board settings (which he provides), you can use the Arduino IDE to upload whatever you like to it.
Running “Hello World” on a smart bulb is fun and all, but what about kicking on those LEDs? [Majenko] found a library that works with the MY9231 drivers, and it didn’t take long to figure out which of the ESP’s pins were used to communicate with them. All in all, he said it was far easier than he expected.
You’ll probably want to put this bulb back into service after reprogramming, so [Majenko] advises caution when cracking open the shell. There are clips holding on the diffuser which he assures us are going to break no matter what you do, plus some silicone adhesive. He suggests super glue to hold it together when you’re done programming it, and using an OTA firmware so you don’t need to get back in there.
Jigsaw Motor Uses PCB Coils For Radial Flux
By Dan Maloney
Electric motors are easy to make; remember those experiments with wire-wrapped nails? But what’s easy to make is often hard to engineer, and making a motor that’s small, light, and powerful can be difficult. [Carl Bugeja] however is not one to back down from a challenge, and his tiny “jigsaw” PCB motor is the latest result of his motor-building experiments.
We’re used to seeing brushless PCB motors from [Carl], but mainly of the axial-flux variety, wherein the stator coils are arranged so their magnetic lines of force are parallel to the motor’s shaft – his tiny PCB motors are a great example of this geometry. While those can be completely printed, they’re far from optimal. So, [Carl] started looking at ways to make a radial-flux PCB motor. His design has six six-layer PCB coils soldered perpendicular to a hexagonal end plate. The end plate has traces to connect the coils in a star configuration, and together with a matching top plate, they provide support for tiny bearings. The rotor meanwhile is a 3D-printed cube with press-fit neodymium magnets. Check out the build in the video below.
Connected to an ESC, the motor works decently, but not spectacularly. [Carl] admits that more tweaking is in order, and we have little doubt he’ll keep optimizing the design. We like the look of this, and we’re keen to see it improved.
Artificial Intelligence Powers A Wasp-Killing Machine
By Lewin Day
At the time of publication, Hackaday is of the understanding that there is no pro-wasp lobby active in the United States or abroad. Why? Well, the wasp is an insect that is considered incapable of any viable economic contribution to society, and thus has few to no adherents who would campaign in its favor. In fact, many actively seek to defeat the wasp, and [TegwynTwmffat] is one of them.
[Tegwyn]’s project is one that seeks to destroy wasps and Asian Hornets in habitats where they are an invasive pest. To achieve this goal without harming other species, the aim is to train a neural network to detect the creatures, before then using a laser to vaporize them.
Initial plans involved a gimballed sentry-gun style setup. However, safety concerns about firing lasers in the open, combined with the difficulty of imaging flying insects, conspired to put this idea to rest. The current system instead guides insects down a small tube at the entrance to a hive. Here they can be easily imaged at close range and in great detail, and, if detected as wasps or hornets, vaporized by a laser safely contained within the tube.
Bandwidth is one of those technical terms that has been overloaded in popular speech: as an example, an editor might ask if you have the bandwidth to write a Hackaday piece about bandwidth. Besides this colloquial usage, there are several very specific meanings in an engineering context. We might speak about the bandwidth of a signal like the human voice, or of a system like a filter or an oscilloscope — or, we might consider the bandwidth of our internet connection. But, while the latter example might seem fundamentally different from the others, there’s actually a very deep and interesting connection that we’ll uncover before we’re done.
Let’s have a look at what we mean by the term bandwidth in various contexts.
Perhaps the most common usage of the term bandwidth is for the data bandwidth of digital channels, in other words, the rate of information transfer. In this case, it’s measured in bits per second. Your ISP might provision you 50/10 Mbps internet service for example, meaning you have 50 million bits per second of download capacity and 10 million bits per second of upload. In this case you would say that the download bandwidth is 50 Mbps. Measuring the digital bandwidth of a network channel is as easy as sending a fixed number of bits and timing how long it takes; this is what those broadband speed test sites do.
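That timing measurement reduces to a single division; a quick Python sketch of the arithmetic a speed-test site performs:

```python
def throughput_mbps(n_bytes, seconds):
    """Digital bandwidth: bits transferred divided by elapsed time, in Mbps."""
    return n_bytes * 8 / seconds / 1_000_000

# A 25 MB transfer that completes in 4 seconds:
print(throughput_mbps(25_000_000, 4.0))  # 50.0 (Mbps)
```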
We’ll come back to digital bandwidth in a little while, to see how it’s connected to the next concept, that of signal bandwidth.
The term bandwidth is also used to describe the frequency range occupied by a signal. In this case, the bandwidth of the signal is defined as the maximum frequency contained in the signal minus the minimum frequency. If a signal has frequency components between 100 Hz and 300 Hz, we would say that the signal has a bandwidth of 200 Hz. As a concrete example, consider the medium-wave (aka AM) broadcast band in the US: each signal occupies a bandwidth of 20.4 kHz. So, a transmitter operating on the 1000 kHz channel should only output frequencies between 989.8 kHz and 1010.2 kHz. It’s interesting to note that an AM-modulated RF signal takes up twice the bandwidth of the transmitted audio, since both frequency sidebands are present; that 20.4 kHz RF bandwidth is being used to send audio with a maximum bandwidth of 10.2 kHz.
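The sideband arithmetic is simple enough to check in a couple of lines of Python, reproducing the numbers for the 1000 kHz channel above:

```python
def am_channel_edges(carrier_khz, audio_bw_khz):
    """An AM signal occupies carrier +/- audio bandwidth (both sidebands),
    so the RF bandwidth is twice the audio bandwidth."""
    return (carrier_khz - audio_bw_khz,
            carrier_khz + audio_bw_khz,
            2 * audio_bw_khz)

low, high, rf_bw = am_channel_edges(1000.0, 10.2)
print(round(low, 1), round(high, 1), round(rf_bw, 1))  # 989.8 1010.2 20.4
```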
While the definition of bandwidth seems very straightforward, sometimes the application to common signals can be confusing. Consider an ideal square wave at 1 kHz. This signal repeats at a frequency of 1 kHz, so we might assume that it has a bandwidth of 1 kHz. In fact, an ideal square wave contains components at all odd multiples of the fundamental frequency, in this case at 3 kHz, 5 kHz, 7 kHz, etc. The practical upper limit, which determines the bandwidth of the signal, depends on how “ideal” the square wave is — in other words, the sharpness of the edges. While the amplitude of these components falls with increasing order, they’re important for properly constructing the original waveform. In fact, a common way to generate a sine wave is to filter out the higher-order components of a square wave signal.
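You can watch those odd harmonics do their work numerically. This short Python sketch sums the Fourier series of an ideal unit square wave; with only the fundamental the mid-cycle value overshoots to 4/π ≈ 1.273, and adding harmonics pulls it toward the ideal value of 1:

```python
import math

def square_partial(t, f0, n_harmonics):
    """Partial Fourier sum of an ideal unit square wave:
    (4/pi) * sum of sin(2*pi*k*f0*t)/k over the first n odd harmonics k."""
    total = sum(math.sin(2 * math.pi * k * f0 * t) / k
                for k in range(1, 2 * n_harmonics, 2))
    return 4 / math.pi * total

# Value at mid-cycle (t = T/4) of a 1 kHz square wave:
for n in (1, 5, 500):
    print(n, round(square_partial(0.25e-3, 1000, n), 3))
```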
Given a signal, how do we determine its bandwidth? The plain old telephone service (POTS) of my youth, for instance, passed frequencies between 300 Hz and 3000 Hz, which was found to be sufficient for voice communications; we might say signals passing through this system were limited to a bandwidth of 2700 Hz. While this would be true if the POTS system had sharp frequency edges, in reality, the signals passing through will have some small components below 300 Hz and above 3000 Hz. Because of this, it’s more common to define a non-zero threshold for the edges of the band. For instance, in measuring the highest and lowest frequencies in a signal, we might use the frequencies where the signal power is half of its peak value, or -3 dB, corresponding to 70.71% in amplitude terms. While 3 dB is by far the most common value, you’ll find others used as well.
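Those dB figures are easy to double-check. A couple of Python helpers confirm that half power is about -3.01 dB and corresponds to the 70.71% amplitude figure quoted above:

```python
import math

def db_from_power_ratio(ratio):
    """Power ratio to decibels: dB = 10 * log10(P_out / P_in)."""
    return 10 * math.log10(ratio)

def amplitude_ratio_from_db(db):
    """Decibels back to an amplitude (voltage) ratio: 10 ** (dB / 20)."""
    return 10 ** (db / 20)

print(round(db_from_power_ratio(0.5), 2))        # -3.01 (the "half-power" point)
print(round(amplitude_ratio_from_db(-3.01), 4))  # 0.7071, i.e. 70.71% amplitude
```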
A third use of the term bandwidth is to describe the range of frequencies passed by a system, such as a filter, amplifier, or the telephone system described above. While a particular signal passing through the system may have a quite narrow bandwidth — a nearly-pure sine wave at around 2600 Hz with a bandwidth of just a few Hz, for instance — the system itself still has a bandwidth of 2700 Hz. As with signal bandwidth, system bandwidth can be measured at 3 dB points (where the signals passed by the system have dropped to half power), or using other thresholds — 6 dB and 20 dB might be used for certain filters.
As an example, I measured the response of a 1090 MHz filter for receiving ADS-B transmissions. The 3 dB response of this filter extends from 927.3 MHz to 1,181.8 MHz, for a 3 dB bandwidth of 254.5 MHz. On the other hand, if measured at the -20 dB points, the filter has a 312 MHz bandwidth.
For another practical example, consider an oscilloscope — the “X MHz” in the scope specifications refers to the bandwidth, and this is almost always measured at the -3 dB point. The front-end amplifier of a 100 MHz oscilloscope will pass frequencies between 0 Hz (DC) and 100 MHz with 3 dB loss or less. This means that a 100 MHz sine wave may only show 71% of its actual amplitude, but also that frequencies somewhat above 100 MHz can be viewed — they’ll just be reduced in amplitude even more. The other consequence is that a 100 MHz square wave will look like a sine wave on a 100 MHz scope; to get an accurate picture of the square wave, the scope must have a bandwidth greater than about five times the square wave fundamental frequency. The 100 MHz oscilloscope is best used for observing square waves of 20 MHz or less.
Oscilloscope bandwidth is commonly assessed by measuring the rise time of a very fast edge. Assuming the signal edge is much faster than the rise time of the oscilloscope, the bandwidth of the scope is BW = 0.35/t_rise, with bandwidth in Hz and rise time in seconds. A scope with a rise time of 1 ns, for example, has a bandwidth of 350 MHz. The 0.35 factor assumes that the frequency-limiting elements in the scope’s front end produce a Gaussian filter shape, although the result is almost identical for a first-order RC filter; scopes with a sharper “brickwall” response may have factors of 0.45 or more. For more information about oscilloscope bandwidth, check out this article by Jenny List.
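That rule of thumb is a one-liner. A small Python sketch makes the assumed filter-shape factor explicit:

```python
def scope_bandwidth_hz(rise_time_s, k=0.35):
    """Estimate oscilloscope bandwidth from a measured rise time.
    k = 0.35 assumes a Gaussian (or first-order RC) front end;
    scopes with a sharper "brickwall" response may need k = 0.45 or more."""
    return k / rise_time_s

print(scope_bandwidth_hz(1e-9))  # ~3.5e8 Hz: a 1 ns rise time means 350 MHz
```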
At the beginning of this article, I mentioned a connection between digital bandwidth and signal bandwidth: it turns out that the relationship between them is a cornerstone of information theory. Consider the question of what you can do with a channel of 1 Hz bandwidth. What’s to limit the amount of information that can be sent over this link? Claude Shannon was the first to solve this problem for an abstract communication system where symbols are sent over the channel. He came up with the noisy-channel coding theorem, which showed that the maximum possible information rate depends on the probability that a symbol gets corrupted in transmission. Channels which create more errors during transmission limit the rate that data can be transmitted, no matter how clever we get with error-correcting codes.
Later, the Shannon-Hartley theorem extended this result to less abstract signal channels where the error is due to additive white Gaussian noise (AWGN). The net result is the same: it’s noise in the channel that ultimately limits the rate of information that can be transmitted. In the case of a channel corrupted by AWGN, we have the following result:

C = B log2(1 + S/N)

The channel capacity, C, in bits/second, depends on the bandwidth, B, in Hz, and the ratio of the signal power, S, to the noise power, N, in the channel. This is the theoretical limit of the channel, and we may have to work very hard coming up with clever error-correcting codes to approach this limit in practice, but we can never exceed it.
Armed with this equation, we can return to the original question: how much information can we send over a 1 Hz channel per second? If the channel is noise-free, the signal-to-noise ratio (SNR) is infinite, and we can send data at an unlimited rate — of course, this never happens. In the case of equal signal and AWGN noise powers, or a 0 dB SNR, however, the result shows that we can only send a maximum 1 bit per second. That’s a big drop from infinity! On the other hand, if we have a channel of 60 dB SNR, we can theoretically send a maximum of 19.9 bps in our 1 Hz bandwidth. Of course, if the noise level remains the same, we need to increase the signal power by 60 dB — a million times — to achieve this. And, the reality is that we can only approach these rate limits, and the codes which do so comprise a large body of research.
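The numbers above are easy to verify with the Shannon-Hartley formula in a few lines of Python:

```python
import math

def channel_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N) for an AWGN channel."""
    snr = 10 ** (snr_db / 10)        # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr)

print(channel_capacity_bps(1, 0))             # 1.0 bps at 0 dB SNR
print(round(channel_capacity_bps(1, 60), 1))  # 19.9 bps at 60 dB SNR
```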
Bandwidth = Bandwidth = Bandwidth
Even though the term is used in different ways in different contexts, the concepts of bandwidth are very simple. In a nutshell, signal bandwidth is the amount of frequency occupied by a signal, system bandwidth is the range of frequencies passed by the system, and digital bandwidth is the rate at which information flows over a channel. But, connecting these simple concepts are some very interesting fundamental principles of information theory. We’ve only been able to scratch the surface of this fascinating area in this article; sound off in the comments below if you’d like to see more articles about information theory.
Exploring The Raspberry Pi 4 USB-C Issue In-Depth
By Maya Posch
It would be fair to say that the Raspberry Pi team hasn’t been without its share of hardware issues: the Raspberry Pi 2 was camera shy, the Raspberry Pi PoE HAT suffered from a rather embarrassing USB power issue, and now the all-new Raspberry Pi 4, the first of the line to be powered over USB-C, turns out not to do USB-C very well unless you go for a ‘dumb’ cable.
Join me below for a brief recap of those previous issues, and an in-depth summary of USB-C, the differences between regular and electronically marked (e-marked) cables, and why detection logic might be making your brand-new Raspberry Pi 4 look like an analogue set of headphones to the power delivery hardware.
A Trip Down Memory Lane
Back in February of 2015, a blog post on the official Raspberry Pi blog covered what they figured might be ‘the most adorable bug ever‘. In brief, the Raspberry Pi 2 single-board computer (SBC) employs a wafer-level chip-scale package (WL-CSP) that provides switch-mode power supply functionality. Unfortunately, as is the risk with any wafer-level packaging like this, it exposes the bare die to the outside world. In this case some electromagnetic radiation, like the light from a xenon camera flash, enters the die and causes a photoelectric effect.
The resulting disruption caused the chip’s regulation safeties to kick in and shut the entire system down, fortunately without any permanent damage. The fix is to cover the chip in question with an opaque material before doing something like taking photos of it with a xenon flash, or pointing a laser pointer at it.
While the Raspberry Pi 2’s issue was indeed hard to predict and objectively more adorable than dangerous, the issue with the Power over Ethernet (PoE) HAT was decidedly less cute. It essentially rendered the USB ports unusable due to over-current protection kicking in. The main culprit here is the MPS MP8007 PoE power IC (PDF link), with its relatively sluggish flyback DC-DC converter implementation being run at 100 kHz (recommended <200 kHz in datasheet). Combined with a lack of output capacitance (41% of the recommended value), this meant that surges of current were being passed to the USB buffer capacitors during each (slow) power input cycle, which triggered the USB chipset’s over-current protection.
The solution here was a redesign of the PoE HAT, which increased the supply output capacitance and smoothed the output as much as possible to prevent surges. This fixed the problem, allowing even higher-powered USB devices to be connected. The elephant-sized question in the room was of course why the Raspberry Pi team hadn’t caught this issue during pre-launch testing.
USB Type-C is Far More Than Just a Connector
The new Raspberry Pi 4 was just released a few weeks ago, and is the biggest overhaul of the platform since the original Model B+ that defined the ‘Raspberry Pi form factor‘. But in less than a week we started hearing about a flaw in how the USB-C power input was behaving. It turns out the key problem arises when using electronically marked cables, which include circuitry the USB-C specification uses to unlock advanced features (like supplying more power, or reconfiguring what the signal wires between devices are used for). These e-marked cables simply won’t work with the Pi 4, while their “dumb” cousins do just fine. Let’s find out why.
The USB-C specification is quite complex. Although mechanically the USB-C connector is reversible, electrically a lot of work is being performed behind the scenes to make things work as expected. Key to this are the two Configuration Channel (CC) pins on each USB-C receptacle. These will each be paired to either the CC wire in the cable or (in an e-marked cable) to the VCONN conductor. A regular USB-C cable will leave the VCONN pin floating (unconnected), whereas the e-marked cable will connect VCONN to ground via the in-cable Ra resistor as we can see in this diagram from the USB-C specification.
It’s also possible to have an e-marked USB-C cable without the VCONN conductor, by having SOP ID chips on both ends of the cable and having both the host and the device supply the cable with VCONN. Either way, both the host and the device monitor the CC pins on their end, deducing the resistance present from the monitored voltage. In this case, the behavior of the host is dictated by the following table:
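The gist of that host-side table is easiest to show as code. The Python below is our paraphrase of the source’s connection states from the USB-C specification, not verbatim spec text; ‘Rd’ is the pull-down a sink presents, ‘Ra’ is the in-cable VCONN load, and ‘open’ means no termination:

```python
# Source-side interpretation of the terminations seen on (CC1, CC2).
CC_STATES = {
    ("open", "open"): "nothing attached",
    ("Rd",   "open"): "sink attached",
    ("open", "Rd"):   "sink attached",          # plug flipped
    ("Ra",   "open"): "powered cable, no sink",
    ("open", "Ra"):   "powered cable, no sink",
    ("Rd",   "Ra"):   "powered cable with sink attached",
    ("Ra",   "Rd"):   "powered cable with sink attached",
    ("Rd",   "Rd"):   "debug accessory",
    ("Ra",   "Ra"):   "audio adapter accessory",
}

print(CC_STATES[("Rd", "Ra")])  # a compliant sink on an e-marked cable
print(CC_STATES[("Ra", "Ra")])  # an audio dongle, as far as the source knows
```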
The Analog Raspberry Pi 4
Now that we have looked at what a basic USB-C setup looks like, it’s time to look at the subject of this article. As the Raspberry Pi team has so gracefully publicized the Raspberry Pi 4 schematics (minus the SoC side for some reason), we can take a look at what kind of circuit they have implemented for the USB-C sink (device) side:
As we can see here, contrary to the USB-C specification, the Raspberry Pi design team has elected to not only connect CC1 and CC2 to the same Rd resistor, but also shorted CC1 and CC2 in the process.
Remember, the spec is looking for one resistor (Rd) on each of the two CC pins. That way, an electronically marked cable connecting the Pi 4 board and a USB-C charger capable of using this standard would have registered as having the appropriate Rd resistance, hence 3A at 5V would have been available for this sink device.
Since this was not done, we instead get an interesting setup: the CC connection is still established, but on the Raspberry Pi side the resistance doesn’t read as the 5.1 kOhm of the Rd resistor, because the cable’s Ra resistor sits on the other CC pin of the receptacle, and the short places it in parallel with Rd. With 5.1 kOhm for Rd and 1 kOhm for Ra, the resulting resistor network has an equivalent resistance of 836 Ohm.
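That equivalent-resistance figure is just the standard parallel-resistor formula; a quick Python check:

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# The Pi 4's shared Rd (5.1 kOhm) in parallel with the cable's Ra (1 kOhm):
print(round(parallel(5100, 1000)))  # 836 (Ohms), nowhere near the expected Rd
```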
This value falls within the range permitted for Ra, and thus the Raspberry Pi 4 is detected by the source as Ra + Ra, marking it as an audio adapter accessory, essentially a USB-C-to-analog audio dongle. The source therefore does not provide any power on VBUS, and the Raspberry Pi 4 never powers up.
As mentioned earlier, a non-electronically marked cable avoids this issue entirely: there’s no Ra resistor in the cable to get in the way of the detection, so the Raspberry Pi 4 does its thing.
Testing is More Important than Developing
Much like with the PoE HAT issue, the question was quickly raised: why wasn’t this issue with the Raspberry Pi 4’s USB-C port found much sooner, during testing? Back with the PoE HAT, the official explanation from the Raspberry Pi team was that they had not tested with the appropriate devices, had cut back on testing because they were late with shipping, and did not ask the field testers the right questions, leading to many scenarios simply not being tested.
In a response to Tech Republic, Eben Upton, the CEO of Raspberry Pi Ltd, has admitted to the Pi 4 USB-C error. And a spokesperson for Raspberry Pi Ltd said in a response to Ars Technica that an upcoming revision of the Raspberry Pi 4 is likely to appear within the “next few months”.
The cause of this error sneaking its way into production seems obvious. Much like with the PoE HAT, a design flaw crept in, no one caught it during schematic validation, the prototype boards didn’t see the required amount of testing (leaving obvious use cases untested), and thus we’re now seeing another issue that leaves brand-new Raspberry Pi hardware essentially defective.
Sure, it’s possible to hack these boards as a workaround. With the PoE HAT one could have soldered on a big electrolytic capacitor to the output stage of the transformer’s secondary side and likely ‘fixed’ things that way. Similarly one could hack in a resistor on the Raspberry Pi 4 after cutting a trace between both CC pins… or even simpler: never use an e-marked USB-C cable with the Raspberry Pi 4.
Yet the sad point in all of this is undoubtedly that these are supposed to be ready-to-use, fully tested devices that any school teacher can just purchase for their classroom. Not a device that first needs a trip down to the electronics lab to have various issues analyzed and bodge wires and components soldered onto various points before being stuffed into an enclosure and handed over to a student or one’s own child.
The irony in all of this is that because of these errors, the Raspberry Pi team has unwittingly made it clear to many that testing hardware isn’t hard, it just has to be done.
Compliant USB-C is Not Easy
In the course of writing this article, yours truly had to dive deep into the USB-C specification, and in fairness to the Raspberry Pi design team, implementing USB-C for the first time is not a walk in the park. Of all specifications out there, USB-C does not rank very high in user-friendliness or even conciseness. It’s a big, meandering, 329-page document, and that doesn’t even cover the monstrosity that is USB-C Power Delivery.
Making a mistake the first time one implements a USB-C interface in a design is no great shame. The USB-C specification adds countless details and complexities that simply do not exist in previous versions of the USB standard. Having this mess of different types of cables with many more conductors does not make life easier either.
The failure here is that the USB-C design errors weren’t discovered during the testing phase, before the product was manufactured and shipped to customers. This is something where any team that has been involved in such a project needs to step back and re-evaluate their testing practices.
Raspberry Pi Cyberdeck Inspired by Rare MSX
By Tom Nardi
When we see these cyberdeck builds, the goal is usually to just make something retro-futuristic enough to do William Gibson proud. There’s really no set formula, but offset screens coupled with large keyboards and a vague adherence to 1980s design language seem to be the most important tenets.
Granted, the recent build by [lewisb42] still leans heavily on those common tropes, but at least there’s a clear lineage: his Raspberry Pi retro all-in-one is styled after a particularly rare bright red variant of the MSX that Sony released in Japan. Some aficionados consider the circa-1984 machine, known as the HIT-BIT HB-101, to be the peak of MSX styling. Since getting his hands on a real one to retrofit wasn’t really an option, he had no choice but to attempt recreating some of the computer’s unique design elements from scratch.
The faceted sides were 3D printed in pieces, glued together, and then attached to a 1/4″ thick backplate made out of polycarbonate. For the “nose” piece under the keyboard, [lewisb42] actually used a piece of wood cut at the appropriate angles with a table saw. The top surface of the computer, which he calls the FLIPT-BIT, is actually made of individual pieces of foamed PVC sheet.
If all this sounds like a big jigsaw puzzle, that’s because it basically is. To smooth out the incongruous surfaces, he used a combination of wood putty, body filler, spot putty, and more time sanding than we’d care to think about. For the 3D printed surface details such as the screen bezel and faux cartridge slots, he used a coat of Smooth-On’s XTC-3D and yet more sanding. While [lewisb42] says the overall finish isn’t quite as good as he hoped, we think the final look is fantastic considering the combination of construction techniques hiding under that glossy red paint job.
As for the electronics, there are no real surprises: the FLIPT-BIT uses a keyboard and touchpad from Perixx, a seven inch TFT display, and of course the Raspberry Pi 3. The display runs at 12 V, so [lewisb42] used a combination of a generic laptop-style power supply and a 5 V step-down converter to keep everyone happy. While it doesn’t currently have a battery, it seems like there’s more than enough room inside the case to add one if he ever wants to go mobile.