Classic Triumph Gets A Modern Digital Dash
By Lewin Day
Analog gauges gave way to all manner of fancy electroluminescent and LED gauges in the ’80s, but the trend didn’t last long. It’s only in the last decade or so that LCD digital gauges have really started to take off in premium cars. [Josh] is putting a modern engine and drivetrain into his classic Triumph GT6, and realised that he’d have to scrap the classic mechanical gauge setup. After not falling in love with anything off the shelf, he decided to whip up his own solution from scratch.
The heart of the build is a Raspberry Pi 4, which interfaces with the car’s modern aftermarket ECU via CANBUS thanks to the PiCAN3 add-on board. Analog sensors, such as those for oil pressure and coolant temperature, are interfaced with a Teensy 4.0 microcontroller which has the analog to digital converters necessary to do the job. Display is via a 12.3″ super-wide LCD sourced off Aliexpress, with the graphics generated by custom PixiJS code running in Chromium under X.
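For a flavor of what the data-handling side of such a dash involves, here’s a minimal Python sketch of decoding sensor values from an 8-byte CAN frame payload. The frame layout (RPM, coolant temperature, oil pressure, and battery voltage as little-endian 16-bit fields) is invented for illustration and is not [Josh]’s actual ECU mapping:

```python
import struct

# Hypothetical 8-byte payload: four little-endian uint16 fields.
# The scaling factors are also assumptions, not any real ECU's mapping.
def decode_engine_frame(payload: bytes) -> dict:
    rpm, temp_tenths, oil_kpa, volts_tenths = struct.unpack('<HHHH', payload)
    return {
        'rpm': rpm,                      # revolutions per minute
        'coolant_c': temp_tenths / 10,   # sent in tenths of a degree Celsius
        'oil_kpa': oil_kpa,              # kilopascals
        'battery_v': volts_tenths / 10,  # sent in tenths of a volt
    }

# A frame as a hypothetical ECU might broadcast it
frame = struct.pack('<HHHH', 3500, 905, 310, 142)
gauges = decode_engine_frame(frame)
assert gauges == {'rpm': 3500, 'coolant_c': 90.5, 'oil_kpa': 310, 'battery_v': 14.2}
```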
The result is comparable with digital displays in many other modern automobiles, speaking to [Josh]’s abilities not just as a programmer but a graphic designer, too. As a bonus, if he gets sick of the design, it’s trivial to change the graphics without having to dig into the car’s actual hardware.
Spinning Up a Water Cooled 3D Printed Stirling Engine
By Tom Nardi
The Stirling external combustion engine has fascinated gear heads since its inception, and while the technology has never enjoyed widespread commercialization, there’s a vibrant community of tinkerers who build and test their own takes on the idea. [Leo Fernekes] has been working on a small Stirling engine made from 3D printed parts and common hardware components, and in his latest video he walks viewers through the design and testing process.
We’ve seen Stirling engines with 3D printed parts before, but in most cases, they are just structural components. This time, [Leo] really wanted to push what could be done with plastic parts, so everything from the water jacket for the cold side of the cylinder to the gears and connecting rods of the rhombic drive has been printed. Beyond the bearings and rods, the most notable non-printed component is the stainless steel spice shaker that’s being used as the cylinder.
The piston is made of constrained steel wool.
Mating the hot metal cylinder to the 3D printed parts naturally introduced some problems. The solution [Leo] came up with was to design a toothed collar to hold the cylinder, which reduces the surface area that’s in direct contact. He then used a piece of empty SMD component feed tape as an insulator between the two components, and covered the whole joint in high-temperature silicone.
Like many homebrew Stirling engines, this one isn’t perfect. It vibrates too much, some of the internal components have a tendency to melt during extended runs, and in general, it needs some fine tuning. But it runs, and in the end, that’s really the most important thing with a project like this. Improvements will come with time, especially once [Leo] finishes building the dynamometer he hopes will give him some solid data on how the engine’s overall performance is impacted as he makes changes.
Direct Memory Access: Data Transfer Without Micro-Management
By Maya Posch
In the simplest computer system architecture, all control lies with the CPU (Central Processing Unit). This means not only the execution of commands that affect the CPU’s internal register or cache state, but also the transferring of any bytes from memory to devices, such as storage and interfaces like serial, USB or Ethernet ports. This approach is called ‘Programmed Input/Output’, or PIO, and was used extensively into the early 1990s for, for example, PATA storage devices, including ATA-1, ATA-2 and CompactFlash.
Obviously, if the CPU has to handle each memory transfer, this begins to impact system performance significantly. For each memory transfer request, the CPU has to interrupt other work it was doing, set up the transfer and execute it, and restore its previous state before it can continue. As storage and external interfaces began to get faster and faster, this became less acceptable. Instead of PIO taking up a few percent of the CPU’s cycles, a big transfer could take up most cycles, making the system grind to a halt until the transfer completed.
DMA (Direct Memory Access) frees the CPU from these menial tasks. With DMA, peripheral devices do not have to ask the CPU to fetch some data for them, but can do it themselves. Unfortunately, this means multiple systems vying for the same memory pool’s content, which can cause problems. So let’s look at how DMA works, with an eye to figuring out how it can work for us.
Hardware Memcpy
At the core of DMA is the DMA controller: its sole function is to set up data transfers between I/O devices and memory. In essence it functions like the memcpy function we all know and love from C. This function takes three parameters: a destination, a source and how many bytes to copy from the source to the destination.
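The analogy is close enough that you can demonstrate it from Python, where ctypes exposes the same three-parameter copy (shown here with memmove, memcpy’s bounds-tolerant sibling):

```python
import ctypes

# The classic three parameters: destination, source, byte count
src = bytes(range(16))
dst = ctypes.create_string_buffer(16)
ctypes.memmove(dst, src, 16)

assert dst.raw == src  # every byte arrived intact
```

A DMA controller does conceptually the same job, except the "function call" is a set of register writes and the copy happens without the CPU touching the data.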
Take for example the Intel 8237: this is the DMA controller from the Intel MCS 85 microprocessor family. It features four DMA channels (DREQ0 through DREQ3) and was famously used in the IBM PC and PC XT. By chaining multiple 8237 ICs one can increase the number of DMA channels, as was the case in the IBM PC AT system architecture. The 8237 datasheet shows what a basic (single) 8237 IC integration in an 8080-level system looks like:
In a simple request, the DMA controller asks the CPU to relinquish control over the system buses (address, data and control) by pulling HRQ high. Once granted, the CPU will respond on the HLDA pin, at which point the outstanding DMA requests (via the DREQx inputs) will be handled. The DMA controller ensures that after holding the bus for one cycle, the CPU gets to use the bus every other cycle, so as not to congest the bus with potentially long-running requests.
The 8237 DMA controller supports single byte transfers, as well as block transfers. A demand mode also allows for continuous transfers. This allowed for DMA transfers on the PC/PC AT bus (‘ISA’).
Fast-forward a few decades, and the DMA controller in the STM32F7 family of Cortex-M-based microcontrollers is at once very similar and very different. This MCU features not just one DMA controller, but two (DMA1, DMA2), each of which is connected to the internal system buses, as described in the STM32F7 reference manual (RM0385).
In this DMA controller the concept of streams is introduced, where each of the eight streams supports eight channels. This allows for multiple devices to connect to each DMA controller. In this system implementation, only DMA2 can perform memory-to-memory transfers, as only it is connected to the memory (via the bus matrix) on both of its AHB interfaces.
As with the Intel 8237 DMA controller, each channel is connected to a specific I/O device, giving it the ability to set up a DMA request. This is usually done by sending instructions to the device in question, such as setting bits in a register, or using a higher-level interface, or as part of the device or peripheral’s protocol. Within a stream, however, only one channel can be active at any given time.
Unlike the more basic 8237, however, this type of DMA controller can also use a FIFO buffer for features such as changing the transfer width (byte, word, etc.) if this differs between the source and destination.
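Width conversion is conceptually just repacking: the FIFO gathers, say, four bytes from an 8-bit peripheral and emits them as a single 32-bit word on the memory side. A quick sketch of that gathering step:

```python
import struct

# Four bytes trickle in from an 8-bit peripheral...
fifo = bytes([0x11, 0x22, 0x33, 0x44])

# ...and leave the FIFO as one little-endian 32-bit word
(word,) = struct.unpack('<I', fifo)
assert word == 0x44332211
```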
When it comes to having multiple DMA controllers in a system, some kind of priority system always ensures that there’s a logical order. For channels, either the channel number determines the priority (as with the 8237), or it can be set in the DMA controller’s registers (as with the STM32F7). Multiple DMA controllers can be placed in a hierarchy that ensures order. For the 8237 this is done by having the cascaded 8237s each use a DREQx and DACKx pin on the master controller.
Snooping the bus
Keeping cache data synchronized is essential.
So far this all seems fairly simple and straightforward: simply hand the DMA request over to the DMA controller and have it work its magic while the CPU goes off to do something more productive than copying bytes around. Unfortunately, there is a big catch here in the form of cache coherence.
As CPUs have gained more and more caches for instructions and data, ranging from the basic level 1 (L1) cache, to the more recent L2, L3, and even L4 caches, keeping the data in those caches synchronized with the data in main memory has become an essential feature.
In a single-core, single processor system this seems easy: you fetch data from system RAM, keep it hanging around in the cache and write it back to system RAM once the next glacially slow access cycle for that spot in system RAM opens up again. Add a second core to the CPU, with its own L1 and possibly L2 cache, and suddenly you have to keep those two caches synchronized, lest any multi-threaded software begins to return some really interesting results.
Now add DMA to this mixture, and you get a situation where not just the data in the caches can change, but the data in system RAM can also change, all without the CPU being aware. To prevent CPUs from using outdated data in their caches instead of using the updated data in RAM or a neighboring cache, a feature called bus snooping was introduced.
What this essentially does is keep track of which memory addresses are in a cache, while monitoring any write requests to RAM or CPU caches and either updating all copies or marking those copies as invalid. Depending on the specific system architecture this can be done fully in hardware, or in a combination of hardware and software.
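A toy write-invalidate model shows the idea: every cache watches (“snoops”) writes on a shared bus and drops its own stale copy. Real protocols such as MESI track per-line states in hardware; this sketch only captures the invalidation behavior:

```python
class SnoopingCache:
    """Minimal write-invalidate cache attached to a shared 'bus' (a plain list)."""
    def __init__(self, bus):
        self.lines = {}      # address -> cached value
        self.bus = bus
        bus.append(self)

    def read(self, mem, addr):
        if addr not in self.lines:           # miss: fetch from main memory
            self.lines[addr] = mem[addr]
        return self.lines[addr]

    def write(self, mem, addr, value):
        for cache in self.bus:               # snoop: invalidate everyone else's copy
            if cache is not self:
                cache.lines.pop(addr, None)
        self.lines[addr] = value
        mem[addr] = value                    # write-through, for simplicity

bus, mem = [], {0x100: 42}
a, b = SnoopingCache(bus), SnoopingCache(bus)
assert a.read(mem, 0x100) == 42 and b.read(mem, 0x100) == 42
a.write(mem, 0x100, 99)          # b's copy is invalidated on the spot...
assert b.read(mem, 0x100) == 99  # ...so its next read fetches fresh data
```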
Only the Beginning
It should be clear at this point that every DMA implementation is different, depending on the system it was designed for and the needs it seeks to fulfill. While an IBM PC’s DMA controller and the one in an ARM-based MCU are rather similar in their basic design and don’t stray that far apart in terms of total feature set, the DMA controllers which can be found in today’s desktop computers as well as server systems are a whole other ballgame.
Instead of dealing with a 100 Mbit Ethernet connection, or USB 2.0 Full Speed’s blistering 12 Mbit, DMA controllers in server systems are forced to contend with 40 Gbit and faster Ethernet links, countless lanes of fast-clocked PCIe 4.0-based NVMe storage and much more. None of which should bother the CPU overly much, if at all possible.
In the desktop space, the continuing push for more performance, especially in gaming, has led to an interesting new chapter in DMA in the form of storage-to-device requests, e.g. NVidia’s RTX IO technology. RTX IO itself is based on Microsoft’s DirectStorage API. What RTX IO does is allow the GPU to handle as much of the communication with storage and the decompression of assets as possible without involving the CPU. This saves the steps of copying data from storage into system RAM, decompressing it with the CPU, and then writing the data again to the GPU’s RAM.
Attack of the DMA
Any good and useful feature of course has to come with a few trade-offs, and for DMA that can be mostly found in things like DMA attacks. These make use of the fact that DMA bypasses a lot of security with its ability to directly write to system memory. The OS normally protects against accessing sensitive parts of the memory space, but DMA bypasses the OS, rendering such protections useless.
The good news here is that in order to make use of a DMA attack, an attacker has to gain physical access to an I/O port on the device which uses DMA. The bad news is that any mitigations are unlikely to have any real impact without compromising the very thing that makes DMA such an essential feature of modern computers.
Although USB (unlike FireWire) does not natively use DMA, the addition of PCIe lanes to USB-C connectors (with Thunderbolt 3/USB 4) means that a DMA attack via a USB-C port could be a real possibility.
Wrapping Up
As we have seen over the past decades, having specialized hardware is highly desirable for certain tasks. Those of us who had to suffer through home computers which had to drop rendering to the screen while spending all CPU cycles on obtaining data from a floppy disk or similar surely have learned to enjoy the benefits that a DMA-filled world with dedicated co-processors has brought us.
Even so, there are certain security risks that come with the use of DMA. How much of a concern they are depends on the application, circumstances, and mitigation measures. Much like the humble memcpy() function, DMA is a very powerful tool that can be used for great good or great evil, depending on how it is used. Even as we celebrate its existence, it’s worth considering its security impact in any new system.
Scanimate Analog Video Synths Produced Oceans of Motion Graphics
By Kristina Panos
Why doesn’t this kind of stuff ever happen to us? One lucky day back in high school, [Dave Sieg] stumbled upon a room full of new equipment and a guy standing there scratching his head. [Dave]’s curiosity about this fledgling television studio was rewarded when that guy asked [Dave] if he wanted to help set it up. From that point on, [Dave] had the video bug. The rest is analog television history.
Today, [Dave] is the proud owner and maintainer of two Scanimate machines — the first R&D prototype, and the last one of only eight ever produced. The Scanimate is essentially an analog synthesizer for video signals, and they made it possible to move words and pictures around on a screen much more easily than ever before. Any animated logo or graphics seen on TV from the mid-1970s to the mid-80s was likely done with one of these huge machines, and we would jump quite high at the chance to fiddle with one of them.
Analog television signals are continuously variable, and much like with an analog music synthesizer, any changes imposed on the signal are immediately discernible. In the first video below, [Dave] introduces the Scanimate and plays around with the Viceland logo a bit.
Stick around for the second and third videos where he superimposes the Scanimate’s output onto the video he’s making, all the while twiddling knobs to add oscillators and thoroughly explaining what’s going on. If you’ve ever played around with Lissajous patterns on an oscilloscope, you’ll really have a feel for what’s happening here. In the fourth video, [Dave] dives deeper and dissects the analog circuits that make up this fantastic piece of equipment.
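Those oscilloscope Lissajous figures come from feeding two sine waves to the X and Y deflection axes, which is easy to reproduce numerically:

```python
import math

def lissajous(a, b, phase, n=1000):
    """Sample n points of x = sin(a*t + phase), y = sin(b*t) over one full cycle."""
    points = []
    for i in range(n):
        t = 2 * math.pi * i / n
        points.append((math.sin(a * t + phase), math.sin(b * t)))
    return points

# A 3:2 frequency ratio with a 90-degree phase offset gives the classic woven loop
pts = lissajous(3, 2, math.pi / 2)
assert len(pts) == 1000
assert all(-1 <= x <= 1 and -1 <= y <= 1 for x, y in pts)
```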
Car manufacturers will often tout a vehicle’s features to appeal to the market, and this often leads to advertisements featuring a cacophony of acronyms and buzzwords to dazzle and confuse the prospective buyer. This can be particularly obvious when looking at drivelines. The terms four-wheel drive, all-wheel drive, and full-time and part-time are bandied about, but what do they actually mean? Are they all the same, meaning all wheels are driven or is there more to it? Let’s dive into the technology and find out.
Part-Time 4WD
Part-time four-wheel drive is the simplest system, most commonly found on older off-road vehicles like Jeeps, Land Cruisers and Land Rovers up to the early 1990s, as well as pickup trucks and other heavy duty applications. In these vehicles, the engine sends its power to a transfer case, which sends an equal amount of torque to the front and rear differentials, and essentially ties their input shafts together. This is good for slippery off-road situations, as some torque is provided to both axles at all times. However, this system has the drawback that it can’t be driven in four-wheel drive mode at all times. With the front and rear differentials rotating together, any difference in rotational speed between the front and rear wheels — such as from turning a corner or uneven tyre wear — would cause a problem. The drive shaft going to one differential would want to turn further than the other, a problem known as wind-up.
Part-time 4WD systems are often found in older off-road vehicles and trucks, such as the Suzuki Samurai, early Land Cruisers, and Jeeps.
Wind-up causes transfer case components to snap or break. Thus, these systems should only be driven in four-wheel drive mode on loose surfaces, where the tyres can slip a little to avoid wind-up in the drivetrain. Hence, they are called part-time four-wheel drive systems, as the transfer case can be shifted into 2WD mode, sending power to just one axle for driving on sealed roads. This avoids the wind-up problem, but means that these systems only really add traction when engaged off-road on dirt, sand, or snow.
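The speed mismatch behind wind-up is easy to put numbers on. With the axles locked together through the transfer case, the front driveshaft is forced to turn at the rear’s rate even though the front axle sweeps a larger arc in a corner (toy figures below, not any particular vehicle):

```python
import math

# Assumed geometry: 2.5 m wheelbase, rear axle following a 10 m radius turn
wheelbase, rear_radius = 2.5, 10.0

# The front axle's path radius is the hypotenuse of the rear radius and wheelbase
front_radius = math.hypot(rear_radius, wheelbase)
ratio = front_radius / rear_radius

# Around a full circle, the front driveline "wants" to turn ~3% further than the rear
assert 1.03 < ratio < 1.04
```

On tarmac that 3% has nowhere to go, so it twists up the driveline; on dirt, the tyres simply slip and release it.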
These systems also often incorporate a “low-range” gear in the transfer case, which gears down the drive ratio to the wheels, allowing much greater torque to be generated at the tyre and the vehicle to be driven more slowly. This is of major benefit in low-traction situations and when trying to slowly negotiate complex trails full of obstacles and ruts.
All-Wheel Drive, or Full-Time 4WD
Obviously, in some situations, it’s desired to drive all four wheels of the vehicle even on high-traction, sealed surfaces. Thus, a solution to get around the problem of driveline wind-up is to install a third differential in a vehicle, in between the front and rear axles, in place of the transfer case. This center differential can allow for rotational speed differences between the front and rear axles, thus making it possible to drive all four wheels even on paved roads.
AWD is more typically found in more car-like vehicles, as well as more modern off-road vehicles. In the latter application, center differential locks or limited-slip differentials are used to ensure good off-road performance.
However, this comes with the drawback that the system can only deliver as much torque to one axle as is given to the other, due to the way differentials work. Thus, for example, if the front wheels are slipping, the rear axle will only receive as much torque as the front, and thus the vehicle will not be able to gain traction.
A variety of solutions are used to get around this problem. Off-road vehicles such as modern Land Cruisers and Range Rovers will have a switchable lock in the center differential, allowing equal torque to be sent to both ends when necessary. Alternatively, any one of a variety of limited-slip differentials can be installed in the center differential location, allowing a variable torque split depending on conditions. These systems are less capable off-road, but are less fuss for typical driving conditions. They are most commonly installed in cars intended primarily for on-road use, but which see occasional slippery conditions such as snow and ice. They’re also used in performance cars that drive all four wheels to put down power as effectively as possible for faster acceleration and better grip.
Front-Wheel Drive Based Systems
The other type of popular all-wheel drive system is the front-wheel drive based one, most notably the Haldex type used in many smaller cars. These are installed most commonly in performance hatchbacks from brands like Volkswagen, Audi, and Mercedes. The systems work by having a typical front-wheel drive engine and transmission layout, with an extra drive shaft running to a special coupling which is then connected to the rear differential. Under normal conditions, the coupling, containing clutch packs, stays open, sending no torque to the rear wheels. However, when the front wheels start to slip and spin faster than the rears, the clutch pack is progressively engaged, sending torque rearward. The clutch pack is designed to operate at various levels of slip, allowing a variable amount of torque to be sent to the rear, anywhere from a 100:0 front-to-rear split up to a full 50:50 split. The Haldex system is often mocked and referred to as “faux-wheel drive”, as it only engages under such conditions. However, it is possible to engage the system manually using hacked controllers.
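The behavior can be caricatured in a few lines of Python. The slip-to-engagement curve below is a made-up toy model, not Haldex’s actual control law, but it captures the shape of the idea: no slip means no rear torque, and heavy slip saturates at a 50:50 split:

```python
def rear_torque_fraction(front_speed, rear_speed, gain=2.0):
    """Toy Haldex-style coupling: engagement grows with front overspeed, capped at 50%."""
    slip = max(0.0, front_speed - rear_speed) / max(front_speed, 1e-9)
    return min(0.5, gain * slip)

assert rear_torque_fraction(100, 100) == 0.0   # gripping: pure front-wheel drive
assert rear_torque_fraction(100, 90) == 0.2    # mild slip: some torque heads rearward
assert rear_torque_fraction(100, 60) == 0.5    # heavy slip: full 50:50 split
```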
Haldex AWD is most commonly found on performance hatchbacks, though the technology is also used on mid-engined supercars, too.
These systems are also used in rear-wheel drive based applications — such as the Bugatti Veyron and Lamborghini Aventador. The concept is the same, but as these vehicles are mid-engined, the coupling is instead installed on the front axle, with the rear axle getting the majority of the torque under most conditions.
Conclusion
Automotive marketing will always rely on buzzwords because it’s simply not practical to explain the mechanical specifics of a given vehicle’s driveline in a 30-second ad. However, armed with this knowledge, you should now be confident to shop for a vehicle that meets your needs based on what it’s got under the frame, not just on whatever fancy words are emblazoned on the badging. Be particularly wary of manufacturers that twist widely-accepted naming conventions to trick unknowing customers, and look at the components installed on the vehicle rather than the marketing terms to get a full understanding of how a given car will perform. Once you know what you’re looking for, you’ll be all the more ready to make the right decision!
Web Tool Cranks Up The Power on DJI’s FPV Drone
By Tom Nardi
Apparently, if the GPS on your shiny new DJI FPV Drone detects that it’s not in the United States, it will turn down its transmitter power so as not to run afoul of the more restrictive radio limits elsewhere around the globe. So while all the countries that have put boots on the Moon get to enjoy the full 1,412 mW of power the hardware is capable of, the drone’s software limits everyone else to a paltry 25 mW. As you can imagine, that leads to a considerable performance penalty in terms of range.
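The gap is worth quantifying. In free space, received signal strength falls with the square of distance, so usable range scales roughly with the square root of transmit power (ignoring antennas, terrain, and receiver sensitivity):

```python
import math

full_mw, limited_mw = 1412, 25

power_ratio = full_mw / limited_mw        # ~56x more power in "FCC mode"
db_gain = 10 * math.log10(power_ratio)    # ~17.5 dB
range_gain = math.sqrt(power_ratio)       # free-space range scales with sqrt(P)

assert 56 < power_ratio < 57
assert 17 < db_gain < 18
assert 7 < range_gain < 8                 # roughly 7.5x the usable range
```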
But not anymore. A web-based tool called B3YOND promises to reinstate the full power of your DJI FPV Drone no matter where you live by tricking it into believing it’s in the USA. Developed by the team at [D3VL], the unlocking tool uses the new Web Serial API to send the appropriate “FCC Mode” command to the drone’s FPV goggles over USB. Everything is automated, so this hack is available to anyone who’s running a recent version of Chrome or Edge and can click a button a few times.
There’s no source code available yet, though the page does mention they will be putting up a GitHub repository soon. In the meantime, [D3VL] have documented the command packet that needs to be sent to the drone over its MODBUS-like serial protocol for others who might want to roll their own solution. There’s currently an offline Windows-only tool up for download as well, and it sounds like stand-alone versions for Mac and Android are also in the works.
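The packet format itself isn’t reproduced here, but MODBUS-style protocols conventionally guard each frame with a CRC-16 using the reflected 0xA001 polynomial. A sketch of that checksum, on the assumption (and it is only an assumption) that DJI’s variant does the same:

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: initial value 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Standard check value for this CRC variant
assert crc16_modbus(b'123456789') == 0x4B37
```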
It should probably go without saying that if you need to use this tool, you’ll potentially be violating some laws. In many European countries, 25 mW is the maximum unlicensed transmitter power allowed for UAVs, so that’s certainly something to keep in mind before you flip the switch. Hackaday isn’t in the business of dispensing legal advice, but that said, we wouldn’t want to be caught transmitting at nearly 60 times the legal limit.
Boston Dynamics Stretch Robot Trades Lab Coat For Work Uniform
By Roger Cheng
Boston Dynamics has always built robots with agility few others could match. While great for attention-getting demos, from outside the company it hasn’t been clear how they’ll translate acrobatic skills into revenue. Now we’re getting a peek at a plan in an interview with IEEE Spectrum about their new robot Stretch.
Most Boston Dynamics robots have been research projects, too expensive and not designed for mass production. The closest they’ve come to date was Spot, which was offered for sale and picked up a few high profile jobs like inspecting SpaceX test sites. But Spot was still pretty experimental, without an explicit application. In contrast, Stretch has a laser-sharp focus made clear by its official product page: this robot will be looking for warehouse jobs. Specifically, Stretch is designed to handle boxes up to 50 lbs (23 kg), loading and unloading them to and from pallets, conveyor belts, trucks, or shipping containers. These jobs are repetitive, tedious, back-breaking work with a high injury rate, a perfect opportunity for robots.
But warehouse logistics aren’t as tightly structured as factory automation, demanding more adaptability than typical industrial robots can offer. It’s a niche Boston Dynamics learned it could fill after releasing an earlier demo video showing their research robot Atlas moving some boxes around: they started receiving inquiries into how much that would cost. Atlas is not a product, but wheels were set in motion, leading to their Handle robot. Learning from what Handle did well (and not so well) in a warehouse environment, the design evolved into today’s Stretch. The ostrich-like Handle prototype is now relegated to further research into wheeled-legged robots and the occasional fun dance video.
The Stretch preproduction prototypes visible in these videos lack the acrobatic flair of their predecessors, but they still have the perception and planning smarts that made those robots possible. Those skills are just being applied to a narrower problem scope. Once production models are on the job, we look forward to reading some work performance reviews.
Sawblade Turned Beyblade Looks Painful To Tangle With
By Lewin Day
Beyblades were a huge craze quite some years back. Children battled with spinning tops in small plastic arenas, or, if their local toy stores were poorly merchandised, in salad bowls and old pie dishes. The toys were safe enough, despite their destructive ethos, by virtue of being relatively small and lightweight. This “Beyblade” from [i did a thing] is anything but, however.
The build begins with a circular saw blade over 1 foot in diameter, replete with many angry cutting teeth that alone portend danger for any individual unlucky enough to cross its path. Saw blades tend to cut slowly and surely, however, so to allow the illicit Bey to deal more traumatic damage, a pair of steel scraps is welded on to deliver blunt striking blows as well. This has the added benefit of adding more mass to the outside of the ‘blade, increasing the energy stored as it spins.
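A back-of-envelope calculation shows why welding mass at the rim pays off. For the same mass and speed, a rim-loaded rotor stores twice the kinetic energy of a uniform disc (the numbers below are rough guesses, not measurements from the video):

```python
import math

# Assumed figures: ~30 cm radius, 1.5 kg rotating mass, 5000 RPM
radius_m, mass_kg, rpm = 0.15, 1.5, 5000
omega = rpm * 2 * math.pi / 60            # angular velocity in rad/s

disc_I = 0.5 * mass_kg * radius_m ** 2    # moment of inertia: uniform solid disc
rim_I = mass_kg * radius_m ** 2           # all mass concentrated at the rim

disc_E = 0.5 * disc_I * omega ** 2        # kinetic energy, E = I * omega^2 / 2
rim_E = 0.5 * rim_I * omega ** 2

assert abs(rim_E / disc_E - 2.0) < 1e-9   # rim loading doubles stored energy
assert disc_E > 2000                      # kilojoule-class even in the disc case
```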
With the terrifying contraption spun up to great RPM by a chainsaw reeling in string, it’s able to demolish cheap wood and bone with little resistance. Shrapnel is thrown in many directions as the spinner attacks various objects, from a melon to an old CRT TV. We’d love to see the concept taken further, with an even deadlier design spun up to even higher speeds, ideally with a different tip that creates a more aggressive motion across the floor.
Wires vs Words — PCB Routing in Python
By Chris Lott
Preferring to spend hours typing code instead of graphically pushing traces around in a PCB layout tool, [James Bowman] over at ExCamera Labs has developed CuFlow, a method for routing PCBs in Python. Whether or not you’re on-board with the concept, you have to admit the results look pretty good.
GD3X Dazzler PCB routed using CuFlow
Key to this project is a concept [James] calls rivers — the Dazzler board shown above contains only eight of them. Connections get to their destination by taking one or more of these rivers, which can be split, joined, and merged along the way as needed in a very Pythonic manner. River navigation is performed using Turtle graphics-like commands such as left(90) and the appropriately named shimmy(d) that aligns two displaced rivers. He also makes extensive use of pin / gate swapping to make the routing smoother, and there’s a nifty shuffler feature which arbitrarily reorders signals in a crossbar manner. Routing to complex packages, like the BGA shown, is made easier by embedding signal escapes in each part’s library definition.
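The turtle-style flavor is easy to picture with a toy path builder (an illustrative sketch, not CuFlow’s actual River API):

```python
import math

class River:
    """Toy turtle-style path: records points as it moves, chainable like the real thing."""
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading
        self.points = [(x, y)]

    def forward(self, d):
        self.x += d * math.cos(math.radians(self.heading))
        self.y += d * math.sin(math.radians(self.heading))
        self.points.append((self.x, self.y))
        return self

    def left(self, angle):
        self.heading += angle
        return self

    def right(self, angle):
        return self.left(-angle)

# Route 10 units east, turn, then 5 units north
r = River().forward(10).left(90).forward(5)
assert abs(r.x - 10) < 1e-9 and abs(r.y - 5) < 1e-9
```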
We completely agree with [James]’s frustration with so many schematics these days being nothing more than visual net lists, not representing the logical flow and function of the design at all. However, CuFlow further obfuscates the interconnections by burying them deep inside the wire connection details. Perhaps if CuFlow were melded with something like the SKiDL Python schematic description language, the concept would gain more traction?
That said, we like the concept and routing methodologies he has implemented in CuFlow. Check it out yourself by visiting the GitHub repository, where he writes in more detail about his motivation and various techniques. You may also remember two of [James]’s embedded systems development tools that we covered back in 2018 — the SPI Driver and the I2C Driver.
Custom Built 12-Port A/V Switch Keeps CRT Well Fed
By Tom Nardi
Classic gaming aficionados who prefer to play on real hardware know the struggle of getting their decades-old consoles connected to a modern TV. Which is why many gamers choose to keep a period-correct CRT TV around for when they want to take a walk down memory lane. Unfortunately those old TVs usually didn’t offer more than a few A/V ports on the back, so you’ll probably need to invest in an A/V switch to keep them all hooked up at once.
That’s the situation [Thomas Sowell] found himself in, except he couldn’t find one with enough ports. Rather than chain switches together, he decided to build his own custom 12-port console selector. With an integrated amplifier to keep everything looking sharp, a handsome walnut and metal enclosure, and a slick graphical interface that shows the logo of the currently selected console on a Vacuum Fluorescent Display (VFD), the final product is a classic gamer’s dream come true.
A peek under the hood.
To switch the audio [Thomas] is using a pair of ADG1606 16-channel analog multiplexers, while video is shuffled around with four MAX4315 8-channel video multiplexer-amplifiers. The math might seem a bit off at first, but he’s using one ADG1606 for each stereo channel, and since the switch is for S-Video, each input has separate luminance and color signals that need to be handled individually. The multiplexers are flipped with an ATmega2561 microcontroller, which is also responsible for reading user input from a rotary encoder on the front of the case and displaying the appropriate console logo on the 140×32 Noritake VFD.
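The bookkeeping for steering one of 12 ports through banks of 8-channel parts comes down to a divmod. The wiring assignment below (luma on muxes 0 and 1, chroma on muxes 2 and 3) is an assumption for illustration, not necessarily how [Thomas] wired it:

```python
def svideo_mux_select(port):
    """Map a port number (0-11) to (mux index, input channel) pairs for luma and chroma."""
    mux_index, channel = divmod(port, 8)      # 8 inputs per MAX4315
    return {
        'luma':   (mux_index, channel),       # assumed: first bank carries luminance
        'chroma': (mux_index + 2, channel),   # assumed: matching mux in the chroma bank
    }

assert svideo_mux_select(0) == {'luma': (0, 0), 'chroma': (2, 0)}
assert svideo_mux_select(11) == {'luma': (1, 3), 'chroma': (3, 3)}
```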
You may be surprised to find that [Thomas] considered himself an electronics beginner when he started this project, and that this is only the second PCB he’s ever designed. Was this a bold second project? Sure. But it also speaks to how far DIY electronics has come over the years. Powerful open source tools, modular components, and of course a community of creative folks willing to share their knowledge and designs have gone a long way towards redefining what’s possible for the individual hacker and maker.