Wrap Your Mind Around Neural Networks

http://feedproxy.google.com/~r/hackaday/LgoM/~3/OEak_Jd6LG4/

http://hackaday.com/?p=253072

Artificial Intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scary accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before — if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!

Advertisements for products we’re interested in pop up on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something that we want, even though we never realized we wanted it until we see it. This is not coincidental, but stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way that anyone can understand. It is the goal of this article to apply a similar approach to Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term used to describe something whose underpinnings are relatively simple.

Machine Learning

When we program a machine to perform a task, we write the instructions and the machine performs them. For example, LED on… LED off… there is no need for the machine to know the expected outcome after it has completed the instructions. There is no reason for the machine to know if the LED is on or off. It just does what you told it to do. With machine learning, this process is flipped. We tell the machine the outcome we want, and the machine ‘learns’ the instructions to get there. There are several ways to do this, but let us focus on an easy example:

Early neural network from MIT

If I were to ask you to make a little robot that can guide itself to a target, a simple way to do this would be to put the robot and target on an XY Cartesian plane, and then program the robot to go so many units on the X axis, and then so many units on the Y axis. This straightforward method has the robot simply carrying out instructions, without actually knowing where the target is.  It works only when you know the coordinates for the starting point and target. If either changes, this approach would not work.

Machine Learning allows us to deal with changing coordinates. We tell our robot to find the target, and let it figure out, or learn, its own instructions to get there. One way to do this is to have the robot measure the distance to the target, move in a random direction, measure the distance again, then move back to where it started and record the result. Repeating this process gives us several distance measurements taken from a single fixed coordinate. After a number of measurements have been taken, the robot moves in the direction where the distance to the target is shortest, and repeats the sequence. This will eventually allow it to reach the target. In short, the robot is simply using trial and error to ‘learn’ how to get to the target. See, this stuff isn’t so hard after all!
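As a sketch (not any particular robot’s firmware — the coordinates, step size, and number of probe directions are all made up for illustration), the trial-and-error search might look like this in Python:

```python
import math
import random

random.seed(42)  # deterministic runs for the demonstration

def distance(pos, target):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(target[0] - pos[0], target[1] - pos[1])

def seek_target(start, target, step=1.0, probes=8, max_moves=1000):
    """Probe several random directions, 'move back', then commit to the
    direction that left the robot closest to the target."""
    pos, moves = start, 0
    while distance(pos, target) > step and moves < max_moves:
        best = None
        for _ in range(probes):
            angle = random.uniform(0.0, 2.0 * math.pi)
            candidate = (pos[0] + step * math.cos(angle),
                         pos[1] + step * math.sin(angle))
            d = distance(candidate, target)
            if best is None or d < best[0]:
                best = (d, candidate)  # remember the shortest distance seen
        pos = best[1]                  # move in the best direction found
        moves += 1
    return pos, moves

pos, moves = seek_target((0.0, 0.0), (10.0, 5.0))
```

No maps, no path planning — just measure, guess, and keep whatever guess got you closer.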

This “learning by trial-and-error” idea can be represented abstractly in something that we’ve all heard of — a neural network.

Neural Networks For Dummies

Neural networks get their name from the mass of neurons in your noggin. While the overall network is absurdly complex, the operation of a single neuron is simple. It’s a cell with several inputs and a single output, with chemical-electrical signals providing the IO. The state of the output is determined by the number of active inputs and the strength of those inputs. If there are enough active inputs, a threshold will be crossed and the output will become active. Each output of a neuron acts as the input to another neuron, creating the network.

Perceptron diagram via How to Train a Neural Network in Python by Prateek Joshi

Recreating a neuron (and therefore a neural network) in silicon should also be simple. You have several inputs into a summation thingy. Add the inputs up, and if they exceed a specific threshold, output a one. Else output a zero. Bingo! While this lets us sorta mimic a neuron, it’s unfortunately not very useful. In order to make our little silicon neuron worth storing in FLASH memory, we need to make the inputs and outputs less binary… we need to give them strengths, or the more commonly known title: weights.
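That add-and-threshold behaviour fits in a couple of lines of Python (the input patterns and threshold below are arbitrary examples):

```python
def binary_neuron(inputs, threshold):
    """Count the active (1) inputs; fire if the count reaches the
    threshold, otherwise output a zero."""
    return 1 if sum(inputs) >= threshold else 0

binary_neuron([1, 1, 0, 1], threshold=3)  # three active inputs: fires (1)
binary_neuron([1, 0, 0, 0], threshold=3)  # only one active input: stays 0
```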

In the late 1950s, a man by the name of Frank Rosenblatt invented this thing called a Perceptron. The perceptron is just like our little silicon neuron from the previous paragraph, with a few exceptions — the most important being that the inputs have weights. With the introduction of weights and a little feedback, we gain a most interesting ability… the ability to learn.

Source via KDnuggets

Rewind back to our little robot that learns how to get to the target. We gave the robot an outcome, and had it write its own instructions to learn how to achieve that outcome by a trial-and-error process of random movements and distance measurements in an XY coordinate system. The idea of a perceptron is an abstraction of this process. The output of the artificial neuron is our outcome. We want the neuron to give us an expected outcome for a specific set of inputs. We achieve this by having the neuron adjust the weights of the inputs until it achieves the outcome we want.

Adjusting the weights is done by a process called backpropagation, which is a form of feedback. You have a set of inputs, a set of weights, and an outcome. We calculate how far the outcome is from where we want it, and then use the difference (known as the error) to adjust the weights, using a mathematical technique known as gradient descent. This ‘weight adjusting’ process is often called training, but it is nothing more than a trial-and-error process, just like with our little robot.
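Here is one way to sketch that training loop in Python, using the classic single-neuron perceptron learning rule (a simpler relative of full backpropagation) to learn a logical OR from examples alone. The learning rate and epoch count are arbitrary choices:

```python
def perceptron(inputs, weights, bias):
    """Weighted sum of the inputs pushed through a hard threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else 0

def train(samples, learning_rate=0.1, epochs=20):
    """Perceptron rule: nudge each weight by the error times its input,
    over and over, until the outputs match the targets."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - perceptron(inputs, weights, bias)
            for i, x in enumerate(inputs):
                weights[i] += learning_rate * error * x
            bias += learning_rate * error
    return weights, bias

# (input pair, expected output) examples for a logical OR
or_samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(or_samples)
```

Nobody tells the neuron what OR means; it is only ever shown examples and its own errors, yet the weights settle on values that reproduce the truth table.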

Deep Learning

Deep Learning seems to have more definitions than IoT these days. But the simplest, most straightforward one I can find is this: a neural network with one or more hidden layers between the input and the output, used to solve complex problems. Basically, Deep Learning is just a complex neural network used to do stuff that’s really hard for traditional computers to do.

Deep Learning diagram via A Dummy’s Guide to Deep Learning by Kun Chen

The layers in between the input and output are called hidden layers, and they dramatically increase the complexity of the neural net. Each layer has a specific purpose, and the layers are arranged in a hierarchy. For instance, if we had a Deep Learning neural net trained to identify a cat in an image, the first layer might look for specific line segments and arcs. Layers higher in the hierarchy will look at the output of the first layer and try to identify more complex shapes, like circles or triangles. Even higher layers will look for objects, like eyes or whiskers. For a more detailed explanation of hierarchical classification techniques, be sure to check out my articles on invariant representations.
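To see why those in-between layers matter, here is a toy two-layer network in Python that computes XOR — something a lone perceptron famously cannot do. The weights are hand-picked for illustration rather than learned:

```python
def neuron(inputs, weights, bias):
    """A perceptron-style unit with a hard threshold."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def forward(x, layers):
    """Feed the inputs through each layer in turn; every layer's
    outputs become the next layer's inputs."""
    for layer in layers:
        x = [neuron(x, w, b) for w, b in layer]
    return x

# The hidden layer detects OR and AND; the output neuron fires on
# "OR but not AND", which is exactly XOR.
xor_net = [
    [([1, 1], -0.5), ([1, 1], -1.5)],  # hidden layer: OR unit, AND unit
    [([1, -1], -0.5)],                 # output layer
]
forward([1, 0], xor_net)  # -> [1]
forward([1, 1], xor_net)  # -> [0]
```

Each hidden unit answers a simpler sub-question, and the next layer combines those answers — the same divide-and-conquer idea the cat-recognizing hierarchy uses at a much larger scale.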

The exact output of a given layer is not known in advance because the network is trained via a trial-and-error process. Two identically structured Deep Learning neural networks trained on the same image can produce different outputs from their hidden layers. This brings up some uncomfortable issues, as MIT is finding out.

Now when you hear someone talk about machine learning, neural networks, and deep learning, you should have at least a vague idea of what it is and, more importantly, how it works. Neural Networks appear to be the next big thing, although they have been around for a long time now. Check out [Steven Dufresne’s] article on what has changed over the years, and jump into his tutorial on using TensorFlow to try your hand at machine learning.


Filed under: Featured, Interest, Original Art

The GNU GPL Is An Enforceable Contract At Last

http://feedproxy.google.com/~r/hackaday/LgoM/~3/XaONebMNS74/

http://hackaday.com/?p=258066

It would be difficult to imagine the technological enhancements to the world we live in today without open-source software. You will find it somewhere in most of your consumer electronics, in the unseen data centres of the cloud, in machines, gadgets, and tools, in fact almost anywhere a microcomputer is used in a product. The willingness of software developers to share their work freely under licences that guarantee its continued free propagation has been as large a contributor to the success of our tech economy as any hardware innovation.

Though open-source licences have been with us for decades now, there have been relatively few moments in which they have been truly tested in a court. There have been frequent licence violations in which closed-source products have been found to contain open-source software, but they have more often resulted in out-of-court settlement than lengthy public legal fights. Sometimes the open-source community has gained previously closed-source projects, when the violation involved software whose licence terms required that the whole project incorporating it carry the same licence. These terms are sometimes referred to as viral clauses by open-source detractors, and the most famous such licence is the GNU GPL, or General Public Licence. If you have ever installed OpenWRT on a router you will have been a beneficiary of this: the project has its roots in the closed-source firmware for a Linksys router that was found to contain GPL code.

Now we have news of an interesting milestone for the legal enforceability of open-source licences: a judge in California has ruled that the GPL is an enforceable contract. Previous case law had only gone as far as treating GPL violations as a copyright matter, while this case extends its protection to another level.

The case in question involves a Korean developer of productivity software, Hancom Office, who were found to have incorporated the open-source PostScript and PDF interpreter Ghostscript into their products without paying its developer a licence fee. Their use of Ghostscript thus falls under the GPL licencing of its open-source public version, and it was on this basis that Artifex, the developer of Ghostscript, brought the action.

It’s important to understand that this is not a win for Artifex; it is merely a decision on how the game can be played. They must now go forth and fight the case, but being able to do so on the basis of a contract breach rather than a copyright violation should help them, as well as all future GPL-licenced developers who find themselves in the same position.

We’re not lawyers here at Hackaday, but if we were to venture an opinion based on gut feeling it would be that we’d expect this case to end in the same way as so many others, with a quiet out-of-court settlement and a lucrative commercial licencing deal for Artifex. But whichever way it ends, the important precedent will have been set: the GNU GPL is now an enforceable contract in the eyes of the law. And that can only be a good thing.

Via Hacker News.

GNU logo, CC-BY-SA 2.0.


Filed under: news, software hacks

Gravity Defying Drips of a Bike Pump Controlled Fountain

http://feedproxy.google.com/~r/hackaday/LgoM/~3/5IRrh_cgxQY/

http://hackaday.com/?p=258454

People love to see a trick that fools their senses. This truism was in play at the Crash Space booth this weekend as [Steve Goldstein] and [Kevin Jordan] showed off a drip fountain controlled by a bike pump.

These optical illusion drip fountains use strobing light to seemingly freeze dripping water in mid-air. We’ve seen this before several times (the work of Hackaday alum [Mathieu Stephan] comes to mind) but never with a user input quite as delightful as a bike pump. It’s connected to an air pressure sensor that is monitored by the Arduino that strobes the lights. As someone works the pump, the falling droplets appear to slow, stop, and then begin flowing against gravity.
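The arithmetic behind the illusion is simple enough to sketch in Python. Treating the drops as falling at a constant speed is a simplification, and the speed and frequencies below are invented numbers rather than measurements from this build:

```python
def apparent_velocity(drop_speed, drip_freq, strobe_freq):
    """Between flashes each drop falls drop_speed / strobe_freq, while
    the pattern of drops repeats every drop_speed / drip_freq. Seen
    only at the flashes, the difference is the motion the eye
    perceives."""
    spacing = drop_speed / drip_freq                # gap between drops (m)
    drift_per_flash = drop_speed / strobe_freq - spacing
    return drift_per_flash * strobe_freq            # apparent speed (m/s)

apparent_velocity(1.0, 30.0, 30.0)  # matched strobe: 0.0, drops look frozen
apparent_velocity(1.0, 30.0, 31.0)  # faster strobe: negative, drops "rise"
```

Match the strobe to the drip rate and every flash catches a drop exactly where the previous one was, so the stream freezes; strobe slightly faster and each drop falls a little short of the last position, which the eye reads as drops climbing against gravity.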

Sadly, this phenomenon is quite difficult to capture on video, since the persistence of our vision is integral to the trick. The frame rate of the video above doesn’t quite mesh with the strobing, but look closely and you can still see the illusion at times.

In person, the effect is so perfect that it drew a crowd all day throughout the weekend. Kids were invited to run their fingers through the dripping stream to confirm that the water was indeed real. Even if you stick your hand into the illusion it doesn’t break the effect.

The fit and finish of the fountain is commendable. Dark acrylic makes up a triangular case in the shape of the hackerspace’s logo. Water droplets are produced by an oscillating pump and fall from the apex of the triangle. A martini glass at the bottom catches the drops with some steel wool to prevent splashes. The system to recirculate the water is completely hidden from view. It’s a piece of art and really tops off the overall experience. It was a Maker Faire Bay Area hit and rightly so!

As of yet, there are no details published for the build, but [Steve] and [Kevin] sounded like they plan to document their work so keep an eye on the Crash Space page.


Filed under: led hacks

Bitcoin Price Ticker

http://feedproxy.google.com/~r/hackaday/LgoM/~3/W5Sj8jEMg9c/

http://hackaday.com/?p=257936

Are you a Bitcoin miner or trader, but find yourself lacking the compulsive need to check exchange rates like the drug-fuelled daytraders of Wall Street? Fear not – you too can adorn your home or office with a Bitcoin Price Ticker! The post is in Italian but you can read a translated version here.

It’s a straightforward enough build – an Arduino compatible board with an onboard ESP8266 is hooked up with an HD44780-compatible LCD. It’s then a simple matter of scraping the Bitcoin price from the web and displaying it on the LCD. It’s a combination of all the maker staples, tied together with some off-the-shelf libraries – it’s quick, and it works.
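The scraping step boils down to fetching some JSON and pulling a number out of it. The response format below is a made-up placeholder — real exchange APIs each return a slightly different shape — but the parsing and LCD-sizing logic looks roughly like this in Python:

```python
import json

# Hypothetical exchange response; the field names are placeholders,
# since every price API returns a slightly different JSON shape.
response_text = '{"symbol": "BTCUSD", "price": "2045.37"}'

def extract_price(text):
    """Parse the JSON body and return the quoted price as a float."""
    return float(json.loads(text)["price"])

price = extract_price(response_text)
line = f"BTC: ${price:,.2f}"[:16]  # trim to one 16-character LCD row
```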

[Ed: Oh boo!  The images of the LCD were photoshopped.  Please ignore the next paragraph.]

What makes the build extra nice is the use of custom characters on the LCD. The HD44780 is a character based display, and this project appears to use a screen with two lines of sixteen characters each. However, a custom character set has been implemented in the display which uses several “characters” on the screen to create a single number. It’s a great way to make the display more legible from a distance, as the numbers are much larger, and the Bitcoin logo has been faithfully recreated as well. It’s small touches like this that can really set a project apart. We’d love to see this expanded to display other financial market information and finished off in a nice case.
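The big-number trick is mostly a layout problem: each oversized digit is a small grid of ordinary character cells. This Python sketch uses plain ASCII stand-ins for the custom CGRAM glyphs a real HD44780 build would define, and only covers two digits, just to show the idea:

```python
# ASCII stand-ins for custom HD44780 glyphs; a real build would load
# pixel bitmaps into CGRAM and print those glyph codes instead. Only
# two digits are defined here to keep the sketch short.
BIG_DIGITS = {
    "0": [" _ ",
          "|_|"],
    "1": ["   ",
          "  |"],
}

def render(number):
    """Lay a string of digits out across the display's two rows,
    three character cells per digit."""
    cells = [BIG_DIGITS[ch] for ch in number]
    top = "".join(c[0] for c in cells)
    bottom = "".join(c[1] for c in cells)
    return top, bottom

top, bottom = render("10")
```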

If you’re wondering what you can actually do with Bitcoin, check out the exploits of this robotic darknet shopper. Oh, and Microsoft will take them, too.


Filed under: Arduino Hacks

A MIDI Harmonica

http://feedproxy.google.com/~r/hackaday/LgoM/~3/fAp4qyySj04/

http://hackaday.com/?p=257359

MIDI, or Musical Instrument Digital Interface, has been the standard for computer control of musical instruments since the 1980s. It is most often associated with electronic instruments such as synthesisers, drum machines, or samplers, but there is nothing to stop it being applied to almost any instrument when combined with the appropriate hardware.

[phearl3ss1] pushes this to the limit by adding MIDI to the most unlikely of instruments. A harmonica might seem to be the ultimate in analogue music, yet he’s created an ingenious Arduino-powered mechanism to play one under MIDI control.

The harmonica itself is mounted on a drawer slide coupled to a wheel taken from a pool sweeper and powered by a motor that can move the instrument from side to side, with a potentiometer providing positional feedback to form a simple servo. The air supply comes from a set of three bellows driven via a crank from another motor, and is delivered by what looks like a piece of PVC pipe to the business end of the harmonica.

The result is definitely a playable MIDI harmonica, though it doesn’t quite catch the essence of the human-played instrument. Judge for yourselves, he’s posted a build video which we’ve placed below the break.

This isn’t the first automated harmonica we’ve shown you. There was this one that also used a slide, and another with a note selector using multiple air pipes.

Harmonica header image: Grassinger [CC BY-SA 3.0].

 


Filed under: musical hacks

Hackaday Links: May 21, 2017

http://feedproxy.google.com/~r/hackaday/LgoM/~3/HESzlVXhRDA/

http://hackaday.com/?p=257905

It’s time to talk about something of supreme importance to all Hackaday readers. The first trailer for the new Star Trek series is out. Some initial thoughts: the production values are through the roof, and some of this was filmed in Jordan (thank the king for that). The writers have thrown in some obvious references to classic Trek in this trailer (taking a spacesuit into a gigantic alien thing a la TMP). There are a few new species, even though this is set about 10 years before waaaait a second, those are the Klingons?

In other news, [Seth MacFarlane] is doing a thing that looks like a Galaxy Quest series. We can only hope it’s half as good as a Galaxy Quest series could be.

The Dayton Hamvention should have been held this week at the Hara Arena, but that’s never going to happen again: the traditional venue for the biggest amateur radio meet on the continent (thankfully) closed this year. Last year it was looking old and tired. This year, Hamvention moved to Xenia, Ohio, and it looks like we’re still getting the best ham swap meet on the planet. Remember: if you drove out to Hamvention, the Air Force museum is well worth the visit. This year they have the fourth hangar open, full of spacecraft goodness.

Last week we saw an Open Source firmware for hoverboards, electric unicycles, and other explodey bits of self-balancing transportation. [Casainho], the brains behind this outfit, recently received an eBike controller from China. As you would expect, it’s based on the same hardware as these hoverboards and unicycles. That means there’s now Open Source firmware for eBikes.

Last year, [Cisco] built a cute little walking robot. Now it’s up on Kickstarter.

This week saw the announcement of the Monoprice Mini Delta, the much-anticipated 3D printer that will sell for less than $200. For one reason or another, I was cruising eBay this week and came upon this. They say yesterday’s trash is tomorrow’s collectors’ item, you know…

A new Tek scope will be announced in the coming weeks. What are the cool bits? It has a big touchscreen. That’s about all we know.

The ESP32 is the next great wonderchip, and has been for a while now. The ESP32 also has a CAN peripheral stuffed in there somewhere, and that means WiFi and Bluetooth-enabled cars. [Thomas] has been working on getting a driver up and running. There’s a thread on the ESP32 forum, a Hackaday.io page, and a GitHub page.

What do you do when you have a nice old Vacuum Fluorescent Display and want to show some stats from your computer? You build a thing that looks like it’s taken from a cash register. This is a project from [Micah Scott], and it has everything: electronics, 3D modeling, magnets, print smoothing, snap-fit parts, and beautiful old displays.

Here’s something that randomly showed up in our Tip Line. [Mark] recently found some unused HP 5082-7000 segment displays in a collection of electronic components (pics below). According to some relevant literature, these were the first LED display package available, ever.  They were released in 1969, they’re BCD, and were obviously very expensive. [Mark] is wondering how many of these were actually produced, and we’re all interested in the actual value of these things. If anyone knows if these are just prototypes, or if they went into production (and what they were used for), leave a note in the comments.


Filed under: Hackaday Columns, Hackaday links

Hackaday Prize Entry: A PC-XT Clone Powered By AVR

http://feedproxy.google.com/~r/hackaday/LgoM/~3/4_QoEfG_TCQ/

http://hackaday.com/?p=256743

There is a high probability that the device on which you are reading this falls, however loosely, under the broad definition of a PC. The familiar x86 architecture with its peripheral standards has trounced all its competitors over the years, to the extent that it is only in the mobile and tablet space of personal computing that it has not become dominant.

The modern PC with its multi-core processor and 64-bit instruction set is a world away from its 16-bit ancestor from the early 1980s. Those early PCs were computers in the manner of the day, in which there were relatively few peripherals, and the microprocessor bus was exposed almost directly rather than through the abstractions and gatekeepers we’d expect to see today. The 8088 processor with an 8-bit external bus is the primordial PC processor though, and within reason you will find that software written for DOS on those earliest IBM machines will often still run on your multiprocessor behemoth over a DOS-like layer on your present-day operating system. This 35-year-plus chain of mostly unbroken compatibility is both a remarkable feat of engineering and a millstone round the necks of modern PC hardware and OS developers.

Those early PCs have captured the attention of [esot.eric], who has come up with the interesting project of interfacing an AVR microcontroller to the 8088 system bus of one of those early PCs. Thus all those PC peripherals could be made to run under the control of something a little more up-to-date. When you consider that the 8088 ran at a modest 300 KIPS and that the AVR is capable of a by-comparison blisteringly fast 22 MIPS, the idea was that it should be able to emulate an 8088 at the same speed as an original, if not faster. His progress makes for a long and fascinating read: so far he has accessed the PC’s 640 KB of RAM reliably, talked to an ISA-bus parallel port, and made a CGA card produce colours and characters. Interestingly the AVR has the potential for speed enhancements not possible with an 8088. For example, it can use its own internal UART with many fewer instructions than it would need to access the PC UART, and its internal Flash memory can contain the PC BIOS and read it a huge amount faster than a real BIOS ROM could be read on real PC hardware.

In case you were wondering what use an 8088 PC could be put to, take a look at this impressive demo. Don’t have one yourself? Build one.


Filed under: computer hacks, The Hackaday Prize

DIY USB Power Bank

http://feedproxy.google.com/~r/hackaday/LgoM/~3/RLN65inUcDs/

http://hackaday.com/?p=257318

USB power banks give your phone some extra juice on the go. You can find them in all shapes and sizes from various retailers, but why not build your own?

[Kim] has a walkthrough on how to do just that. This DIY USB Power Bank packs 18650 battery cells and a power management board into a 3D printed case. The four cells provide 16,000 mAh, which should give you a few charges. The end product looks pretty good, and comes in a bit cheaper than buying a power bank of similar capacity.

The power management hardware being used here appears to be a generic part used in many power bank designs. It performs the necessary voltage conversions and manages charge and discharge to avoid damaging the cells. A small display shows the state of the battery pack.

You might prefer to buy a power bank off the shelf, but this design could be the perfect solution for adding batteries to other projects. With a few cells and this management board, you have a stable 5 V output with USB charging. The 2.1 A output should be enough to power most boards, including Raspberry Pis. While we’ve seen other DIY Raspberry Pi power banks in the past, this board gets the job done for $3.
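“A few charges” is easy to estimate, as long as you convert everything to watt-hours rather than comparing mAh figures at different voltages. The phone capacity and converter efficiency below are assumed round numbers, not specs from this build:

```python
def full_charges(pack_mah, pack_volts, phone_mah, phone_volts=3.8,
                 efficiency=0.85):
    """Compare energies (Wh), not raw mAh: the pack's cells and the
    phone's battery sit at different voltages, and the 5 V boost
    converter wastes some energy along the way."""
    pack_wh = pack_mah / 1000.0 * pack_volts
    phone_wh = phone_mah / 1000.0 * phone_volts
    return pack_wh * efficiency / phone_wh

# 16,000 mAh of 18650 cells at a nominal 3.7 V into a 3,000 mAh phone
charges = full_charges(16000, 3.7, 3000)  # roughly 4.4 full charges
```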

 


Filed under: peripherals hacks

Humans May Have Accidentally Created a Radiation Shield Around Earth

http://feedproxy.google.com/~r/hackaday/LgoM/~3/tfJJ3wOwHPo/

http://hackaday.com/?p=258039

 

NASA spends a lot of time researching the Earth and its surrounding space environment. One particular feature of interest is the Van Allen belts — so much so that NASA built special probes to study them! They’ve now discovered a protective bubble they believe has been generated by human transmissions in the VLF range.

VLF transmissions cover the 3–30 kHz range, so bandwidth is highly limited. VLF hardware is primarily used to communicate with submarines, often to remind them that, yes, everything is still fine and there’s no need to launch the nukes yet. It’s also used for navigation and broadcasting time signals.

It seems that these human transmissions have created a barrier of sorts in the atmosphere that protects it against radiation from space. Interestingly, the outward edge of this “VLF Bubble” seems to correspond very closely with the innermost edge of the Van Allen belts caused by Earth’s magnetic field. What’s more, the inner limit of the Van Allen belts now appears to be much farther away from the Earth’s surface than it was in the 1960s, which suggests that man-made VLF transmissions could be responsible for pushing the boundary outwards.

Overall, this seems like an accidental, but potentially positive effect of human activity – the barrier protects the Earth from potentially harmful radiation. NASA’s YouTube video on the topic suggests that understanding this mechanism better could enable us to protect our satellites and space vehicles from some of the harmful effects of the space environment.

NASA does a lot of high-end research – like the EM drive that’s got a lot of people very confused right now.

[Thanks bty!]


Filed under: news

Wake Up To Fresh Coffee!

http://feedproxy.google.com/~r/hackaday/LgoM/~3/1TvfPMKLRgY/

http://hackaday.com/?p=257645

Be careful what you say when you are shown a commercial product that you think you could make yourself, you might find yourself having to make good on your promise.

When he was shown a crowdfunded alarm clock coffee maker, [Fabien-Chouteau] said “just give me an espresso machine and I can do the same”. A Nespresso capsule coffee machine duly appeared on his bench, so it was time to make good on the promise.

The operation of a Nespresso machine is simple enough: there is a big lever on the front that opens the capsule slot and allows a spent capsule to drop into a hopper. Drop in a new capsule, pull the lever down to load it into the mechanism, then press one of the buttons to tell it to prime itself. After a minute you can then press either the large cup or the small cup button, and your coffee will be delivered.

To automate this with an alarm clock there is no need to operate the lever; it’s safe to leave loading a capsule to the user. Therefore all the clock has to do is trigger the process by operating the buttons. A quick investigation with a multimeter on the button PCB found that the voltage present was 15 V, well above the logic level of the STM32F469 board slated for the clock, so a simple circuit was devised using a MOSFET to do the switching.

Finally, the clock software was created for the STM32F469. The chip’s 2D graphics acceleration hardware and the development board’s high quality display make for a very slick interface indeed.

You can see the resulting clock in the video below the break. It’s an alarm clock coffeemaker we’d be proud to have beside our beds, but there’s one slight worry. On a mains-powered device like the Nespresso the low voltage rails are not always mains-isolated, and it’s not clear whether or not this is the case. Maybe we’d have incorporated an opto-isolator, just in case.

Nespresso machines have featured here before a few times, from this circumvention of their annoying security screws, to a handy tool for identifying which of the colour-coded but annoyingly unlabelled capsules you have before you.


Filed under: clock hacks