Daily Learning Notes for January 19, 2016

Today I finished the second NAND2Tetris project, did all the reading for week 3, and started on the third project!

Finishing the arithmetic logic unit

Yesterday I left off with these two questions to guide my final design for the arithmetic logic unit (ALU):

  • How do I check if a number is zero using my existing logic gates?
  • How do I check if a number is negative (according to the two’s complement method)? In other words, how do I check if the number’s most significant bit is 1?

So I got part of it working: I now have a “status output” that correctly checks whether the output is less than zero according to the two’s complement method. At first I kept running into a “sub bus of an internal node may not be used” error in the hardware simulator – a limitation of the hardware description language (HDL) – but I found a workaround thanks to the HDL Survival Guide.
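
For future reference, here’s the shape of the workaround (a sketch with made-up pin names, not my exact ALU code): I can’t subscript an internal bus after the fact, but I can name extra slices of a bus right at the chip that produces it.

    // Hypothetical final stage of the ALU: instead of trying to read
    // a slice of an internal bus later (which triggers the sub-bus
    // error), split the bus into named pieces where it's produced.
    Mux16(a=fOut, b=notFOut, sel=no,
          out=out,             // the ALU's 16-bit output
          out[15]=ng,          // most significant bit = "negative" status bit
          out[0..7]=outLow,    // low byte, saved for the zero check
          out[8..15]=outHigh); // high byte, saved for the zero check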

Now, how can I check for a 16-bit representation of the number 0 and transform that into a one-bit status output? That’s where I’m stuck. I don’t have any prebuilt chips that take a 16-bit input and give a single-bit output. I do have a gate that reports whether an 8-bit chunk contains any 1s, though, and that’s exactly what I need… oh, I can just use two of them! Let’s try that.
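
Here’s what I’m about to try, using the Or8Way gate from the first project and the outLow/outHigh slices from the workaround above (the other pin names are mine):

    // Or8Way outputs 1 if any of its eight input bits is 1.
    Or8Way(in=outLow, out=lowHasOne);    // any 1s in bits 0..7?
    Or8Way(in=outHigh, out=highHasOne);  // any 1s in bits 8..15?
    Or(a=lowHasOne, b=highHasOne, out=nonZero);
    Not(in=nonZero, out=zr);             // zr=1 when all 16 bits are 0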

Once again, drum roll please… Yeah, success!!! No errors, and my output file perfectly matches the comparison file! I win! I finished the second project! On to the next adventure!

Chapter 3 reading notes

Looks like week three is all about sequential chips and memory! As usual, I feel both intimidated and excited! OK, let’s start reading. Deep breath, and… go!

Sequential versus combinational chips

In the first sentence of chapter 3, I learned a new name for all the chips I’ve been building: combinational chips. Their outputs are based on the combinations of their inputs (and nothing else).

Now we’re moving onto sequential chips, which can “preserve data over time.” These are the building blocks for a computer’s memory. It looks like we’re starting with a low-level sequential chip called a flip-flop – I love that term! A friend warned me about the confusion of flip-flops, so I’m going into this expecting my flips to be flopped and my flops to be flipped. But is it really that complicated? I can’t wait to find out!

They say that we will build all of our memory devices from flip-flops – I guess they’re the sequential chip equivalent of the NAND gate.

Note: I won’t have to figure out how to build a data flip-flop for this course, but I can’t help but wonder how to build one! They say that you can build a data flip-flop using NAND gates alone, with the magic of feedback loops. Well, that’s a question for another day. Let’s move on.

Manufacturing time

Memory can’t exist without time, and that’s true for both our brains and our computers. So first they explain the clock, “a master clock that delivers a continuous train of alternating signals” to keep track of time inside the computer. I guess that signal must be connected to all the other sequential chips; that way, they can all know what time it is and stay synchronized!

At the hardware level, this involves an oscillator – don’t ask me what that actually is! All I know is that it alternates between two phases, commonly labeled “low” and “high”.

The time elapsed between the start of the first phase and the end of the second phase is called a cycle. Cool, that’s a good name for it.

There are different types of flip-flops, but it looks like I’m only going to be using a data flip-flop, or DFF, which takes a single-bit input and a clock input (connected to the master clock signal) and gives a single-bit output.

They have an interesting equation here: out(t) = in(t - 1). I’m not sure I understand that, and they really just gloss over it in the book at this point. But I guess that means the output at a given “tick” of the clock is going to be the input that it got on the previous “tick” of the clock. So it doesn’t do anything with the data, it just propels it forward through time? That seems kind of useless!
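
To convince myself, I worked out a little trace for out(t) = in(t - 1) – the “?” is whatever the flip-flop happened to hold before the first tick:

    t:    1  2  3  4  5
    in:   1  0  0  1  1
    out:  ?  1  0  0  1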

Ah, but then they talk about registers, which can actually store information over time. Their equation out(t) = out(t - 1) is described as “classical storage behavior” – I wonder what they mean by “classical” in this context? Anyway, according to those equations, the only difference between a flip-flop and a register is that a register’s output… always stays the same? Wait, how would it take an input then?!

Avoiding input ambiguity

Ah, self-referential logic! Maybe this is what my friend tried to warn me about. They have a nifty diagram on page 43 illustrating the problem: if you took the output of a DFF and looped it back into its input, how would you ever change the input?!

In hardware terms, they explain it this way:

“…internal pins must have a fan-in of 1, meaning that they can be fed from a single source only.”

I also love this term “input ambiguity” – yup, that about sums it up! And it sounds like funny robot-speak. Next time somebody asks me a vague question, I’ll reply flatly: “input ambiguity.”

The solution to this self-referential input ambiguity is to use a multiplexor to add a “load bit” which instructs the chip when to start storing a new value and when to keep spitting out the previously-loaded value.

Registers can contain as many bits as you want by combining lots of these single-bit registers. Awesome! And here’s some more jargon: the “width” of a register is the number of bits it can store, and the multi-bit contents of a register are called “words”.

Random Access Memory (RAM)

Once you have registers of arbitrary width, you can stack them together to create random access memory (RAM) units of arbitrary length!

About the name “random access memory”: We need to be able to retrieve data stored in memory in any order – that’s why it’s called “random access” and not something like “sequential access”.

To allow for “random access”, we can assign a unique number to each “word” – this unique number is called an address – and we can build a logic chip that can take an address and access the appropriate data from the RAM.

How exactly a logic chip can select data like that is not yet clear to me, but I’ll figure it out when I build one for this week’s project!

This part will be important for me later: a RAM device has three inputs and one output. It takes in a “load bit”, a “data input”, and an “address input”. Depending on the value of the load bit, it either reads data (by just giving an output) or writes data (by replacing the data in memory with the input data).
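
Putting those pieces together, here’s my guess at the shape of an 8-register RAM, using only chips I’ve already built or read about – I’ll find out whether this is right when I actually build it:

    // My guess at RAM8: DMux8Way routes the load bit to exactly one
    // register, and Mux8Way16 picks which register's output to emit.
    // Both are steered by the same 3-bit address.
    CHIP RAM8 {
        IN in[16], load, address[3];
        OUT out[16];
        PARTS:
        DMux8Way(in=load, sel=address,
                 a=l0, b=l1, c=l2, d=l3, e=l4, f=l5, g=l6, h=l7);
        Register(in=in, load=l0, out=r0);
        Register(in=in, load=l1, out=r1);
        Register(in=in, load=l2, out=r2);
        Register(in=in, load=l3, out=r3);
        Register(in=in, load=l4, out=r4);
        Register(in=in, load=l5, out=r5);
        Register(in=in, load=l6, out=r6);
        Register(in=in, load=l7, out=r7);
        Mux8Way16(a=r0, b=r1, c=r2, d=r3, e=r4, f=r5, g=r6, h=r7,
                  sel=address, out=out);
    }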

A RAM device is measured by its width (the width of each of its words) and its size (the number of words in the RAM). Today’s computers typically use 32-bit-wide or 64-bit-wide RAM with sizes of hundreds of millions of words. (My desktop has 16GB of RAM, for example.)

Marvelling at synchronicity

Counters are sequential chips that keep track of time by incrementing an internal number for each new cycle of the clock. And a program counter is a common part of a CPU, but I don’t understand what it does yet!

But I learned so much from just the last couple paragraphs of this chapter’s background section!

This was a revelation for me: sequential chips work because of controlled feedback loops! Not unlike how our own human consciousness seems to depend on some intricate recursion – “strange loops”, which I first read about in Gödel, Escher, Bach. Such a beautiful idea! And isn’t it amazing that such an abstract concept could have a real-world use as practical as building a computer?!

Here’s another crucial piece of information that really made all these concepts click together in my head: the output of a sequential chip only changes at the end of a clock cycle. This discretization is what enables the chips to stay synchronized, even though signals might arrive at a given chip at different times because of hardware constraints like physical distance or interference.

Interestingly, chips can have unstable states during a clock cycle and that’s fine, because we only check on them at the end of each clock cycle.

So with this strategy, computers can only work if the length of their clock cycle is longer than the time it takes a signal to travel the longest distance across the physical implementation of the computer’s hardware! (This is one of those details that I never would’ve realized on my own, but now that I know it, I’m thinking: d’oh! It’s so obvious!)

So the trick to synchronizing computers – or anything, for that matter – is to have a sufficiently long delay! Now that I think about it, I guess that’s the trick that enables all feats of human synchronization, from synchronized swimming to playing music! Regular delays of just the right length are built right into the system.

Appendix section A.7 reading notes

I learned a couple new things while reading Appendix A section A.7 in the course book. Here’s a quick summary:

  • Sequential chips are also called clocked chips.
  • I like how the hardware simulator’s clock cycles are called “tick” and “tock”!
  • I can control the clock manually through the simulator’s graphical user interface (GUI) or automate it with a script.
  • Built-in chips use the keyword CLOCKED to identify which pins are clocked; all other chips are clocked if they include a part that is clocked. (And the hardware simulator checks this recursively. Cool stuff! See the sketch just after this list.)
  • Only clocked chips are allowed to feed their output into their input; combinational chips aren’t allowed to do that, because it would result in an uncontrolled feedback loop or “data race”!
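
Here’s roughly what that CLOCKED declaration looks like in a built-in chip’s header – this is my reconstruction from the appendix, so the real file may differ:

    // Hypothetical header for the built-in DFF: the implementation
    // lives inside the simulator, so the HDL file only declares pins.
    CHIP DFF {
        IN in;
        OUT out;
        BUILTIN DFF;   // delegate the implementation to the simulator
        CLOCKED in;    // in affects out only at the clock transition
    }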

Designing my first sequential chips

Time to start the third project! I need to build a single-bit register, which I’ll then use to build a 16-bit register, which I’ll then use to build a 16-bit 8-register RAM, which I’ll then use to build progressively larger RAM devices! Oh, and I’ll need to build a 16-bit program counter, even though I don’t know what that is yet.

Building a single-bit register

OK, let’s start at the beginning. A single-bit register takes two single-bit inputs – the data bit and the load bit – and outputs the last value it stored. If the load bit is set to 1, it saves the new input; otherwise, it ignores the input entirely. That sounded useless to me at first, but now I understand that this “uselessness” is exactly what allows the chip to store data over time – which is very useful! Let’s see if I can implement it…

15 minutes later: Success! At first I forgot that I had to fan out the output pin to allow for both the feedback loop and the chip’s final output – just an issue with writing the HDL. My initial design was right! But it was illustrated in chapter 3, so I had it easy. I’m not sure I could’ve figured it out on my own, but at least I confirmed that I remembered what I read a little while ago.
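
For the record, here’s the shape of my working design (the internal pin names are mine, made up for this sketch; the Mux-plus-DFF loop is the one illustrated in chapter 3):

    CHIP Bit {
        IN in, load;
        OUT out;
        PARTS:
        // If load=1, pass the new input to the DFF; otherwise,
        // feed the DFF's previous output back into itself.
        Mux(a=feedback, b=in, sel=load, out=muxOut);
        // Fan out the DFF's output: one copy becomes the chip's
        // output pin, the other drives the feedback loop.
        DFF(in=muxOut, out=out, out=feedback);
    }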

It’s cool seeing the data persist over time in the hardware simulator! Once the load bit is set to one, the data input bit replaces whatever the old output was and becomes the new output until the next time the load bit is changed.

Building a 16-bit register

Now all I need to do is string together 16 single-bit registers, right? That sounds easy enough, just like when I built 16-bit versions of the basic logic gates for the first project.

Instead of writing out 16 near-identical lines of HDL, I copy-pasted one line and used search-and-replace to change the numbers. But I should know how to automate this with a few lines of code! Alas, nothing came to mind quickly enough, and I felt like writing that code would take more time than it would save me. Something to learn another day.
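
Later: it came to me while writing these notes. A few lines of Python like these would print all 16 lines, ready to paste into the HDL file (a quick sketch – I haven’t run it against my actual chip):

    # Print the 16 near-identical part lines for a 16-bit register
    # built from single-bit Bit chips.
    for i in range(16):
        print("Bit(in=in[{0}], load=load, out=out[{0}]);".format(i))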

Anyway, it works just as expected! Well, that’s enough studying for today. I’ll pick up where I left off tomorrow.

New concepts and jargon

  • Combinational chips produce outputs entirely dependent on the combination of their inputs.
  • Sequential chips (or clocked chips) can preserve data over time, and they consist of “data flip-flops sandwiched between optional combinational logic layers.”
  • Flip-flops are the most basic sequential gates used to build memory devices, and there are several types. They use controlled feedback loops to store and operate on data over time.
    • Data flip-flops (DFF) have a single-bit input, a single-bit output, and a clock input.
  • The master clock “delivers a continuous train of alternating signals” to represent the passage of time in a computer.
  • Oscillators are hardware devices that continuously alternate between two phases.
  • A cycle is the “elapsed time” between the start of the “low” phase and end of the “high” phase.
  • Registers store data over time.
  • “Input ambiguity”, my new favorite phrase! This can result from self-referential logic, like when you feed a flip-flop’s output back into its input.
  • A “load bit” instructs the chip when to start storing a new value.
  • The width of a register is the number of bits it can store.
  • Words are the multi-bit contents of a register.
  • Random access memory (RAM) is a form of data storage that allows data to be retrieved in any order, regardless of how it’s physically stored in the device. It can read and write data, and it’s measured by its width (the width of each of its words) and its size (the number of words in the RAM).
  • An address is a unique number assigned to each word in a RAM unit.
  • Counters are sequential chips that keep track of time by incrementing an internal number for each new cycle of the clock.
  • Program counters contain the address of the next instruction to be run by the computer, advancing to the next instruction after each clock cycle. Or something like that – I still don’t feel like I understand program counters!
  • Discretization of sequential chips’ output – the fact that they change their outputs only at the transition between clock cycles – means that they can all be synced together!
  • A data race is what you call the result of a feedback loop based on an “instantaneous and uncontrolled dependency” between a chip’s inputs and outputs.

Questions

  • Who came up with the idea of using controlled feedback loops to create a sequential chip?
  • What does “classical” storage behavior mean? Is there a non-classical behavior?
  • How exactly does a program counter work?
  • How do you build a data flip-flop out of NAND gates?
  • Is it possible to synchronize any system of arbitrary complexity given a long enough delay? (Reminds me of Archimedes: “Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.”)
  • What is recursive ascent? (This phrase appeared in introductions to the course and it appears again here, describing how registers can be combined to make larger and larger RAM devices.)
  • What’s the easiest way for me to write a few lines of code in order to generate repetitive HDL (or repetitive anything, for that matter)?

Learning summary (#TIL)

  • I finished the second project, implementing my first arithmetic logic unit!
  • I learned all about how sequential chips can store data over time by taking advantage of controlled feedback loops, and how they can be combined into larger memory devices.
  • I realized that synchronization (of a computer or anything else) depends on a repeating delay, and that computers rely on clock cycles based on an alternating signal.
  • I implemented a single-bit and a 16-bit register and learned how to test sequential chips in the hardware simulator.

Next steps

  • Pick up the third project where I left off: build the 16-bit 8-register RAM, the progressively larger RAM devices, and the 16-bit program counter.

Time breakdown

  • Study time: 5 hours
    • Active reading: 2 hours 54 min
    • Building second project: 38 min
    • Building third project: 29 min
    • Watching video lectures: 55 min
  • Learn to Code LA planning: 2 hours 20 min