How programming languages make the computer do work

Last updated October 1, 2014

Telling a computer what to do using a coding language that both you and it understand is a simple idea, though sometimes technically complex to actually pull off. But how do the elementary components of that coding language get made (or coded?) and understood by the computer in the first place, when presumably there are no established building blocks of code for you to use?


You program it in another language. Many languages popular today, like PHP, were originally implemented in C or C++. The PHP interpreter, for instance, is basically a C program that accepts text input formatted as proper PHP and carries out whatever that PHP asks for.
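To make that concrete, here's a minimal sketch of an interpreter's core loop in C. The two-command language ("say" and "add") is invented for illustration; real interpreters like PHP's are enormously more sophisticated, but the shape is the same: read text in, act on it.

```c
#include <stdio.h>
#include <string.h>

/* A toy interpreter for a made-up language with two commands:
 *   say <text>  - print the text
 *   add <a> <b> - print the sum of two integers
 * A real interpreter is vastly larger, but it has the same shape:
 * a C program reads source text and does what the text asks. */
int main(void) {
    char line[256];
    while (fgets(line, sizeof line, stdin)) {
        int a, b;
        char text[200];
        if (sscanf(line, "say %199[^\n]", text) == 1)
            printf("%s\n", text);
        else if (sscanf(line, "add %d %d", &a, &b) == 2)
            printf("%d\n", a + b);
        else if (line[0] != '\n')
            fprintf(stderr, "parse error: %s", line);
    }
    return 0;
}
```

Feed it lines like `say hello` or `add 2 3` and it behaves like a (very tiny) language runtime.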

C++ was itself originally implemented in C: its first compiler was written in C.

C itself was made by writing a compiler in assembly language.

Assembly language was made by writing an assembler directly in binary. (Or ‘compiling by hand’, which means manually turning readable code into unreadable, but functionally identical, binary that can run on the machine natively.)

Binary works because that’s how it was engineered. Computer engineers built the circuits that actually do the adding, the pushing, and so on. They also made it so that you could specify what to do with ‘op codes’ and arguments. A simple, but made up, example CPU might use the opcode 0000 for adding, and accept two 4-bit numbers to add. In that language, if I told the CPU 0000 0001 0001 it’d add 1 and 1 together and do… whatever it was designed to do with the result.
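As a sketch, here's that made-up instruction being decoded in C, using bit shifts to pull out the 4-bit opcode and the two 4-bit operands. The encoding is the invented one from the paragraph above, not any real CPU's.

```c
#include <stdio.h>

/* Decode the made-up 12-bit instruction from the text:
 * a 4-bit opcode followed by two 4-bit operands.
 * 0000 0001 0001 -> opcode 0 (add), operands 1 and 1. */
int main(void) {
    unsigned instruction = 0x011;          /* 0000 0001 0001 in binary */
    unsigned opcode = (instruction >> 8) & 0xF;
    unsigned a      = (instruction >> 4) & 0xF;
    unsigned b      =  instruction       & 0xF;
    if (opcode == 0)                        /* 0000 = add */
        printf("add: %u + %u = %u\n", a, b, a + b);
    return 0;
}
```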

So now we’re at the bottom. Ultimately all code ends up coming down here to the binary level.

Going deeper

Humans conceptualize things in layers of abstraction. When it comes to computers, this applies especially well, as computers are some of the most advanced things that humanity has come up with.

Let’s start with the bottom layer. At the absolute lowest level, computers work off the fact that electrons can move from one atom to another, transferring energy as they go. Building on that, moving billions of electrons through a large construct of atoms creates what is called an electrical current. Current is driven by voltage, which comes from an external energy source. Another important idea is that electrons move more easily through some materials than others; materials in the middle of that range are called semiconductors. Different types of semiconductors can be joined to create some interesting effects.

The most important low-level device in a computer is the transistor. A transistor is created from semiconductors and has three ports. Two of the ports behave like a wire: current flows from one to the other. The third port controls the current flowing between those two, which affects the voltage across the transistor. This makes a transistor act like a switch. If the third port is driven with a high voltage, current moves through the transistor easily and the voltage across it falls, appearing instead across other parts of the circuit. Conversely, if the third port is driven with a low voltage, current moves through the transistor poorly and the voltage across it rises, taking voltage away from the rest of the circuit. Using this idea, we can create logic circuits.


The basic logic circuits are AND, OR, and NOT. These circuits, among others, are known as gates and are built from various configurations of transistors. Logic gates work with binary inputs and outputs. For ease of understanding and for mathematical purposes, the two binary values are written as 0 and 1; in a real system they correspond to two voltage levels, such as 0V and 5V, applied to the transistors inside the gates. AND and OR gates have two inputs and one output. An AND gate outputs 1 only if both inputs are 1, and 0 otherwise. An OR gate outputs 0 only if both inputs are 0, and 1 otherwise. A NOT gate has one input and one output, and simply flips the value from 0 to 1, or vice versa. There are also NAND and NOR gates, which simply tack a NOT onto the output of an AND or OR gate respectively. NAND and NOR gates have an interesting property: any logic circuit can be built using just one of them. They are often used this way to make systems cheaper, since you only need to deal with one type of gate, but this comes at the price of circuits being larger and more complex. Finally, there are XOR gates, which output 1 only if the inputs are not equal. These can make simplifying circuits easier, but they aren’t used as often as the others.
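To illustrate that universality claim, here's a small C sketch (modeling bits as 0/1 ints, an analogy rather than real hardware) where every other gate is built out of a single NAND function:

```c
#include <stdio.h>

/* Model single-bit gates with ints holding 0 or 1, and build
 * everything from NAND to illustrate its universality. */
static int nand(int a, int b) { return !(a && b); }
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }
static int xor_(int a, int b) { return and_(or_(a, b), nand(a, b)); }

int main(void) {
    printf("a b  AND OR XOR\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d   %d   %d   %d\n",
                   a, b, and_(a, b), or_(a, b), xor_(a, b));
    return 0;
}
```

Note how NOT takes one NAND, AND takes two, OR takes three: fewer gate types, more gates overall, exactly the trade-off described above.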

So how do these logic gates become computers? Well, if you think about it, anything you need a computer to do can be done using binary logic. Binary numbers are numbers in base 2, meaning each digit has only two possible values, 0 and 1. Binary 0 is decimal 0, binary 1 is decimal 1, binary 10 is decimal 2, binary 11 is decimal 3, binary 100 is decimal 4, and so on. Counting in binary is simply flipping the rightmost bit (binary digit) back and forth. Every time a bit goes from 1 back to 0, flip the bit to its left, and cascade the flipping leftward until you reach a 0. Since binary digits have only two values, they work wonderfully with logic gates. If you want to make an adder, for example, you feed each corresponding pair of bits from the two numbers into an XOR gate to produce the sum bit, and if both bits were 1 (an AND), you carry a 1 into the next bit to the left. More complex versions of this method make addition happen faster.
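Here's a rough C sketch of that scheme, a so-called ripple-carry adder, using only per-bit XOR, AND, and OR; the loop plays the role of the carry cascading from bit to bit:

```c
#include <stdio.h>

/* Add two numbers the way a ripple-carry adder does in hardware:
 * each sum bit is a ^ b ^ carry, and a new carry is produced
 * whenever at least two of the three inputs are 1. */
unsigned ripple_add(unsigned a, unsigned b) {
    unsigned result = 0, carry = 0;
    for (int i = 0; i < 32; i++) {
        unsigned abit = (a >> i) & 1, bbit = (b >> i) & 1;
        unsigned sum  = abit ^ bbit ^ carry;
        carry = (abit & bbit) | (abit & carry) | (bbit & carry);
        result |= sum << i;
    }
    return result;
}

int main(void) {
    printf("%u\n", ripple_add(11, 3));   /* prints 14 */
    return 0;
}
```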

Logic gates are a human construct that makes computer circuits easier to reason about, but underneath it all they are still transistors. Thus, a binary value takes time to travel through a gate; this time is called “propagation time”. By taking two gates, feeding each one’s output back into the other’s input, and taking advantage of propagation time, we can create special circuits called latches and flip-flops, which can actually store a single bit. By putting many flip-flops together, we can create registers, which store full binary numbers for use in other logic circuits. Registers are the basis of electronic memory. Computer processors use registers for quick access to the values they are working with right now. RAM is built from thousands to billions of register-like structures and is made to store larger collections of values. Computer programs, and the values they need access to, are stored in RAM while the program is running.
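Real latches depend on analog feedback and propagation delays that C can't capture, but as a rough software model, a D flip-flop is simply a stored bit that only updates on a rising clock edge:

```c
#include <stdio.h>

/* A software model of a D flip-flop: the stored bit q only takes
 * the value of input d on a rising clock edge (0 -> 1). A register
 * is just many of these side by side, one per bit. */
typedef struct { int q, last_clk; } dff;

static void tick(dff *f, int clk, int d) {
    if (clk && !f->last_clk)   /* rising edge */
        f->q = d;
    f->last_clk = clk;
}

int main(void) {
    dff f = {0, 0};
    tick(&f, 1, 1); printf("%d\n", f.q);  /* 1: captured on the edge  */
    tick(&f, 0, 0); printf("%d\n", f.q);  /* 1: holds while clock low */
    tick(&f, 1, 0); printf("%d\n", f.q);  /* 0: new value, next edge  */
    return 0;
}
```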


Now we get to the juicy stuff. By taking simple gate logic circuits like adders (combinational logic) and memory circuits like registers (sequential logic) and putting them together in a specific order, we build what is known as a computer architecture. Most architectures are built on the following model:

1. An instruction is read from memory using a special register called the program counter, which keeps track of where we are in the program.
2. The instruction is decoded to find out what it is supposed to do, and to what values. An instruction either performs a mathematical operation on one or more registers, reads or writes memory, or jumps from one place in the program to another.
3. The instruction is executed. This usually involves a special unit called the arithmetic logic unit (ALU), which can perform every operation that the computer needs to run. This is the smarts of the computer.
4. Any memory accesses are done. A value calculated in the previous step could be stored, or used as an address to read from memory.
5. Any values from the previous two steps are written back to registers.
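Here's a toy C model of that cycle. The 8-bit instruction encoding is invented purely for illustration; real ISAs are far richer.

```c
#include <stdio.h>

/* A toy machine illustrating the fetch/decode/execute model.
 * Invented encoding: 8-bit instructions, top 2 bits = opcode,
 * then two 3-bit register numbers.
 * 00 add, 01 load from memory, 10 store to memory, 11 halt. */
int main(void) {
    unsigned char memory[16] = {   /* a tiny two-instruction program */
        0x0A,  /* 00 001 010: add r1, r2 -> r1 */
        0xC0,  /* 11 ......: halt              */
    };
    unsigned regs[8] = {0, 4, 5};  /* r1 = 4, r2 = 5 */
    unsigned pc = 0;               /* program counter */

    for (;;) {
        unsigned char inst = memory[pc++];             /* 1. fetch   */
        unsigned op = inst >> 6;                       /* 2. decode  */
        unsigned r1 = (inst >> 3) & 7, r2 = inst & 7;
        if (op == 0)      regs[r1] += regs[r2];        /* 3. execute */
        else if (op == 1) regs[r1] = memory[regs[r2]]; /* 4. memory  */
        else if (op == 2) memory[regs[r2]] = regs[r1];
        else break;                                    /* halt       */
        /* 5. write-back happens in the assignments above */
    }
    printf("r1 = %u\n", regs[1]);  /* prints 9 */
    return 0;
}
```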

All of this happens in a cycle, over and over, until the computer switches to another program or turns off. The cycle is driven by a device called a clock, which simply alternates between outputting 0 and 1 at a constant interval, forever. A tick of the clock usually triggers an instruction to be read from memory, and the rest cascades in order without regard for the clock. In more complex systems, a technique called pipelining lets different parts of the processor work on different instructions at the same time, so that no part of the processor sits idle; in these systems each step has to be synchronized to the clock.

Now that we’ve discussed how computer hardware works, we can finally discuss the software side. The architecture of a computer is built alongside an instruction set architecture (ISA). The ISA is the set of all instructions that a particular computer should be able to perform. The most common ISA in use right now is x86. ISAs usually define instructions like add, subtract, multiply, divide, bit shift, branch, read, write, and many others. Every instruction has its own format. Each one begins with an opcode that identifies the type of instruction, followed by a set of fields. All instructions except some types of jump specify one or more register numbers to read from or write to. Some instructions include an “immediate” field, which lets you put a literal number directly into the instruction. Some include a field for a memory address. Whatever instructions are defined, the instruction decoder in hardware has to know how to parse them all, and the ALU has to be able to execute them all.
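As a sketch of what those fields look like, here's a hypothetical 32-bit RISC-style encoding packed and unpacked in C. The opcode value and field widths are invented, not taken from any real ISA (x86 in particular uses variable-length, far more intricate encodings).

```c
#include <stdio.h>

/* Pack and unpack a hypothetical 32-bit instruction format:
 * 8-bit opcode | 4-bit dest reg | 4-bit src reg | 16-bit immediate. */
#define OP_ADDI 0x01   /* hypothetical "add immediate" opcode */

unsigned encode(unsigned op, unsigned rd, unsigned rs, unsigned imm) {
    return (op << 24) | (rd << 20) | (rs << 16) | (imm & 0xFFFF);
}

int main(void) {
    unsigned inst = encode(OP_ADDI, 3, 7, 42);  /* r3 = r7 + 42 */
    printf("machine word: 0x%08X\n", inst);
    printf("opcode=%u rd=%u rs=%u imm=%u\n",
           inst >> 24, (inst >> 20) & 0xF,
           (inst >> 16) & 0xF, inst & 0xFFFF);
    return 0;
}
```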

At base level, an ISA always defines how instructions are represented in binary; this is how they are stored in memory and how the computer reads them, and it is known as machine code. But modern ISAs also define an assembly language, a human-readable notation that maps more or less directly onto machine code. The people who design the ISA usually also provide an assembler, a program that converts assembly language into machine code. Assembly is useful for low-level work, like parts of an operating system. Beyond that, it’s really more useful to understand how assembly works than to program in it: it is very slow to write, there is an enormous amount to keep track of, and one small change to your code can take hours to reimplement.
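Continuing the invented encoding from the previous sketch, the heart of an assembler is just pattern-matching mnemonics and packing fields into a word. Something like this (a real assembler also handles labels, directives, many instruction forms, and much more):

```c
#include <stdio.h>

/* A toy assembler for one instruction form of the invented ISA above:
 * "addi rD, rS, imm" -> 8-bit opcode | 4-bit rd | 4-bit rs | 16-bit imm. */
int assemble(const char *line, unsigned *out) {
    unsigned rd, rs, imm;
    if (sscanf(line, "addi r%u, r%u, %u", &rd, &rs, &imm) == 3) {
        *out = (0x01u << 24) | (rd << 20) | (rs << 16) | (imm & 0xFFFF);
        return 1;
    }
    return 0;  /* unknown mnemonic */
}

int main(void) {
    unsigned word;
    if (assemble("addi r3, r7, 42", &word))
        printf("0x%08X\n", word);  /* same word the encoder above built */
    return 0;
}
```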


People realized these difficulties and decided to take common tasks found in assembly programs and abstract them into an even more readable format. This resulted in the creation of what are now known as low-level programming languages. The best and most commonly used example of one of these languages is C. In low-level languages, you get things like functions, simple variable assignment, logical blocks like if, else, while, and switch statements, and easy pointers. All of the complex management that has to go into the final assembly result is handled by a program called a compiler, which converts source code into assembly. An assembler then converts that into machine code, just like before. A lot of really complicated logic goes into compiler design, and compilers are some of the most advanced programs around.
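For a feel of what the compiler takes off your hands, here's a short illustrative C snippet (my own example, not from the original post); each construct in it expands into several machine instructions when compiled:

```c
#include <stdio.h>

/* Count how many values in an array exceed a threshold. The function
 * call, the loop, the if, and the array indexing each become several
 * machine instructions, all laid out by the compiler. */
int count_above(const int *values, int n, int threshold) {
    int count = 0;
    for (int i = 0; i < n; i++)
        if (values[i] > threshold)
            count++;
    return count;
}

int main(void) {
    int data[] = {3, 9, 1, 7, 5};
    printf("%d\n", count_above(data, 5, 4));  /* prints 3 */
    return 0;
}
```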

As computers evolved, so did programming. Today we have many different ways to create programming languages, and we have added much higher-level features, like object-oriented programming, functional programming, garbage collection, and implicit typing. Many languages still follow the original compiled design, like C# and Java. Java is special because it runs on a virtual machine called the JVM. When you compile Java code, it compiles to bytecode, a kind of machine code that is only runnable on the JVM. The JVM is a program that runs in the background while Java programs execute, translating JVM instructions into the instructions of whatever machine it is running on. This makes Java platform independent, because it is up to the JVM, not the Java compiler, to worry about what system it is running on.

There are also languages called scripting languages or interpreted languages. These work similarly to Java, except instead of the JVM they have an interpreter: a program that reads the source code of the language and essentially runs it itself, line by line, without compiling it. Because the code is never converted to native machine code, scripting languages generally run slower than compiled ones. Some examples are Python, JavaScript, PHP, Ruby, and Perl.

Computers are cool.


This post was inspired by this reddit post.

