Monday, 16 September 2019

Learn to Program C# (C-Sharp) with a FREE 4 Hour Online Course

I have just released a completely free online course on the C# language. This course used to sell for $99 but from today you can sign up for free. It contains 33 video lessons (over four hours of instruction) plus an eBook and all the source code of the sample projects. Sign up here:

If you want to get started with C# programming on Windows (using a free copy of Visual Studio), this is the course you need!

Incidentally, if you prefer to learn from a book or you need a book to take you deeper into the subjects covered in my course, my book, The Little Book Of C#, is available in paperback or for Kindle on Amazon (US), Amazon (UK) and worldwide.

Friday, 16 August 2019

The Little Book Of Ruby Programming

My new book on Ruby programming is published today, available from Amazon (US), Amazon (UK) and worldwide (ISBN: 978-1913132071).
I originally wrote The Little Book Of Ruby way back in 2006. My software development company was creating some Ruby development tools at the time and we really needed a quick way to learn the basic features of the Ruby programming language. Hence The Little Book. This latest edition has been considerably updated, expanded and reformatted and there is also a downloadable source code archive of all the programs in the book. It is the first time I’ve released The Little Book Of Ruby in paperback and Kindle eBook format. If you are new to Ruby and want a really quick way to start programming, I hope this may be of use.

Wednesday, 14 August 2019

Smalltalk – The Most Important Language You’ve Never Used

In my last two posts I mentioned two programming languages with big ideas (Modula-2 and Prolog) which, however, never ranked among the dominant mainstream languages. Now I want to look at one of the most influential languages ever created – Smalltalk.

It would be hard to overstate the influence of Smalltalk. Without Smalltalk there would have been no Mac and no Windows. In all probability you would not now have a mouse attached to your computer, networking might have come around eventually but not as quickly as it did - and object orientation might never have made it to the mainstream.

This is Smalltalk/V - a Windows-based Smalltalk from the 1980s.
Then again, it could be argued that object orientation never did make it to the mainstream. Well, not the sort of beautiful, simple, elegant object orientation at the heart of Smalltalk, anyway. C++ was the first object oriented language to take OOP into the mainstream – and most OOP languages that followed appear to have built on the foundations laid by C++ rather than by Smalltalk. As Alan Kay – the ‘founding father’ of Smalltalk – once said: “I invented the term 'object-oriented', and I can tell you I did not have C++ in mind.”

Most of the current generation of OOP languages take some bits of Smalltalk (such as classes and methods), miss out other bits (such as data-hiding, a class-browsing IDE, image-saving, the ‘message passing’ paradigm etc.) and add on a few things of their own – such as C++’s multiple inheritance. The end result is that those languages are less simple, consistent and coherent than Smalltalk. One of the proud boasts of the developers of Pharo (one of the best modern Smalltalk implementations) is that the entire language syntax can be written on a postcard. Try doing that with C++!

Dolphin Smalltalk - a modern Windows-based implementation
Object Orientation in Smalltalk was an attempt to simplify programming: to make code highly modular (the programmer sent ‘messages’ to objects and it was up to the object to ‘decide’ which methods, if any, it used to respond to those messages) and more easily maintainable. In fact, most modern OOP languages have become more complex and harder to maintain.

When I read programming forums on the Internet these days, the main feature which is praised by enthusiastic programmers is the speed with which they can write programs. In fact, the more programming I do (I’ve been at it since the early ’80s), the more I become convinced that the most important thing is not the speed with which I can write programs but the speed with which I can debug them. The simpler the code, the easier it is to debug.

Debugging is the partner to maintaining. A great many programmers now think it’s neat to contribute a ‘hack’ to some programming project then move on to something else – leaving some other sucker to try to maintain and fix the increasingly incomprehensible code at some future date. For many programmers, debugging and maintaining are not even activities which register in their minds. Quick and clever coding is all they care about. Long-term reliability isn’t. Well, frankly, you wouldn’t want the control systems of a nuclear power station written by quick-and-clever programmers!

Simplicity and maintainability were ideals that shaped Smalltalk and Modula-2. That’s why Smalltalk worked with inheritance (re-using existing features) and encapsulation. It’s why Modula-2 implemented data-hiding inside hermetically sealed modules.

But maybe the three languages that I’ve highlighted (Smalltalk, Modula-2 and Prolog) were simply too different from the languages that eventually came to dominate the world of programming. Smalltalk was perhaps too insular – there was no separation between the programming language and its environment – and its insistence on simplicity made it hard to change the language to add on significant new features. Prolog programs were too uncontrollable, with their wide-ranging searches for solutions to complex problems. Modula-2 was too restrictively ‘modular’ with its authoritarian insistence on the precise separation of one unit of code from another unit of code.

As a consequence, we now have more mundane languages such as Ruby, Python, Java, C++ and C#, which all mix-and-match ideas from earlier languages but seem to lack any single ‘great idea’ of their own. Perhaps that is what the world really wants – workaday languages that may not be perfect but at least get the job done.

Even so, I refuse to believe that they represent the face of the programming future. One day, surely, someone will have a brilliant idea (and no, I can’t even guess what that might be!) that will dramatically change the way we program. Until then, I can only wonder how different our experience of programming might have been if only Prolog, Smalltalk and Modula-2 had become the big trinity of languages instead of C, C++ and Java.

Monday, 12 August 2019

Prolog – The Logical Choice for Programming

In my last blog post I mentioned a few old programming languages that had big ideas. One of the most ambitious of these was Prolog. Since very few programmers these days have any experience of Prolog, let me explain why it was such a remarkable language.

Prolog was designed to do ‘logic programming’. When I first used Prolog, back in the 1980s, I was initially overwhelmed by its expressive power. When using other (‘procedural’) programming languages, you had to find solutions to all your programming problems at the development stage and then hard-code those solutions into your program. With Prolog, on the other hand, you could ask your program questions at runtime and let it look for one or more possible solutions. Heady stuff. This (I thought) must surely be the way that all programs will be written one day.

You can try out Prolog online using SWISH
Prolog programs are constructed from a series of facts and rules. For example, you could assert that Smalltalk and Ruby are OOP languages by declaring ‘facts’ such as these (the predicate name oopLanguage is just an example):

oopLanguage(smalltalk).
oopLanguage(ruby).
To find a list of all known OOP languages you would just enter this query:

oopLanguage(L).
In Prolog, when an identifier begins with a capital letter, this indicates an ‘unbound’ variable which Prolog will try to match with known data. In this case, Prolog replies:

L = smalltalk
L = ruby

Now you can go on to define some rules. For example, let’s say that you want to write the rule that Reba likes only those OOP languages which Dolly does not like. This is the Prolog rule:

likes(reba, Language) :-
    oopLanguage(Language),
    \+ likes(dolly, Language).

Let’s assume that the program also contains this fact:

likes(dolly, ruby).

You can now enter this query:

likes(reba, L).

The only language returned will be:

L = smalltalk

This is just the tip of the iceberg with Prolog. The language gets really interesting once you start defining enormously complex sets of rules, each of which depends on other rules. The essential idea is that each rule should define some logical proposition. When you write a rule, you can concentrate on a tiny fragment of what might eventually become an amazingly complex set of interdependent propositions. The programmer (in principle) doesn’t have to worry about the ultimate complexity. Instead, as long as each tiny individual rule works, you can rely on the fact that the vast logical network of which they will ultimately form a part will also work. Making sense of that complexity is Prolog’s problem, not the programmer’s.

In principle, Prolog seemed to offer the potential to do ‘real’ AI (programming) of great sophistication. Some people even believed that Prolog would provide the natural path to creating a truly ‘thinking machine’.

Well, years went by and Prolog failed to live up to its promise. Part of the problem was that, while it was great at finding numerous solutions to a problem, it wasn’t so good at finding just one. A nasty little thing called the ‘cut’ (the ! character) was used to stop Prolog searching once a solution had been found. But the cut, in effect, breaks the logic and scars the beauty (and simplicity) of Prolog. Another problem was that Prolog interpreters were fairly slow. Prolog compilers were created to get around this limitation and Visual Prolog even introduced strict typing (which is not a part of standard Prolog). But in order to gain efficiency, the compiler sacrificed the metaprogramming (self-modifying) capabilities which many people consider fundamental to the language.

Visual Prolog has a good editor, a built-in debugger, design tools and compiler - but purists would say that it isn't 'real' Prolog.
In brief, it’s probably fair to say that Prolog’s greatest enthusiasts were simply unrealistic about what the language might achieve. For the time being, Prolog seems to be an interesting diversion in programming history which has not (so far) delivered upon its early promise.

Still, it was, and is, an amazingly ambitious language that did lots of really interesting things. Unlike most of today’s ‘new and better’ programming languages, it didn’t just recycle old ideas in slightly new ways. However, there was another programming language that was just as ambitious as Prolog but ultimately far more influential: Smalltalk. That will be the subject of my next post.

Try out Prolog

There are several implementations of Prolog available, both free and commercial. One of the most complete free editions is SWI-Prolog: In order to use this with an editor/IDE you should also install the SWI-Prolog Editor: Alternatively, you can write and run short SWI-Prolog programs online here: Visual Prolog has (by far!) the best IDE – complete with editor, debugger, visual designer and compiler. It is available in commercial and free editions. However, this is a non-standard version of Prolog:

Sunday, 11 August 2019

Why Are There No Big New Ideas in Programming?

Modern mainstream programming languages are all much of a muchness these days. Take some object orientation, add in some ‘dynamic typing’, maybe add on a fast compiler to give you the programming benefits of a ‘scripting language’ with the efficiency benefits of C… I keep reading the same sorts of claims made for all kinds of ‘new’ languages. Far from seeming at all new, they strike me as lots of old stuff mixed up together in different ways.

Where are all the truly new ideas now?

Of all the languages I’ve used in the last forty years or so, only three have struck me as having a ‘big vision’ – a profound belief in the value of their design; and that belief shapes the language itself from start to finish. None of those languages, however, is now widely used. They are: Modula-2, Prolog and Smalltalk.

The be-all and end-all of Modula-2 was its modularity. You put code inside well-defined units called ‘modules’ and once in there, that code cannot be accessed from outside the module unless it is very precisely imported and exported. If you think this sounds like the modules, units and mixins of most other languages, think again. Java, C#, Ruby, Object Pascal and many other languages are far less strict in their modularity. In fact, it has been my experience that most programmers have so little experience of modular programming that they often have great difficulty even trying to understand the concept and the benefits of strict modularity. That’s one reason why – even though Modula-2 itself may have failed to take the world by storm – I think it would be useful to most programmers, whatever languages they usually use, to have at least some experience of programming Modula-2 or its successor, Oberon.

But even more ‘visionary’ than Modula-2 are Prolog and Smalltalk. If you have never programmed in Prolog (and I suppose it is highly likely that you haven’t) I’ll explain why it was such an exciting and ambitious language in my next post.

Wednesday, 7 August 2019

Writing a Retro Text Adventure in Delphi

It’s no secret that I am a keen adventure gamer. I love playing them. I love programming them. The traditional type of text adventure (sometimes called ‘Interactive Fiction’) is a bit like a book in which the game-player is a character. You walk around the world entering human language commands to “Look at” objects, “Take” and “Drop” them, move around the world to North, South, East and West (and maybe Up and Down too). Unlike modern graphics games, an adventure game can realistically be coded from start to finish by a single programmer with no need to use complicated ‘game frameworks’ to help out.

While my Delphi adventure game has a graphical user interface it is, in essence, a text adventure just like the 70s and 80s originals
I wrote an adventure game called The Golden Wombat Of Destiny back in the ‘80s. Over the years that game has gained cult status. It’s even been turned into a bizarre musical film on YouTube. You can play it online here:

Programming a game is not a trivial task. If you are using an object oriented language, you need to create class hierarchies in order to create treasures, rooms, a map and a moveable player. You need good list-management to let the player take objects from rooms and add them to the player’s inventory. And you need robust file I/O routines to let you save and reload the entire ‘game state’.

If you really want to understand all the techniques of adventure game programming, I have an in-depth course that will guide you through every step of the way using the C# language in Visual Studio.

But you don’t have to use C#. I’ve written similar games in a variety of languages: Ruby, Java, ActionScript, Prolog and Smalltalk to name just a few. Recently one of the students of my C# game programming course asked for some help with writing a game in Object Pascal (the native language of Delphi, and also of Lazarus). OK, so here goes…

Delphi lets you design, compile and debug Object Pascal applications
If you haven’t already got a copy of Delphi, you can download one here:

This is not a tutorial on how to use Delphi. I am therefore assuming that you already know how to create a new VCL application, design a user interface and do basic coding. If you don’t, Embarcadero has some tutorials here:

If you prefer an online video-based tutorial with all the source code, you can get a special deal on my Delphi & Object Pascal Course by clicking this link

The Adventure Begins

First I designed a simple user interface with buttons and a text field to let me enter the name of an object that I want to take or drop. The main game output is displayed in a TRichEdit box which I’ve named DisplayBox.

Now, I want to create the basic objects for my game. I want a base Thing class which has a name and a description. All other classes in my game derive from Thing. Then I want a Room class to define a location and an Actor class for interactive characters – notably the player. I could put all these classes into a single code file. However, I prefer to put them into their own code files, so I’ve created the three files: ThingUnit.pas, RoomUnit.pas and ActorUnit.pas.

Here are the contents of those files:

unit ThingUnit;

interface

type
  Thing = class(TObject)
  private // hide data
    _name: shortstring;
    _description: shortstring;
  public
    constructor Create(aName, aDescription: shortstring);
    destructor Destroy; override;
    property Name: shortstring read _name write _name;
    property Description: shortstring read _description write _description;
  end;

implementation

constructor Thing.Create(aName, aDescription: shortstring);
begin
  inherited Create;
  _name := aName;
  _description := aDescription;
end;

destructor Thing.Destroy;
begin
  inherited Destroy;
end;

end.


unit ActorUnit;

interface

uses ThingUnit, RoomUnit;

type
  Actor = class(Thing)
  private
    _location: Room;
  public
    constructor Create(aName, aDescription: shortstring; aLocation: Room);
    destructor Destroy; override;
    property Location: Room read _location write _location;
  end;

implementation

constructor Actor.Create(aName, aDescription: shortstring; aLocation: Room);
begin
  inherited Create(aName, aDescription);
  _location := aLocation;
end;

destructor Actor.Destroy;
begin
  inherited Destroy;
end;

end.


unit RoomUnit;

interface

uses ThingUnit;

type
  Room = class(Thing)
  private
    _n, _s, _w, _e: integer;
  public
    constructor Create(aName, aDescription: shortstring;
      aNorth, aSouth, aWest, anEast: integer);
    destructor Destroy; override;
    property N: integer read _n write _n;
    property S: integer read _s write _s;
    property W: integer read _w write _w;
    property E: integer read _e write _e;
  end;

implementation

{ === ROOM === }
constructor Room.Create(aName, aDescription: shortstring;
  aNorth, aSouth, aWest, anEast: integer);
begin
  inherited Create(aName, aDescription);
  _n := aNorth;
  _s := aSouth;
  _w := aWest;
  _e := anEast;
end;

destructor Room.Destroy;
begin
  inherited Destroy;
end;

end.


And this is the code in the main file, gameform.pas. Note that the button event-handlers such as TMainForm.NorthBtnClick were created using the Delphi events panel prior to adding code to those methods:

unit gameform;

interface

uses
  Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants,
  System.Classes, Vcl.Graphics,
  Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls, Vcl.ComCtrls,
  Vcl.ExtCtrls,
  ThingUnit, RoomUnit, ActorUnit;

type
  TMainForm = class(TForm)
    DisplayBox: TRichEdit;
    NorthBtn: TButton;
    SouthBtn: TButton;
    WestBtn: TButton;
    EastBtn: TButton;
    LookBtn: TButton;
    CheckObBtn: TButton;
    TestSaveBtn: TButton;
    TestLoadBtn: TButton;
    Panel1: TPanel;
    DropBtn: TButton;
    TakeBtn: TButton;
    inputEdit: TEdit;
    InvBtn: TButton;
    procedure FormCreate(Sender: TObject);
    procedure LookBtnClick(Sender: TObject);
    procedure SouthBtnClick(Sender: TObject);
    procedure NorthBtnClick(Sender: TObject);
    procedure WestBtnClick(Sender: TObject);
    procedure EastBtnClick(Sender: TObject);
  private
    { Private declarations }
    procedure CreateMap;
    procedure Display(msg: string);
    procedure MovePlayer(newpos: Integer);
  public
    { Public declarations }
  end;

var
  MainForm: TMainForm;
  room1, room2, room3, room4: Room;
  map: array [0 .. 3] of Room;
  player: Actor;

implementation

{$R *.dfm}

procedure TMainForm.Display(msg: string);
begin
  DisplayBox.Lines.Add(msg);
end;

procedure TMainForm.CreateMap;
begin
  room1 := Room.Create('Troll Room', 'a dank, dark room that smells of troll',
    -1, 2, -1, 1);
  room2 := Room.Create('Forest',
    'a light, airy forest shimmering with sunlight', -1, -1, 0, -1);
  room3 := Room.Create('Cave',
    'a vast cave with walls covered by luminous moss', 0, -1, -1, 3);
  room4 := Room.Create('Dungeon',
    'a gloomy dungeon. Rats scurry across its floor', -1, -1, 2, -1);
  map[0] := room1;
  map[1] := room2;
  map[2] := room3;
  map[3] := room4;
  player := Actor.Create('You', 'The Player', room1);
end;

procedure TMainForm.FormCreate(Sender: TObject);
begin
  // build the rooms and the player when the form loads
  CreateMap;
end;

procedure TMainForm.LookBtnClick(Sender: TObject);
begin
  Display('You are in ' + player.Location.Name);
  Display('It is ' + player.Location.Description);
end;

procedure TMainForm.MovePlayer(newpos: Integer);
begin
  if (newpos = -1) then
    Display('There is no exit in that direction')
  else
  begin
    player.Location := map[newpos];
    Display('You are now in the ' + player.Location.Name);
  end;
end;

procedure TMainForm.NorthBtnClick(Sender: TObject);
begin
  MovePlayer(player.Location.N);
end;

procedure TMainForm.SouthBtnClick(Sender: TObject);
begin
  MovePlayer(player.Location.S);
end;

procedure TMainForm.WestBtnClick(Sender: TObject);
begin
  MovePlayer(player.Location.W);
end;

procedure TMainForm.EastBtnClick(Sender: TObject);
begin
  MovePlayer(player.Location.E);
end;

end.
So this is already a good basis for the development of an adventure game in Delphi. I have a map (an array) of rooms and a player (an Actor object) capable of moving around. There’s still much to do, however – for example, I need some way of taking and dropping objects and of saving and restoring a game. I’ll look at those problems in a future article.

If you are seriously interested in programming text adventure games or learning Delphi, here are special deals on two of my programming courses.

Learn To Program an Adventure Game In C#

You will learn to…

  • Write a retro-style adventure game like ‘Zork’ or ‘Colossal Cave’
  • Master object orientation by creating hierarchies of treasure objects
  • Create rooms and maps using .NET collections, arrays and Dictionaries
  • Create objects with overloaded and overridden methods
  • Serialize networks of data to save and restore games
  • Write modular code using classes, partial classes and subclasses
  • Program user interaction with a ‘natural language’ interface
  • Plus: encapsulation, inheritance, constructors, enums, properties, hidden methods and much more…

Learn To Program Delphi

  • 40+ lectures, over 6 hours of video instruction teaching Object Oriented programming with Pascal
  • Downloadable source code 
  • A 124-page eBook, The Little Book Of Pascal, explains all the topics in depth

Saturday, 3 August 2019

The Terrible Visual Studio 2019 New Project Dialog

Gahhhh! Why do Microsoft make changes that nobody asks for and nobody wants? A couple of years ago they suddenly put all the Visual Studio menus in capital letters (they undid that change after mass protests). Now in Visual Studio 2019, they've replaced a perfectly neat and functional New Project dialog with a sprawling, disorderly mess. I hope, oh how I hope, that this dialog will soon go the way of the capital-letter menus. In the meantime, if you can't work your way around it, I've just written this short guide on the Bitwise Books web site:

Monday, 29 July 2019

Value, Reference and Out Parameters in C# Programming

Confused by all the parameter types in C#? Here’s a quick guide to help you sort out the differences between value, reference and out parameters. This is an extract from my book, The Little Book Of C#.

By default, when you pass variables to functions or methods these are passed as ‘copies’. That is, their values are passed as arguments and these values are assigned to the corresponding parameters declared by the function. Any changes made within the method will affect only the copies (the parameters) within the scope of the method. The original variables that were passed as arguments (and which were declared outside the method) retain their original values.

Sometimes, however, you may in fact want any changes that are made to parameters inside a method to change the matching variables (the arguments) in the code that called the method. In order to do that, you can pass variables ‘by reference’. When variables are passed by reference, the original variables (or, to be more accurate, the references to the location of those variables in your computer’s memory) are passed to the function. So any changes made to the parameter values inside the function will also change the variables that were passed as arguments when the function was called.

To pass arguments by reference, both the parameters defined by the function and the arguments passed to the function must be preceded by the keyword ref. The following examples should clarify the difference between ‘by value’ and ‘by reference’ arguments. In each case, I assume that two int variables have been declared like this:

int firstnumber;
int secondnumber;
firstnumber = 10;
secondnumber = 20;

Example 1: By Value Parameters

private void ByValue(int num1, int num2) {
    num1 = 0;
    num2 = 1;
}

This method might be called like this:

ByValue(firstnumber, secondnumber);

Remember that firstnumber had the initial value of 10, and secondnumber had the initial value of 20. Only the copies (the values of the parameters, num1 and num2) were changed in the ByValue() method. So, after I call that method, the values of the two variables that I passed as arguments are unchanged:

firstnumber now has the value 10.
secondnumber now has the value 20.

Example 2: By Reference Parameters

private void ByReference(ref int num1, ref int num2) {
    num1 = 0;
    num2 = 1;
}

This method might be called like this:

ByReference(ref firstnumber, ref secondnumber);

Once again, firstnumber has the initial value of 10, and secondnumber has the initial value of 20. But these are now ref parameters, so the parameters num1 and num2 ‘refer’ to the original variables. When changes are made to the parameters, the original variables are also changed:

firstnumber now has the value 0.
secondnumber now has the value 1.

You may also use out parameters, which must be preceded by the out keyword instead of the ref keyword.

Example 3: out Parameters

private void OutParams(out int num1, out int num2) {
    num1 = 0;
    num2 = 1;
}

This method might be called like this:

int firstnumber;
int secondnumber;
OutParams(out firstnumber, out secondnumber);

In this case, as with ref parameters, the values of the variables that were passed as arguments are changed when the values of the parameters are changed:

firstnumber now has the value 0.
secondnumber now has the value 1.

At first sight, out parameters may seem similar to ref parameters. However, it is not obligatory to assign a value to a variable passed as an out argument before you pass it to a method. It is obligatory to assign a value to a variable passed as a ref argument.

You can see this in the example shown above. I do not initialize the values of firstnumber and secondnumber before calling the OutParams() method. That would not be permitted if I were using ordinary (by value) or ref (by reference) parameters. On the other hand, it is obligatory to assign a value to an out parameter within the method that declares that parameter. This is not obligatory with a ref argument.

If you need a complete guide to C# programming, my book, The Little Book Of C# is available on Amazon (US), Amazon (UK) and worldwide.

Saturday, 27 July 2019

Learn C# (C-Sharp) In A Day

I know, I know. Unless you are a super-fast learner, you really won't be able to learn very much C# in one day. But if you follow the examples in my new book, you will definitely be able to start writing programs in your first day of study. This is the latest in my series of Little Books of programming. My aim is to keep them short, focused and practical. I know you can get a ton of information online so there's no point padding out these books with class library and syntax references. Instead, each book aims to get you writing - and understanding - programs right away...
This book will teach you to program the C# language from the ground up. You will learn about Object Orientation, classes, methods, generic lists and dictionaries, file operations and exception-handling. Whether you are a new programmer or an experienced programmer who wants to learn the C# language quickly and easily, this is the book for you!

This book explains...

  • Fundamentals of C#
  • Object Orientation
  • Static Classes and Methods
  • Visual Studio & .NET
  • Variables, Types, Constants
  • Operators & Tests
  • Methods & Arguments
  • Constructors
  • Access Modifiers
  • Arrays & Strings
  • Loops & Conditions
  • Files & Directories
  • structs & enums
  • Overloaded and overridden methods
  • Exception-handling
  • Lists & Generics
  • ...and much more

Buy on Amazon (US), Amazon (UK) and worldwide.

Tuesday, 16 July 2019

Is an Array in C a Pointer?

In some programming languages, arrays are high-level ‘objects’ and the programmer can think of them simply as ordered lists. In C, you have to deal with arrays ‘as they really are’ because C doesn’t try to hide what is going on ‘close to the metal’. One of the common misconceptions (which I’ve read so many times in books and on web sites that I almost started to believe it was true!) is that array ‘variables’ are ‘special types of pointer’. Well, they aren’t. Not only that, array identifiers aren’t even variables.

Let me explain. Let’s assume you’ve declared an array of chars (C’s version of a string) called str1, and a pointer to char, str2:

    char str1[] = "Hello";
    char *str2 = "Goodbye";

An array and an address (in C) are equivalent. So str1 is the address at which the characters of the string "Hello" are stored. But str2 is a pointer whose value is the address of the string "Goodbye".

In fact, str1 isn’t a variable because its value (the address of an array) cannot be changed. The contents of the array – its individual elements – can be changed. The address of the array, however, cannot. That is why I prefer to call str1 an array ‘identifier’, though many people would call it, somewhat inaccurately, an ‘array variable’.

But, wait a moment. If the value of an array identifier such as char str1[] and the value of a pointer variable such as char *str2 are both addresses, aren’t str1 and str2 both pointers?

No, they are not.

It is an essential feature of a variable that its value can be changed. The value of an array identifier cannot be changed. What’s more, a pointer variable occupies one address; its value can be set to point to different addresses. But an array identifier and its address are one and the same thing. How can that be?

You have to understand what happens during compilation. When your program is compiled, the array identifier, str1, is replaced by the address of the array. That address cannot be changed when your program is run. But str2 is a pointer variable with its own address. Its value (the address of an array) can change if new addresses are assigned to the pointer variable.

If you need to know more about the mysteries of pointers, arrays and addresses in C, I have a book that explains everything (with all the source code examples for you to download). It’s called The Little Book Of Pointers and it’s available as a paperback or eBook from Amazon (US), from Amazon (UK) and other Amazon stores worldwide.

Friday, 12 July 2019

MAGIX PopUp Ads – how to get rid of them

They are like a virus. They infect your computer and make a damn’ nuisance of themselves by popping up adverts, special offers, upgrade deals and, well, more adverts… Upgrade MovieStudio, Buy Sounds for ACID, Download Stuff for VEGAS, Install Junk I really don’t want for MAGIX Music Maker. The damned adverts pop up at the bottom of the screen almost every time I boot up the computer. If there was ever a way to make the customer hate your products, this is it!

Does anyone really want to see these ads popping up on their PC every day???
Actually, I rather like many MAGIX products. But their persistent, irritating, spammy popup adverts are doing their best to make me change my opinion.

Another day, another ad!!!
But how do I get rid of them? I couldn’t see an option anywhere to “Disable our Spamware”. I ended up having to Google for help. I eventually found that I have to uninstall a piece of junkware called MAGIX Connect. Go to Settings, Apps, MAGIX Connect, Uninstall.

Oh joy! Gone at last!
Hurrah! Now the blasted adverts are gone. What I find truly mysterious about this is that MAGIX can’t see the obvious truth that, far from promoting its software, these nasty, trashy, annoying popups are about the worst sort of bad publicity they could possibly have. As I said, their software is generally good. But as for their Spamware…!!!!

Sunday, 7 July 2019

Learn C Programming, Pointers and Recursion

I’m pleased to announce the launch of Bitwise Books! We’ve been working away at this for most of the last year. Our aim is to publish a range of tightly-focused programming books that explain just what you really need to know without any padding.

The series is called The Little Book Of… and our first three titles are:

The Little Book Of C Programming

The Little Book Of Pointers

The Little Book Of Recursion

In addition, we have created a series of free programming guides called A Really Simple Guide To… These include A Really Simple Guide To Object Orientation, C IDEs and Pointers. You can get the guides delivered straight to your inbox (no purchase necessary) from the Bitwise Books site.

We’ll be announcing more Really Simple Guides and Little Books Of (various programming topics) soon.

Monday, 1 July 2019

Free File Sync and Backup

I live in dread that my PC will suddenly cease to function and I’ll lose all my work. In spite of taking daily incremental backups (I use Macrium Reflect for those), what I would really like is to have complete, uncompressed, unarchived, ready-to-run copies of all my data files on a second PC. So if PC Number One goes wrong, I can just switch over to PC Number Two and carry on working. As I have a lot of data – video files for my courses, document files for my books, plus images, program code and all sorts of other stuff – I really, really don’t want to lose anything.

So recently I’ve been using a rather fine file-copying program called FreeFileSync. This lets you synchronize copies of folders and sub-folders. That means that you can, in principle, have two complete copies of your data and let FreeFileSync work out which are the most recent copies and then update any out-of-date files by copying the newer versions over them. In that way you could work on the same data on two PCs and let FreeFileSync synchronize them.

With FreeFileSync you can create named backup sets and synchronize groups of subfolders across two computers.
In fact, my requirements are a bit simpler. I want one ‘working set’ of data and one copied set of data. So instead of synchronizing in ‘two directions’, both to and from my two PCs, I just want it to keep a ‘backup copy’ on PC Two updated with any changes I make to the files on PC One.

It does a pretty good job of this. My initial backup (340Gb of data over 131,549 files) took over ten and a half hours to complete. Thereafter, however, it only copies any changed files. To do that it does a file comparison which takes just a few minutes and a file copy which again takes seconds or minutes. If you need to maintain multiple copies of your files, I recommend that you try out FreeFileSync. My main criticism, so far, is that it doesn’t have a built-in scheduler. So if you need to do automatic timed backups, you are going to have to do a bit of extra work using the Windows Task Scheduler.

My initial backup was huge as this chart (which shows backup progress) proves. Subsequent backups are much smaller and faster.
Even so, this is a useful tool to have. After all, disaster has a habit of striking just when you least expect it. And you really can’t have too many backups!

Tuesday, 21 May 2019

In Search Of The Perfect Keyboard

Oh, how I dream of the old IBM keyboard!

An original 1981 IBM keyboard
I began using PCs way back in the early ’80s. Technology has advanced greatly since then. But the one thing that was better then than now was the keyboard. The IBM keyboard set the standard. A good, solid click and keys that never faded. The keyboard of my old Olivetti M24 was excellent too.

Most keyboards these days are flimsy things. And worst of all, the letters on the keys keep fading away. They do for me anyway. Maybe that’s because I am a heavy keyboard user – I write or program for many hours every day. Or maybe it’s because (as I’ve heard some people claim) I am one of those people whose skin acidity happens to be detrimental to keyboard keys.

Anyway, recently I decided the time had come to replace an old keyboard (a VicTop which cost £29 in 2017 – this model is no longer available but various other Chinese-made ‘mechanical’ keyboards appear to be very similar). Although it was a cheap keyboard, it is remarkably solid, has a lovely ‘clicky’ feel and in the couple of years I’ve had it, has been absolutely reliable. But some of its keys were wearing so badly that I could no longer see which was which.

Ideally I wanted a keyboard with ‘doubleshot’ keys. A doubleshot key is one that is constructed from two layers of plastic. One layer contains the raised shape of a character such as ‘A’. The other layer is, in effect, poured on top of this to form the rest of the key surface. So if ‘A’ was moulded in black plastic and this was covered with white plastic, you end up with a black key with the letter ‘A’ running through it in white (like the words through a stick of British seaside rock).

Once upon a time, every half-way decent keyboard had doubleshot keys. These days most keyboards just have the letters  ‘painted’ or ‘stuck’ onto the key, which is why they wear off so easily. Some slightly more resilient keys use ‘laser etching’ which means that the characters are etched into a small groove. These won’t actually wear off completely but they can fade.

Filco Majestouch Ninja
It turns out that keyboards with doubleshot keys are now as rare as hen’s teeth. So eventually I settled for an alternative: a keyboard with no letters at all on the top surfaces (so they can’t wear off!) but instead with letters on the front surface – the vertical surface that faces you as you sit at the keyboard. This was the Filco Majestouch Ninja. It’s quite an expensive keyboard (£130 inc. VAT) but it has a nice solid click (look for Cherry Blue keys if a good click is what you are after) and, while it does take a while to get used to the blank key tops, after a couple of days I barely noticed the difference.

VicTop keyboard with its original keys
This left me with my trusty old VicTop keyboard which, in spite of its faded keys, I was reluctant to throw away. I decided to have a go at refurbishing it by replacing the faded keys with doubleshot ones. Bizarrely, even though it’s hard to find a keyboard with doubleshot keys already fitted, replacement doubleshot keys are easy to find. I bought some keys in the good old Olivetti colours (at about £38 inc VAT these cost more than the keyboard itself but much less than a new Ninja keyboard).

VicTop keyboard with replacement Olivetti-style doubleshot keys
It took me about an hour to replace the old keys. And my refurbished keyboard is now pretty close to ideal. I don’t know whether my new Ninja keyboard will outlast my cheap refurbished keyboard. What I can say is that both keyboards are now as near to perfect as I could hope for.

Though I still dream of an old IBM keyboard…

Friday, 17 May 2019

RIP Ken Musgrave - Long Live Mojoworld!

Dr. Forest Kenton Musgrave (aka 'Doc Mojo') was one of the great figures in the evolution of computer digital art, particularly landscape creation through fractal geometry. I still have a (now rare) boxed copy of his wonderful Mojoworld software, which lets you generate hugely complex fractal planets and then send a virtual camera around them looking for interesting views to render as still images or animations. Sadly, Ken Musgrave died (far too young!) last December. Way back in 2001, I interviewed him. When Digital Art Live Magazine asked if they could reprint that interview as a tribute, I was very pleased to agree.

You can read the interview online here:

Or go to the Digital Art Live web site here:

Wednesday, 17 April 2019

Zork Source Code Online - the Holy Grail of Text Adventures!

Anyone who knows me (or reads my articles or follows my programming courses) can't fail to have noticed that I am passionate about text adventures. In fact, it was the old Infocom text adventures such as Zork, Starcross and Trinity that first got me interested in programming, long, long ago. So interested, in fact, that I wrote my own adventure, The Golden Wombat Of Destiny.

The one thing I never expected to see was the source code of the classic Infocom games. Well, today, that all changed. Because the code is now online. I don't honestly know if this is in the public domain or not. I think that theoretically Activision 'owns' the code but clearly the company is doing nothing at all with it. That being so, this great treasure of programming really should be available to inspire and instruct coders old and new. You may need to be dedicated, however. This is not written in any mainstream language such as Java or C. It's written in ZIL (and MDL) which are variants of LISP.

More information here:

Tuesday, 12 March 2019

Able2Extract Professional 14 Review

Able2Extract Professional 14 $149.95

Able2Extract is a PDF editing and conversion tool. It can convert files between PDF and a variety of other formats including Word documents, Excel spreadsheets, text, HTML and various image formats. Conversion can be done by loading documents one by one or in batch mode on a selected directory.
PDF on the left, Word on the right – which is the original? Here the PDF eBook is the source document and I have used Able2Extract to create a Word document from it. If you click the image above to enlarge it you will see that the fully-editable Word document is very faithful to the PDF original, including column formatting and images. There are a few minor differences to text styles however.
Editing features include the ability to add and delete text or graphics, redacting (blacking out) selected text, and adding annotations. There is also built-in form support that lets you create editable forms with ‘fill-in’ fields. For an overview of the principal features see my review of a previous release of Able2Extract Professional 12. The main new features in this release are electronic and digital signing and improved “AI-powered” PDF to Excel conversion.

Here I am creating a new digital signature just by entering my name on the keyboard and letting Able2Extract generate a digital (cryptographic) signature image.

When I need to add my signature to a PDF form, I just drag the previously-generated digital image into place.
Electronic document signing lets you add a signature by drawing it on screen, typing it at the keyboard or adding a pre-prepared image.  Even though an electronic signature may not match your hand-written signature, it uses cryptographic techniques so that it can be verified for authenticity.

The other big new feature in this release is a set of so-called “AI-powered templates” for converting to and from Excel spreadsheets. According to Investintech, Able2Extract 14 has templates that “can be trained to automatically locate and convert PDF tables that match the table structure stored in a template, making tabular data extraction accurate and easy no matter how long the source document is or the position of tables in a PDF document.”

You can also load up PDF documents and auto-extract any tables to be converted to Excel. When this option is selected, all other text and image content of the PDF file is ignored and only the tabular data is extracted. Excel support goes beyond PDF conversion. Able2Extract lets you load up any of its supported formats (for example a Word document) and convert it direct to Excel.
Here I have a long PDF document that contains several tables. I want to extract the tables into an Excel spreadsheet. The conversion option lets me extract the tables alone, ignoring the rest of the text.

This is the end result. My tables have been extracted into an Excel spreadsheet, ready for me to edit them.
A common format for conversion is Microsoft Word. I tried out Able2Extract with some quite complicated eBooks with multiple columns, styles, box-outs, graphics, tables and more. I have to say the conversion between Word and PDF was remarkably good. In my experience, there are occasionally some changes to font styles and layout that crop up during conversion. But these are generally quite minor. On the whole, the conversion is fast and fairly accurate.

Overall, this is a very capable PDF conversion program. If you regularly need to convert documents to and from the supported formats or if you need to edit and sign PDF forms, this is about as easy as it gets.

Sunday, 10 March 2019

DxO PhotoLab 2 review

DxO PhotoLab 2 (Elite Edition) £159 (currently on offer at £119.99)
DxO PhotoLab 2 (Essential Edition) £99 (currently on offer at £79.99)

DxO PhotoLab 2 is an image processing package aimed specifically at photographers. If you’ve taken a photograph and the colours, sharpness or exposure are not quite what you want, this software will help you enhance the image to achieve a more satisfactory, pleasing or dramatic effect.

Here I am using DxO PhotoLab 2 to process the original image (seen in the top half of the screen) by setting parameters to improve the contrast, colour saturation and brightness. The processed image is shown in the bottom half of the screen
There is more to creating a great photo than just clicking the button on your camera and saving the results onto disk. Once you have the image, there are all kinds of ways you can process it in order to make it really stand out. And that’s where DxO PhotoLab comes in. It is an image processing program which specialises in enhancing and correcting images, with some features specifically tailored for RAW (unprocessed) images.

RAW images contain more information than images saved in formats such as JPG or PNG. Some of that information is lost when an image is processed into one of those formats. DxO PhotoLab, however, is able to use the information stored in a RAW file in order to optimise the image rendering – for example, by correcting optical flaws, or correcting colour balance and sharpness. While RAW processing is its strength, it can also work with other image formats such as JPEG, though the range of optimisations is more limited with those formats.

The software has built-in support for a number of cameras from major manufacturers such as Canon, Panasonic, Nikon, Olympus, Apple, Sony and others. This helps the software to automate the processing of RAW images from specific makes and models of camera. The company claims that it has analysed no fewer than 42,000 camera/lens combinations to help to automate the enhancement of camera-specific images. You can find the full list of supported cameras here:

If grainy pictures are a problem, its denoising tools could be a big advantage. Grain is likely to be most problematic in shots taken in low light with high ISO. So if night-time shooting is important to you, denoising could help to add sharpness to your photos. Some of the more advanced features are only available in the Elite Edition. The cheaper Essential edition has the core functionality but lacks dedicated camera support, enhanced denoise and a few other image processing tools. I have been using the Elite edition for this review. You can check the feature lists for both editions here:

Here I have selected an area (the area above the diagonal line which I have simply drawn and rotated on the image) and I am making local edits by dragging the ‘bar chart’ elements to alter colour, contrast, grain, shadows and black density.
PhotoLab 2 includes an automatic repair tool to remove unwanted elements – everything from a bit of dust to a bird in the sky – with a selective retouching tool to mask and replace areas of the image. The main new feature in the latest release of PhotoLab is a tool that lets you make ‘local adjustments’ – that is, you can select areas of a photograph and apply edits to them. You can, for example, select the sky and then make in-place changes to its colour and contrast.

DxO PhotoLab 2 also has a PhotoLibrary tool for organising your pictures. This provides the ability to search through directories of images in order to find those that match specific criteria such as date, ISO setting or focal length. This is a feature that is broadly similar (though not yet as complete) to the tools provided by Lightroom Classic.

If you don’t want to go to the bother of tweaking individual parameters, you can simply apply one of the ‘presets’, which applies a range of effects and enhancements at the click of a mouse button.
PhotoLab 2 comes with a range of ready-to-use filters and ‘image fixes’ which can be simply applied to photos. However, to get the most from the software you may need to tweak individual parameters, which are available in a set of collapsible panels that let you make adjustments to lighting, colour and geometry. This can be quite a detailed and complex process. This is great for professional users who are prepared to put in the time and effort to learn to use all the tools and options. For amateur or occasional users, however, this may seem intimidatingly complicated. Indeed, for those users a simpler-to-use (and cheaper) program such as Smart Photo Editor might be more appropriate. For a guide to the principal features of DxO PhotoLab 2 see here:

The bottom line: who needs this software? If you want to apply effects and corrections to your photographs, a general-purpose image editing program such as Adobe PhotoShop may be your first choice. Alternatively, if you need a cheaper solution, a program such as Affinity Photo or CyberLink PhotoDirector might fit the bill. For image processing features aimed specifically at photographic enhancement, the obvious competitors to PhotoLab are Adobe LightRoom and Phase One’s Capture One.

DxO PhotoLab is aimed at professional photographers who need to make adjustments at the nitty-gritty level of detail. For those users, it provides a solid range of features with dedicated camera support at what, in this range of software, is quite a reasonable price. In short, DxO PhotoLab 2 is a good-value image processing program for the more serious photographer.