It is often said that if you keep your clothes long enough they will come back into fashion. That may be true, but the same applies to computer topology.
In the early days of commercial computers, the company’s mainframe took pride of place: its own special room, air conditioning and a department dedicated to looking after its every need. Back then each job was loaded from paper tape or punch cards. A major step forward came when terminals allowed remote jobs to be run – simple teletypes at first, connected by painfully slow lines (110 baud), followed by VDUs and faster data connections.
As technology advanced, the simple VDU gained intelligence, and in 1981 the IBM PC was launched. Very quickly it was adopted to provide local computing in the office. Attaching the PC to the mainframe gave the best of both worlds. As the performance of the PC increased and the range of programs expanded, many smaller tasks could be accomplished locally; the “mainframe” served largely as the file storage and database engine.
Companies soon realised that a server (a large PC) situated in the office could provide all that was necessary at a fraction of the price. As all those applications grew (all those jobs you didn’t know you needed to do but now could), the need for an IT department reappeared. At this point, with the enhancements to the internet, remote storage became possible and was heavily marketed as the way forward. Instead of a mainframe, arrays of rack-mount servers filled the equipment rooms of the online suppliers.
The next step was to run the applications on those remote servers and use a much lower-powered workstation. Cloud computing was born – or, to put it another way, large central computing capability with remote terminals. Back to where we started: full circle.
It would be a brave (or foolish) person who would speculate where we will be in another forty years. With the power of the modern smartphone and today’s communication networks, maybe “beam me up, Scotty” will not prove too far-fetched.