Lecture 3, Operating Systems – Process Management

In our previous lecture we looked at how information is stored in a computer system. This week we are going to finish off our look at operating systems by investigating how information is managed while it is being processed. This is an area of much interest at the moment, as machines with multiple cores and multiple CPUs are increasingly common. That shift is introducing a whole range of new problems and complexities that we haven't fully solved yet, and the next few years should produce some very exciting developments.

Students have asked why we don't cover multi-core process management in this subject, seeing as it is becoming the norm in today's hardware. The reason is that process management for a single CPU is complex enough, and you must fully understand it before you can appreciate the difficulties of effectively scheduling processes across several cores.
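
To make the single-CPU case concrete, here is a minimal sketch of round-robin scheduling, one of the classic time-slicing policies an OS can use on one processor. The process names and burst times are made up purely for illustration:

```python
# A minimal round-robin scheduler simulation for a single CPU.
# Process names and burst times below are hypothetical.
from collections import deque

def round_robin(processes, quantum):
    """Simulate round-robin scheduling.

    processes: list of (name, burst_time) pairs
    quantum:   time slice each process gets per turn
    Returns a dict of completion times.
    """
    queue = deque(processes)
    clock = 0
    finish_times = {}
    while queue:
        name, remaining = queue.popleft()
        slice_used = min(quantum, remaining)
        clock += slice_used
        remaining -= slice_used
        if remaining > 0:
            queue.append((name, remaining))  # pre-empted: back of the queue
        else:
            finish_times[name] = clock       # process has finished
    return finish_times

print(round_robin([("A", 5), ("B", 3), ("C", 8)], quantum=2))
```

Even this toy version shows the trade-off a scheduler juggles: a smaller quantum makes the system feel more responsive, but each pre-emption is overhead.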

To make things more interesting, new CPUs such as the Cell are toying with the idea of specialised cores. In today's multi-core processors every core is identical and general-purpose, yet in areas such as graphics we have seen that a specialised chip can provide dramatic performance benefits. If AMD or Intel (or another company) can create the right mix of specialised cores, and provide an operating system that can manage the flow of processes through them efficiently, it could lead to some very exciting improvements.

If this is going to happen, I think OS X and/or Linux will be the first to take advantage of it. Apple has consistently proven that it is not scared to dive into new technologies, and it is in a good position to do so because it controls both the hardware and the software on its platform (one example is its adoption of EFI to replace the legacy BIOS, something Microsoft is still coming to grips with). Indeed, Apple is already developing tools that could take advantage of this in Grand Central and OpenCL. Linux, due to its open nature and strong following of tinkerers, is also very quick to adopt new technologies, as can be seen in how quickly it moved to 64-bit processors.
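
The core idea behind Grand Central is that you hand independent tasks to the system and let it spread them across whatever cores are available. Here is a rough analogy in Python (not Apple's actual API); the work function is a hypothetical stand-in for a CPU-bound task:

```python
# A rough Python analogy of the Grand Central idea: submit
# independent tasks to a pool and let the runtime spread them
# across available cores. This is NOT Apple's GCD API.
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # Hypothetical stand-in for a CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:   # one worker per core by default
        results = list(pool.map(work, [10**5] * 8))
    print(results[:2])
```

The appeal of this model is that the programmer describes tasks, not threads, and the runtime decides how to map them onto however many cores the machine happens to have.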

The other item we touched on is virtual memory. Despite modern systems being equipped with multiple gigabytes of RAM, it is still an important part of a modern system. There will always be debate about whether that is true, and about the ideal amount of virtual memory to allocate, but I really think this is silly. We now have hard drives measured in terabytes, so arguing over the wasted space is trivial. Give yourself roughly twice as much virtual memory as you have physical memory and be done with it. Today's operating systems are quite good at managing virtual memory, so there is no real advantage to going without it.
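
As a quick sketch of that rule of thumb, the snippet below reads the machine's physical RAM size and doubles it. The SC_* names are POSIX sysconf parameters and should work on Linux and most Unix-like systems:

```python
# A minimal sketch of the "roughly twice physical RAM" rule of
# thumb for sizing virtual memory (swap). The SC_* sysconf names
# are POSIX and available on Linux and most Unix-like systems.
import os

page_size = os.sysconf("SC_PAGE_SIZE")     # bytes per memory page
phys_pages = os.sysconf("SC_PHYS_PAGES")   # pages of physical RAM
ram_gib = page_size * phys_pages / 2**30

print(f"Physical RAM:                      {ram_gib:.1f} GiB")
print(f"Suggested virtual memory (approx): {2 * ram_gib:.1f} GiB")
```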

In previous years the focus has been on speed: how fast is your CPU compared to the next? We have gotten over that now; there are no major benefits left in simply increasing the clock rate. The future is about finding better ways to shuffle data around the system, and that is going to provide some very noticeable improvements. We are making some important shifts in the way we think about how a computer manages data, moving up to a new level of abstraction, and it is going to be a very exciting time indeed.
