Enjoy the last mins of 2003 and have a great 2004!
Cheers!
Wednesday, December 31, 2003
Friday, December 26, 2003
linux boot splash screen
this is a pretty neat thing a classmate of mine showed me. linux means you can customize every part, right from the background desktop color to the kernel. i'll show you how to change the boot screen background image. i am assuming you are using grub as your boot loader and not lilo. you'll have to log in as root to perform these changes.
step 1. making an image file
open any image editor (i used gimp) and make the image you would like to show on boot. remember not to make a "heavy" image as during boot time the graphics drivers are not loaded and you might just get a dark screen. grub is picky here: the splash should be a 640x480 image with at most 14 colors. save the image file as *.xpm.
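if you have imagemagick, a one-liner like this should get the size and color count right (just a sketch; input.png is whatever source image you have):
$ convert input.png -resize '640x480!' -colors 14 splash.xpm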
step 2. gzipping it
$ gzip -v *.xpm
this command will create a *.xpm.gz file. move this file to the /boot/grub/ directory. you will find a splash.xpm.gz file here. this is the current splash screen and we are going to replace it.
step 3. changing conf files.
in /boot/grub open the grub.conf file. this is the grub configuration file. i suggest creating a backup just in case. the options are
default=1 or 0 - this is used to select which os gets default boot priority. time to change to linux. the order is based on the order of the title definitions below.
timeout=10 - this is the time (in seconds) after which the default os boots. change to a small value if you hardly use the other os .. read windows.
splashimage=...
this is the line we are actually interested in. comment out the old line by preceding it with a hash (#) and add the path to your *.xpm.gz here. see the example below.
title can be changed to suit your mood. but i have no idea of the other lines. those are grub commands.
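for example, after the edit the relevant part of grub.conf could look like this (just a sketch - the (hd0,0) device and the mysplash name depend on your setup; on distros where /boot is a separate partition the path is relative to that partition):
default=0
timeout=10
#splashimage=(hd0,0)/grub/splash.xpm.gz
splashimage=(hd0,0)/grub/mysplash.xpm.gz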
Assemblies and Metadata 2
I made an error regarding modules. I said they only contain metadata and IL code and NO manifest, but it seems that they do contain a manifest. But I'm sure they can't be executed by the CLR unless referenced by some assembly.
Anyway, I had included code last time, but no real representation of the manifest and metadata. There is a utility application that ships with the Framework SDK called ildasm which stands for "Intermediate Language Disassembler". I suppose the name is a bit misleading since the app allows you to disassemble the manifest and metadata in addition to the IL.
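If you want to poke around yourself, just point it at an assembly from the SDK command prompt and it opens its treeview (there's also an option to dump everything to a text file instead):
ildasm Album.dll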
So here it is...
// Album.cs
namespace Albums
{
    public interface IAlbum
    {
        // Properties
        string Name
        {
            get; // Read only
        }
        int NumberOfSongs
        {
            get; // Read only
        }
        // Methods
        string BestSong();
    }
}
Here's Album.dll's metadata...
In the metadata treeview you will notice the "Albums" namespace at the top. Under that is the one interface "IAlbum" we defined. Under that are the properties and methods the interface defines.
And here's its manifest...
The manifest lists all the assemblies it references starting with the external ones. "mscorlib"
is the System assembly that defines all the basic types that ALL .NET applications require and is implicitly referenced. I guess this is similar to java.lang.
It also lists the public key token and the version number of the assembly. "mscorlib" is a shared assembly in that ALL .NET apps refer to this ONE assembly. All shared assemblies require a public key token to uniquely identify them along with the version number. Next comes this assembly - "Album". Notice no "extern" keyword. Since this is NOT a shared assembly, there is no public key token associated with it. There is a hash algorithm and a version number (which we did not define).
All the other stuff is used by the CLR at runtime.
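In text form, the top of Album.dll's manifest looks roughly like this (hand-written from memory, so treat the exact version and hash values as placeholders - the mscorlib version shown is the 1.1 Framework's):
.assembly extern mscorlib
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
  .ver 1:0:5000:0
}
.assembly Album
{
  .hash algorithm 0x00008004
  .ver 0:0:0:0
}
.module Album.dll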
// DM.cs
namespace Albums
{
    public class DM : IAlbum
    {
        // Fields
        private string name;
        private int numberOfSongs;
        // Properties
        public string Name
        {
            get
            {
                return this.name;
            }
        }
        public int NumberOfSongs
        {
            get
            {
                return this.numberOfSongs;
            }
        }
        // Methods
        public DM()
        {
            this.name = "Definitely Maybe";
            this.numberOfSongs = 11;
        }
        public string BestSong()
        {
            return "Live Forever";
        }
    }
}
Here's DM.netmodule's metadata...
The metadata is very straightforward. There's the "Albums" namespace followed by the class "DM" followed by all its members.
And here's its manifest...
The manifest is similar to Album.dll's manifest. Here, in addition to referencing "mscorlib", it also has a reference to "Album.dll" since it uses the type "IAlbum" in its code. But notice there is no entry in the manifest for an assembly called "DM". This is because it is a module.
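In text form it looks roughly like this (again hand-written; values are placeholders):
.assembly extern mscorlib
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
  .ver 1:0:5000:0
}
.assembly extern Album
{
  .ver 0:0:0:0
}
.module DM.netmodule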
Here's DM's constructor's IL code...
All the numbers in between /* */ are tokens defined in the member tables of the metadata. As I said before, they act as pointers. The CLR will look at these and index into the tables to get the method defs etc... IL is a stack based language. This code is fairly straightforward but it gets more complicated.
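For reference, the constructor's disassembly comes out something like this (a hand-written sketch; the /* */ token values are made up but have the right shape - 0A... is a MemberRef, 70... a string, 04... a FieldDef):
.method public hidebysig specialname rtspecialname
        instance void .ctor() cil managed
{
  .maxstack  2
  IL_0000:  ldarg.0
  IL_0001:  call       instance void [mscorlib]System.Object::.ctor() /* 0A000001 */
  IL_0006:  ldarg.0
  IL_0007:  ldstr      "Definitely Maybe" /* 70000001 */
  IL_000c:  stfld      string Albums.DM::name /* 04000001 */
  IL_0011:  ldarg.0
  IL_0012:  ldc.i4.s   11
  IL_0014:  stfld      int32 Albums.DM::numberOfSongs /* 04000002 */
  IL_0019:  ret
}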
WTSMG.netmodule is essentially the same.
We created "Albums.dll" by just referencing the two modules (DM and WTSMG)...
csc /target:library /addmodule:DM.netmodule;WTSMG.netmodule /out:Albums.dll
Here is Albums.dll's metadata...
Notice there really isn't any metadata there. "Albums.dll" didn't define any types itself.
It is a multi file assembly that references the two .netmodules.
Here is its manifest...
Here we have it referencing "mscorlib", "Album" and two external files "DM.netmodule"
along with "WTSMG.netmodule".
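In text form it looks roughly like this (hand-written sketch; the hash bytes and the .class extern entries for the exported types are left out):
.assembly extern mscorlib
{
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 )
  .ver 1:0:5000:0
}
.assembly extern Album
{
  .ver 0:0:0:0
}
.assembly Albums
{
  .hash algorithm 0x00008004
  .ver 0:0:0:0
}
.file DM.netmodule
    // .hash = (20 byte hash of the file)
.file WTSMG.netmodule
    // .hash = (20 byte hash of the file)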
// Oasis.cs
using Albums;
namespace Bands
{
    public class Oasis
    {
        // Fields
        private IAlbum firstAlbum;
        private IAlbum secondAlbum;
        // Properties
        public bool IsBestBandEver
        {
            get
            {
                return true;
            }
        }
        public IAlbum FirstAlbum
        {
            get
            {
                return this.firstAlbum;
            }
        }
        public IAlbum SecondAlbum
        {
            get
            {
                return this.secondAlbum;
            }
        }
        // Methods
        public Oasis()
        {
            this.firstAlbum = new DM();
            this.secondAlbum = new WTSMG();
        }
    }
}
Oasis.dll Metadata...
The manifest is similar to the others.
Here's some property IL code... Pretty straightforward.
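For example, FirstAlbum's getter is just a field load, something like (hand-written sketch; token comments omitted):
.method public hidebysig specialname
        instance class [Album]Albums.IAlbum get_FirstAlbum() cil managed
{
  .maxstack  1
  IL_0000:  ldarg.0
  IL_0001:  ldfld      class [Album]Albums.IAlbum Bands.Oasis::firstAlbum
  IL_0006:  ret
}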
// App.cs
using System;
using Bands;
using Albums;
namespace Assemblies
{
    public class App
    {
        public static void Main()
        {
            Oasis o = new Oasis();
            Console.WriteLine();
            Console.WriteLine( "Is Best Band Ever? {0}", o.IsBestBandEver );
            Console.WriteLine();
            IAlbum album;
            album = o.FirstAlbum;
            Console.WriteLine( "First Album:" );
            Console.WriteLine( " Name: {0}", album.Name );
            Console.WriteLine( " Number of Songs: {0}", album.NumberOfSongs );
            Console.WriteLine( " Best Song: {0}", album.BestSong() );
            Console.WriteLine();
            album = o.SecondAlbum;
            Console.WriteLine( "Second Album:" );
            Console.WriteLine( " Name: {0}", album.Name );
            Console.WriteLine( " Number of Songs: {0}", album.NumberOfSongs );
            Console.WriteLine( " Best Song: {0}", album.BestSong() );
        }
    }
}
App metadata...
Just two methods in there... constructor and Main.
App manifest...
Same as others.
Part of IL code for App's Main method
Notice the ".entrypoint" directive in there. The CLR looks for this and will execute from this
point.
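The start of it looks something like this (hand-written sketch):
.method public hidebysig static void Main() cil managed
{
  .entrypoint
  IL_0000:  newobj     instance void [Oasis]Bands.Oasis::.ctor()
  // ... the rest is the Console.WriteLine calls ...
}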
So hope this provides a better (graphical) view of the metadata and what the manifest contains.
Thursday, December 25, 2003
Assemblies and Metadata
Assemblies are just an abstraction... as in there is no .asm file extension. Physically assemblies exist as DLLs and EXEs. DLLs and EXEs are basically Portable Executable (PE) files stored in the Common Object File Format (COFF). This is just some format for storing files in Windows (maybe in other OS's as well... I'm not sure). When you execute either a DLL or an EXE, the OS loader will open up the PE file, process the information in there (which is in COFF format) and use that to run it. So, I guess since you can't really execute DLL's directly, the OS loader will throw an exception or something. For EXE's, it will look for the program entry point (main) and run that. Now, assemblies are just PE files, with a .DLL or .EXE extension, but slightly modified. They add some additional data (headers) in these PE files that tells the OS loader it needs to be managed and executed by the CLR. So, now when the OS loads an assembly, it recognizes that there is a CLR header and transfers control to the CLR.
Assemblies contain a manifest, one or more modules and any resources like images. They can be either a single file assembly or a multi file assembly. A single file assembly contains everything in one file (gotta be a genius to figure that out?). With multi file assemblies you can have modules as separate entities and just have a reference to them in the assembly manifest. I suppose you can think of them as by value and by reference.
The main difference between a module and an assembly is that ONLY assemblies have a manifest and only assemblies can be executed by the CLR. Modules have metadata about the types they expose, but since there is no manifest, the CLR doesn't have the information to load/verify/execute them.
Modules contain metadata and IL code. What exactly is the metadata? Basically, it is binary
information that describes every type and member defined in your code. It includes the name and visibility (public/private) of the types and also what base classes/interfaces they implement. And what members (methods/fields/properties/etc...) the types define. These are stored in tables. For example, there is a Methods table which tells you all the methods that are defined. Each row in the table is given a unique number (token). The IL code, which is also part of the module, refers to this token when it has to call methods... sort of like a pointer.
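For example, a method call in the IL stream is just the call opcode followed by the 4-byte token of the target method's row; ildasm displays the token as a comment next to the call (the value here is made up):
IL_0001:  call       instance void [mscorlib]System.Object::.ctor() /* 0A000004 */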
So I guess this will be much easier to understand if you see an example. In the process I'm going to take my obsession with Oasis to new heights...
// Album.cs
namespace Albums
{
    public interface IAlbum
    {
        // Properties
        string Name
        {
            get; // Read only
        }
        int NumberOfSongs
        {
            get; // Read only
        }
        // Methods
        string BestSong();
    }
}
This is just an Interface "IAlbum" within the namespace "Albums". I'm going to compile this to a library (dll) assembly...
csc /target:library Album.cs
csc is the C# compiler. /target:library tells the compiler to compile it to a DLL.
I get "Album.dll". This is an assembly with a manifest and some IL. Nothing great here
since this is just defining an interface.
Now we define a couple of albums that implement this interface...
// DM.cs
namespace Albums
{
    public class DM : IAlbum
    {
        // Fields
        private string name;
        private int numberOfSongs;
        // Properties
        public string Name
        {
            get
            {
                return this.name;
            }
        }
        public int NumberOfSongs
        {
            get
            {
                return this.numberOfSongs;
            }
        }
        // Methods
        public DM()
        {
            this.name = "Definitely Maybe";
            this.numberOfSongs = 11;
        }
        public string BestSong()
        {
            return "Live Forever";
        }
    }
}
// WTSMG.cs
namespace Albums
{
    public class WTSMG : IAlbum
    {
        // Fields
        private string name;
        private int numberOfSongs;
        // Properties
        public string Name
        {
            get
            {
                return this.name;
            }
        }
        public int NumberOfSongs
        {
            get
            {
                return this.numberOfSongs;
            }
        }
        // Methods
        public WTSMG()
        {
            this.name = "(What's The Story) Morning Glory?";
            this.numberOfSongs = 12;
        }
        public string BestSong()
        {
            return "Wonderwall";
        }
    }
}
I'll compile both of these as modules...
csc /target:module /r:Album.dll DM.cs
csc /target:module /r:Album.dll WTSMG.cs
/target:module tells the compiler to compile to a module which is of extension .netmodule.
/r:Album.dll tells it that this module uses some types from "Album.dll" (IAlbum).
I get "DM.netmodule" and "WTSMG.netmodule". They ONLY contain metadata about the types it exposes (class DM and class WTSMG), members and every method's IL code. I can't use these two modules anywhere. The CLR can't execute them since there is NO manifest to query. To be able to use these you have to add them to an assembly...
csc /target:library /addmodule:DM.netmodule;WTSMG.netmodule /out:Albums.dll
This tells the compiler to create a dll called "Albums.dll" and to add the DM and WTSMG modules to it.
Now "Albums.dll" has a manifest and two modules. We can now use "Albums.dll" from other
assemblies...
// Oasis.cs
using Albums;
namespace Bands
{
    public class Oasis
    {
        // Fields
        private IAlbum firstAlbum;
        private IAlbum secondAlbum;
        // Properties
        public bool IsBestBandEver
        {
            get
            {
                return true;
            }
        }
        public IAlbum FirstAlbum
        {
            get
            {
                return this.firstAlbum;
            }
        }
        public IAlbum SecondAlbum
        {
            get
            {
                return this.secondAlbum;
            }
        }
        // Methods
        public Oasis()
        {
            this.firstAlbum = new DM();
            this.secondAlbum = new WTSMG();
        }
    }
}
And again compile this to a dll assembly... "Oasis.dll"
csc /target:library /r:Album.dll;Albums.dll Oasis.cs
Here we tell the compiler that we refer to types from both "Album.dll" (IAlbum) and from "Albums.dll" (DM and WTSMG).
So what we have so far is two single file assemblies - Oasis.dll and Album.dll - and a multi file assembly - Albums.dll - which references DM.netmodule and WTSMG.netmodule.
Finally we have the App exe.
// App.cs
using System;
using Bands;
using Albums;
namespace Assemblies
{
    public class App
    {
        public static void Main()
        {
            Oasis o = new Oasis();
            Console.WriteLine();
            Console.WriteLine( "Is Best Band Ever? {0}", o.IsBestBandEver );
            Console.WriteLine();
            IAlbum album;
            album = o.FirstAlbum;
            Console.WriteLine( "First Album:" );
            Console.WriteLine( " Name: {0}", album.Name );
            Console.WriteLine( " Number of Songs: {0}", album.NumberOfSongs );
            Console.WriteLine( " Best Song: {0}", album.BestSong() );
            Console.WriteLine();
            album = o.SecondAlbum;
            Console.WriteLine( "Second Album:" );
            Console.WriteLine( " Name: {0}", album.Name );
            Console.WriteLine( " Number of Songs: {0}", album.NumberOfSongs );
            Console.WriteLine( " Best Song: {0}", album.BestSong() );
        }
    }
}
Now we create an exe we can execute...
csc /target:exe /r:Album.dll;Oasis.dll App.cs
Here we tell it to create an exe using /target:exe and tell it we refer to types from "Album.dll" (IAlbum) and from "Oasis.dll" (Oasis). We get App.exe
When we run this we get...
Is Best Band Ever? True
First Album:
 Name: Definitely Maybe
 Number of Songs: 11
 Best Song: Live Forever
Second Album:
 Name: (What's The Story) Morning Glory?
 Number of Songs: 12
 Best Song: Wonderwall
So, hope this has explained a bit (better) about assemblies and metadata.
As of now the need for config files has not arisen. I'll write about that in the next blog.
Re: Basic overview of .NET
i'll try to compare .net with java from what i understood and ask some more questions. correct wherever..
CLR == JVM //both are virtual machines.
BCL == ?? //the actual api
WinForms == Swing/AWT //GUI api
WebForms == JSF //web app design api
ADO == JDBC // database connectivity
IL == bytecode //lang vs platform independence
assemblies are a part which is a bit confusing. are assemblies == class files with metadata? is an exe an assembly with main? i read somewhere that in .net i do not have to define each class in a separate file (unlike the one-public-class-per-.java-file rule). so are the modules separate classes within one assembly?
i guess metadata deserves another blog. and lots more info on security. that's another topic i have no idea of in java. could you provide additional info on the config files? maybe another blog with source? being a java guy i find it a bit hard to visualize where config files would be used.
i guess i have filled your future blogs pipeline.
an excellent web application
you are actually looking at it. blogspot is one of the best webapps i have ever used. makes the task so simple and easy.. just as if i had always been blogging. everything is well designed and even an average windows user < all pun intended > like me can use the very complicated functionality.
also, have any of you noticed the ad at the top? it scans the text periodically and provides ads based on the text in the blogs. it has shown stuff from ayn rand, optimizing code and now nanotech ... very neat idea.
as a web developer i would really feel some pride if i ever make such a site.
Wednesday, December 24, 2003
Basic overview of .NET
.NET is a platform on which you develop applications. It sits just above the OS and provides several services that make application development easier and more uniform. Probably its most important goal is to enable the seamless integration of software components. Other goals include language independence, simplified deployment and better security. Essentially, .NET provides a solid infrastructure to enable these goals.
The part of .NET developers are most concerned with is the .NET framework. The .NET framework is two things... the Common Language Runtime (CLR) and the Base Class Library (BCL). The CLR is the virtual execution engine where your application runs. It takes care of loading your types (classes/structs/enums/etc...), verifying and enforcing type safety, providing a garbage collector to take care of memory management and also a Just-In-Time compiler (JITer) to compile your intermediate code to
native machine code. Any code running under the CLR is called managed code since it is in essence managed by the CLR. The BCL is a massive library that provides basic data types like int/double/string, collections like ArrayList/Hashtable/Queue/Stack, streams like Memory/File/Network and many more base services. On top of this basic framework are technologies like WinForms for developing windows client apps and WebForms for creating web apps (ASP .NET). And of course there are classes for Data Management (ADO .NET), XML, Remoting and Web Services.
All languages that target .NET produce Intermediate Language (IL) code, NOT native machine code. This is essentially how it achieves language independence. Whatever language you use, when you compile it, the compiler generates IL. Only when you run your application will the CLR load up your code and JIT compile it to native machine code. This is why there is a performance hit when you move to managed code from native (unmanaged) code. As with everything in software development, there are trade-offs. You get the vast services of the CLR and BCL, but you pay a price in efficiency.
An application is really just made up of components. Components in .NET are called assemblies. Assemblies are the smallest thing you can deploy... just like DLL's and EXE's pre .NET. An assembly contains two things... a manifest and one or more modules. The manifest contains everything about the assembly and its modules... including its identity (which involves its version number, some sort of unique hash and possibly a public/private key pair), what external assemblies the modules refer to and also any resources it uses like images, files etc... among other things. The modules themselves contain IL code and metadata about what types (classes/structs/enums/etc...) they expose and security permissions.
One thing to notice is that metadata (data about data) is very important in this system. The CLR makes heavy use of this in all aspects. For example, there is data in the assembly about security permissions. So when the CLR loads a particular assembly, it queries this security info and only if the proper permissions are asserted will it execute the code.
Every assembly is self describing. No other information is needed for it to "fit" within the system. There is no need for the registry anymore. There is no registering and unregistering DLL's in .NET. To install an app, you just copy the assemblies to any directory - that's it! Similarly, to uninstall, just delete the directory. Again, because of the richness of the metadata, DLL hell is all but history. You can have multiple versions of the same assembly running side-by-side on your machine. As long as one aspect of the assembly identity is different - be it the version number, the public/private key or hash - that's all that is needed.
Another very appealing aspect is that you can create configuration files to instruct the CLR what assemblies to load after any application has been deployed. When you create your application, you may use version 1 of a certain assembly. Later on, version 2 of that assembly is installed on the machine. You can set up a config file for your app which tells the CLR to use version 2 instead of version 1 of the assembly. You just change the config files... no need to change any of your code or re-installing your app. This is all done dynamically at runtime.
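As a quick illustration (just a sketch of the idea; "SomeLib" and its publicKeyToken below are made-up placeholders), the config file is plain XML named after the app - e.g. App.exe.config - sitting in the same directory:
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- hypothetical assembly; name and publicKeyToken are placeholders -->
        <assemblyIdentity name="SomeLib" publicKeyToken="0123456789abcdef" />
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>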
It seems to me that they have thought about the whole framework quite thoroughly and have come up with a good platform on which to develop applications. A lot of this is very similar to Java, which has a very similar model. In fact, you could say they pretty much copied a lot of it... And they did! But that is the way it is. They have improved a lot of things over Java, just as Java improved on previous systems like Smalltalk.
Re: java rmi basics
So basically, stub and skel are proxy classes? That's how it's so easy and straightforward.
yup. you got that part. in fact there is an advanced option in rmic
$ rmic -keep <Serverclass>
this option tells rmic to "keep" the intermediate generated _stub and _skel source files. so you'll get *_stub.java and *_skel.java. though i remember the classes were too advanced !!
Also, the "Naming" stuff... this is just like a directory service isn't it? I suppose this is essentially the same as what web services do, but in a proprietary format. Web services use SOAP (which sits on top of HTTP) and XML instead of rmic and UDDI instead of "Naming". But I guess since it's proprietary it's more efficient.
hey!! i'm a good blogger. correct again. yup.. i meant naming service == registry == directory service. i am not sure how efficient/good rmiregistry is. i don't know if there are any advanced querying options for rmiregistry. also as it is a part of the j2se, i am not sure how mny requests/sec it can handle. i guess UDDI must be more "mission critical".
Re: java rmi basics
Good blog!
this will autogenerate two files of the form *_stub.class and *_skel.class. these are known as stub and skeleton classes. rmi actually uses a proprietary network protocol (jrmp). just like any network app, an rmi app has to pass data thru the app layer, transport layer etc. these two classes define how data is to be transferred between the client and server side. rmic basically checks all the remote methods defined and generates the two classes. the stub is on the client side and the skel is on the server side. hence the client app communicates with the stub, which talks over the network to the skel, which talks to the server class. in this way the app works over the network.
So basically, stub and skel are proxy classes? When you do anything on your client class, it actually communicates with stub which has all the network plumbing to connect and make the request to skel which in turn talks to the server class? That's how it's so easy and straightforward.
Also, the "Naming" stuff... this is just like a directory service isn't it? That's how the server publishes it's service and also how the client finds it.
I suppose this is essentially the same as what web services do, but in a proprietary format. Web services use SOAP (which sits on top of HTTP) and XML instead of rmi's binary protocol, and UDDI instead of "Naming". But I guess since it's proprietary it's more efficient.
java rmi basics
rmi stands for remote method invocation. rmi is not a sun/java-only name but actually a generic concept. you also have rmi in c++ by a different name.
the name suggests what rmi is supposed to do.. you remotely run methods.. which is sort of a client-server app. i guess an example will explain things better. three files are required.
1. Interface
import java.rmi.*;
public interface RmiInter extends Remote{
    public int add(int a, int b) throws RemoteException;
}
2. Server
import java.net.*;
import java.rmi.*;
import java.rmi.server.*;
public class RmiServer extends UnicastRemoteObject implements RmiInter {
    public RmiServer() throws RemoteException{
    }
    public int add(int a, int b) throws RemoteException{
        return (a+b);
    }
    public static void main(String arg[]){
        try{
            RmiServer rS = new RmiServer();
            Naming.rebind("Inter",rS);
            System.out.println("in Server");
        }catch(RemoteException re){
            re.printStackTrace();
        }catch(MalformedURLException mue){
            mue.printStackTrace();
        }
    }
}
3. Client
import java.net.*;
import java.rmi.*;
public class RmiClient{
    public static void main(String[] arg){
        try{
            int x;
            RmiInter r = (RmiInter)Naming.lookup("Inter");
            x = r.add(2,3);
            System.out.println(x);
        }catch(RemoteException re){
            re.printStackTrace();
        }catch(NotBoundException nbe){
            nbe.printStackTrace();
        }catch(MalformedURLException mue){
            mue.printStackTrace();
        }
    }
}
the interface RmiInter defines the signature/prototype of the method to be made remote and it throws a RemoteException. the server class RmiServer actually defines the method. and the client app simply uses the method.
in RMI, the client can call code just by knowing the method as defined in the interface. the actual implementation will be on a different server class over the network. thus the client runs code as if it were local though it is actually working remotely.
now some steps to make this code work. first javac all the files. then you have to
$ rmic RmiServer
this will autogenerate two files of the form *_stub.class and *_skel.class. these are known as stub and skeleton classes. rmi actually uses a proprietary network protocol (jrmp). just like any network app, an rmi app has to pass data thru the app layer, transport layer etc. these two classes define how data is to be transferred between the client and server side. rmic basically checks all the remote methods defined and generates the two classes. the stub is on the client side and the skel is on the server side. hence the client app communicates with the stub, which talks over the network to the skel, which talks to the server class. in this way the app works over the network.
another major thing in a network is a directory or naming service. as in the client should know where the server is deployed. it asks the naming service where the server is and then connects to it. so first a registry is started by
$ rmiregistry
a port (1099 by default) is opened and rmiregistry waits and listens for connections.
the server binds to the rmiregistry with the name Inter in the line
Naming.rebind("Inter",rS);
this name is used by the client to ask for the server. now the registry knows where to redirect a client when he asks for Inter.
the client asks for Inter in the line
RmiInter r = (RmiInter)Naming.lookup("Inter");
an object implementing RmiInter (the stub, really) is returned by the lookup. this instance is used to call the remote methods..
the actual continued steps are
$ java RmiServer
$ java RmiClient
all will be in different shells. now the client simply gets its answer back over the network. i guess remote apps don't come simpler than this.
you might ask .. where was the remote part. in a real world app my client lookup can be like this..
RmiInter rmiInter = (RmiInter)java.rmi.Naming.lookup("rmi://152.51.64.12/Inter");
notice the arg in the lookup method. rmi:// shows that the rmi app layer protocol is used. the ip address of the server and the bind name are also provided. also a pseudo name can be provided for the server app, like Inter in this case, in the naming registry. the client actually only gets the client side code, the interface and the stub class. all other code remains remote.
Friday, December 19, 2003
Re: nanotechnology
I really feel that once we do find a way to build (assemble?) these nano particles they will be fitted with something similar to a cpu (nano cpus?) so they can be programmed against. Assembling nanoparticles, while in itself a major accomplishment, won't reach its full potential until they have intelligence.
From what I know, nano tech is basically in its infancy. Nothing substantial has been done yet. Pretty much everything is speculation and theorizing. Apparently, this is the century where great strides will be made in this field. We'll have to wait and see. Personally, I don't see anything very substantial happening in our lifetimes. Maybe the next gen/century. What d'you think?
Here's a site that gives some info on it... http://science.howstuffworks.com/nanotechnology.htm
Check out the page titled "A New Industrial Revolution". It seems to me that to be able to accomplish any of what that postulates, those particles will have to have intelligence.
Also, what do you know of quantum computing? Does the quantum refer to size or "parallelness"? Could this be used in nano tech?
Re: nanotechnology
think we are still a long way from a "nanotech industry" as such producing .. like mohnish said ... nanosubmarines which can move around in your body.
Using nanotechnology to create AI or for that matter any machine / processor is very much possible. But the nanoparticles having intelligence is something which doesn't convince me. Nanoparticles are typically atoms (physical entities) and not machines and as such cannot have intelligence.
I think nanotechnology is an approach to manufacturing by manipulation of the basic building blocks - atoms. I am not sure it focuses on manufacturing microscopic machines, though that could well be considered in the future.
Bottomline - The only relation between nanotechnology and AI, which I can think of, is using nanotechnology to build AI, not imparting AI to the nanoparticles.
BTW - With all the shit that we're discussing here, what IS the current level of advancement in nanotechnology?! Anybody know?
Thursday, December 18, 2003
re nanotechnology
first up i think we are all a little out of our league discussing nanotech ... so everyone is just speculating!! ... well anywayz here are my two cents :)
nanotechnology is to my knowledge that field of science where objects inorganic or organic are made by direct manipulation of atoms ... that is they actually place atoms one by one to achieve some product .... we should not confuse it with the processor industry ... albeit they are working in the nm range, that is not nanotech
nanotech has two approaches to it ... the top down approach or the bottom up approach ... top down is the current one where machining and etching techniques are used ... but on a nanoscale ... what scientists want to do is the bottom up approach where one can actually build the nanoscale machines atom by atom
now AI and nanotech ... i think they are very much possible ... people are talking about nanoscale processors which consist of very small rods clicking against each other to give you circuit connections (same as that of transistors) ... these things are estimated to be capable of 10^25 instructions per second .... so why can't a nanoscale machine have intelligence ... i say it should and actually "must" have it ... if we can model neural networks in computers .. then we can certainly do the same thing on a nanoscale.
well that's it from me ... as far as i'm concerned ... the future is bright .. but i think we are still a long way from a "nanotech industry" as such producing .. like mohnish said ... nanosubmarines which can move around in your body.
dinesh.
Linux distributions
Linux offers a lot of choice, and the same is true when it comes to choosing your distribution - or distro, in linux lingo. different distros provide different features.. this blog is based on an article in a british computer magazine - pc world.
1. fedora (formerly redhat desktop)
redhat has been the most popular distro, with lots of software. its main selling point is its ease of use. most of the major tasks are very easy to configure with custom gui's. another winner is the easy os installation process, and rpm's, which are installation packages. but this distro is a little restrictive and "simple" for real gurus. basically redhat now only provides support and sells the enterprise series of os'es. fedora is open/free to install and runs totally on community support rather than the previous corporate backing. a good solution for a newbie.
2. lindows
a relatively new entrant, lindows is a little different. it has no free download; you have to buy the os. they have an automatic server update feature by which you can get the latest app versions, which is cool as there are so many new releases all the time. they bank on the name.. and there's lots of legal stuff going on.
3. debian
this is more of a geek/guru distro. nothing comes easy in deb. installation is text-based, and i guess the kernel module selection part isn't what i am fully ready for right now. they have a different package management system, APT, which figures out dependencies much better than redhat's rpm (see the quick example after this list). this distro is more customizable, and slightly more difficult, as they assume you're not a dumb redhat guy.
4. gentoo
now if debian scared you, hold tight!! these guys believe in compiling everything from source, and hence great optimization and better performance. i noticed that they have installations for the mac as well. they have some package management system called portage. installation is also text-based. this, according to the mag, is - strictly for linux experts!!
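to give a quick feel for the apt vs rpm difference on the command line (the package name is just a made-up example, so treat this as a sketch):
$ rpm -ivh foo-1.0.rpm # redhat: installs just this package, fails if a dependency is missing
$ apt-get install foo # debian: fetches foo AND whatever it depends on, automatically
with rpm you end up hunting down dependencies yourself; apt reads its package database and pulls them in for you.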
other distros are xandros, lycoris, mandrake and suse, which are more like fedora/redhat. caldera was another company, which later became the infamous SCO. i've also heard of slackware but have no info...
some distros have liveCD versions, which are basically a runnable os on a cd - no installation. gentoo and knoppix are some. it's really amazing the way knoppix runs and figures out the system hardware at runtime.
so which distro are you going to graduate to??
Re: nanotechnology
actually, like hrishi said, it's hard to imagine nanoparticles and ai together. even i would associate nanoparticles with some properties like charge or something, on the basis of which they could be transported or whatever. ai would require some processing power, which is hard to imagine in a nanoparticle. i think of ai as a set of if/then conditional statements. maybe dinesh could provide an overview of general ai. then again, right now you are years ahead with that book, so you tell us!!
Wednesday, December 17, 2003
Re: nanotechnology
Well, to me it seems they would be related. When you talk about nano particles, it is going to involve a WHOLE LOT of particles. This will demand that they be able to "talk" and communicate with each other to coordinate themselves.
If, for example, you inject nano particles into your blood stream (for whatever purpose), they will have to "flow" together. This will involve a certain amount of intelligence for them to be able to overcome whatever obstacles they encounter. I suppose you could communicate with them through wireless signals to control them, but they will themselves have to have base knowledge and AI to reach whatever goal they have been programmed to reach.
nanotechnology
I don't know much about nanotechnology but I can't see how it ties in with AI. As far as my knowledge goes, nanotechnology is basically the technology for manipulating matter at the atomic level. An example of the effective use of nanotechnology would be creating diamond out of coal by rearranging the atoms. If nanotechnology reaches great levels, it could completely revolutionize the manufacturing industry. I'm not aware of how advanced the technology is today.
Tuesday, December 16, 2003
Nano technology
I just finished reading this book called "Prey" by Michael Crichton. It obviously is completely fictional, but the underlying theme of the book is nano technology. What do you guys think about this technology and its potential?
I dunno much about it, but it seems to me that nano technology will be quite interrelated with Artificial Intelligence. Which leads me to another question - how does one actually program AI? When you think about programming... it is about giving PRECISE instructions to the dumb machine about what you want it to do. I don't understand how AI works. How does the machine learn and evolve?
BTW, the book is great - highly recommended. I particularly like Crichton books cause he does SO much research about the subject and incorporates it into his books. For ex. in this book he talks briefly about recursion! Also mentions unix's root. I always get cheap thrills when I come across these in novels ;-)
Re: nmap etc.
I couldn't actually see what the site does as I am behind a firewall. But I suppose it's just a regular port scanner similar to nmap.
About sudo. There's a file, /etc/sudoers, wherein are listed the super users for the system. One of them, obviously, is root. If you want to convert a normal user to a super user, you add an entry corresponding to his login to the file. Now whenever that user runs a command with sudo, the system asks him for his own passwd. The command is then run with root privileges. Once you 'sudo', the system remembers the authentication for a few minutes, during which time you needn't enter the passwd for further sudo's.
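Here's roughly what the file looks like (the username is just an example, and note that you're supposed to edit the file with the visudo command rather than directly):
# /etc/sudoers
# user    host = (run-as user)    commands
root      ALL=(ALL)               ALL
rahul     ALL=(ALL)               ALL    # rahul can now run any command via sudo
$ sudo nmap -vO 10.7.201.38    # prompts rahul for HIS passwd, then runs as root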
This brings me to another useful (although potentially dangerous) feature - the suid bit. The set user id bit or the suid bit is a permission given to an executable file by the owner of the file, which allows the file to be executed as if the owner were executing it (with the privileges of the owner). So, a file with the suid bit on, owned by root, would execute with root privileges when it's run by a normal user. ls -l would show an 's' in place of the owner's execute bit in the permissions. While this feature is obviously useful in circumstances where root privileges are required, a file which is suid root has to be robust. Any vulnerabilities in the program could result in someone obtaining a root shell by a buffer overflow or other means. So suid root programs are strongly discouraged.
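A quick illustration - passwd is the classic suid root program, since it has to modify files only root can write to (the file size and date below are made up, and the exact output will differ from system to system):
$ ls -l /usr/bin/passwd
-r-s--x--x    1 root    root    16336 Feb 14  2003 /usr/bin/passwd
$ chmod u+s myprog    # how an owner sets the suid bit on his own program
$ chmod u-s myprog    # and how to clear it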
More on perms later...
Saturday, December 13, 2003
Re: nmap etc.
I found this site which scans the top 1000 (well known) ports of your sys. This is similar to what nmap does, right? But I guess this site only scans your own sys?
https://grc.com/x/ne.dll?bh0bkyd2
Click on "Proceed" and then "All Service Ports"
Friday, December 12, 2003
Re: nmap etc.
great info on nmap.
did you guys notice, windows systems say (Worthy challenge)
and a linux sys with more ports says (Good luck!).
but i've got more questions.
how does sudo work? need some info on that, like configuration.
Re: good book
I read "The C++ object Model" by Stan Lippman. What an excellent book! Cheers to Dinesh for the recommendation. It will tell you pretty much everything you mentioned in your post Hrishi.
Other books that I think will be quite worthwhile are the "Effective" series by Scott Meyers. From the contents I've seen on amazon.com, they seem to give practical advice on better using C++/STL etc...
nmap etc.
Here's some more stuff about nmap - the utility Rahul had mentioned some time ago. nmap is a very powerful port scanner which not only tells you what services the remote computer is running, but also whether the ports are firewalled. It also tells you the OS that computer is using.
It's always a good idea to run nmap with root privileges. Check out the following 2 sample outputs.
[hrishikesh@vikings hrishikesh]$ sudo nmap -vO 10.7.201.38
Password:
Starting nmap V. 3.00 ( www.insecure.org/nmap/ )
No tcp,udp, or ICMP scantype specified, assuming SYN Stealth scan. Use -sP if you really don't
want to portscan (and just want to see what hosts are up).
Host (10.7.201.38) appears to be up ... good.
Initiating SYN Stealth Scan against (10.7.201.38)
Adding open port 445/tcp
Adding open port 139/tcp
Adding open port 135/tcp
Adding open port 1025/tcp
The SYN Stealth Scan took 0 seconds to scan 1601 ports.
For OSScan assuming that port 135 is open and port 1 is closed and neither are firewalled
Interesting ports on (10.7.201.38):
(The 1597 ports scanned but not shown below are in state: closed)
Port State Service
135/tcp open loc-srv
139/tcp open netbios-ssn
445/tcp open microsoft-ds
1025/tcp open NFS-or-IIS
Remote operating system guess: Windows Millennium Edition (Me), Win 2000, or WinXP
TCP Sequence Prediction: Class=random positive increments
Difficulty=9567 (Worthy challenge)
IPID Sequence Generation: Incremental
Nmap run completed -- 1 IP address (1 host up) scanned in 1 second
This is the output I got when I ran nmap on our dept mail server -
Interesting ports on shakti.aero.iitb.ac.in (10.101.1.2):
(The 1578 ports scanned but not shown below are in state: closed)
Port State Service
21/tcp open ftp
22/tcp open ssh
23/tcp open telnet
25/tcp open smtp
53/tcp open domain
79/tcp open finger
80/tcp open http
110/tcp open pop-3
111/tcp open sunrpc
135/tcp filtered loc-srv
136/tcp filtered profile
137/tcp filtered netbios-ns
138/tcp filtered netbios-dgm
139/tcp filtered netbios-ssn
143/tcp open imap2
443/tcp open https
445/tcp filtered microsoft-ds
593/tcp filtered http-rpc-epmap
600/tcp open ipcserver
993/tcp open imaps
995/tcp open pop3s
2401/tcp open cvspserver
3306/tcp open mysql
Remote OS guesses: Linux Kernel 2.4.0 - 2.5.20, Linux 2.4.19-pre4 on Alpha, Linux Kernel 2.4.3
SMP (RedHat)
TCP Sequence Prediction: Class=random positive increments
Difficulty=3081814 (Good luck!)
IPID Sequence Generation: All zeros
Nmap run completed -- 1 IP address (1 host up) scanned in 21 seconds
nmap is a popular tool in many network security scanners. The first step to attacking a remote computer is a port scan. Determine the open ports, check out the services running and see if there are any known vulnerabilities. A security scanner basically automates this process. It checks for these things in its database and gives you a detailed report. The later the version, the more vulnerabilities it can detect. This is of course the script kiddie approach. The real fun is doing it yourself and writing code to exploit weaknesses. I would have LOVED to do all this (of course without trashing the servers - just to KNOW) but in IIT, if you are caught, you are in REAL DEEP SHIT. I had a great desire to learn more about these things but the fear kept me from going deeper into this and gaining knowledge. So, no motivation, no fundaes! :-(
But no regrets, doing safe programming can be equally fun! Also interesting is linux system administration. In my opinion, it's a lot more systematic and transparent than in windoze. And in the latest versions of linux distros, you can do practically everything. The only reason I ever need to use windoze is when I want to create ppts (and of course, play games). The command line in linux (basically the shell) is extremely powerful and versatile. Once you are addicted to using the command line, all the mouse movement and thousand clicks really suck!
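Just to illustrate what I mean, a few one-liners that would each take a dozen clicks in a GUI (the filenames here are made up):
$ ps aux | grep httpd    # which web server processes are running?
$ find . -name '*.cpp' | xargs grep -l vptr    # which of my source files mention vptr?
$ du -s * | sort -n | tail -5    # the 5 biggest space hogs in this directory
Small commands, each doing one thing, glued together with pipes - that's the whole philosophy.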
More about linux later!
good book
Guys,
Just a one sentence answer needed!
For a LONG time, I have been looking for a C++ book which assumes prior knowledge of the language / OOP and then, in a very concise fashion, goes into depth and deals with advanced concepts and, more importantly, what goes on INSIDE... if you know what I mean (something like the Inside the C++ Object Model book). Most books give a LOT of beginner shit and it's just too painful to cull it and look for the useful sections. Anybody in the same boat?!
Tuesday, December 09, 2003
WinFX newsgroup post 2
More responses, this time from a different message board...
Response 1
Well as the article seems to suggest, Win32 will still be there for backwards compatibility, it is not going anywhere anytime soon. So, as a result, Java should work exactly the same way it does now (although I assume it will be updated) because the current functionality isn't going away. Same with C++.
Response 2
Yup - all the old Win32 APIs will carry on working.
But C++ developers have a choice. They can write unmanaged applications just like they always have, using the Win32 APIs. Or they can write managed applications using the managed extensions to C++, in which case they can use all the new APIs. Or, unlike any other language, they can use both! You could write a C++ application that uses Avalon and ::ReadFileEx. (Although I'd rather use FileStream...)
Of course the downside of unmanaged and mixed-mode C++ applications is that they will require the permissions to execute unmanaged code. With the increased tightening of security on Windows this is something you'll want to avoid in future. Especially with Longhorn's SEE (Secure Execution Environment) and also with the ClickOnce deployment available with both Whidbey and Longhorn.
So although C++ developers will always have the option to use either, there will be benefits to confining themselves to pure managed verifiable code. (I.e. writing code that only does what safe C# could also do.)
WinFX newsgroup post
I posted Rahul's question on a Microsoft Longhorn newsgroup and got a response. First, I hope I framed the question right, and second, the answer is NOT from a Microsoft employee. I wasn't impressed with the response, but judge for yourself.
> Hi,
>
> I'm just curious about something... I could be totally wrong so please
> correct me if I am.
>
> How would having Java in Longhorn work? If Longhorn API's is managed,
> then Java's API's which are also managed, will have to hook into these,
> wouldn't they? Wouldn't this sort of be a situation of have two managed
> layers? A lot of GUI functionality is dependant on the underlying platform,
> so would they have to make calls through Longhorn's API's instead of
> directly accessing them?
The Java UI would either have to use standard windows UI's(which as I
understand it hook into Avalon at some lower level, maybe a GDI thunking
layer?) as they do now, meaning that current java UI toolkits should still
work properly, or they could provide new versions of the toolkits that wrap
the managed API. As a note, if you are using platform specific UI's, you
always have to call through that platforms API, its just a matter of which
API you use.
It is also not clear to me if or how AWT, for example, would work with
Avalon. Its possible that the api's provided by the class set are not
sufficently expressive for the target system, or are to far at odds to
easily come to terms. Such things will remain to be scene as Longhorn
progresses and the various communities start exploring its capabilities.
> Also what about (native) C++? For ex. file streams... Would these need to go
> through the managed API's? If yes, wouldn't that cause some performance
> problems?
Not of any consequence. Any call overhead into the managed code should be
virtually undetectable when you consider the fact that file streams are disk
bound, not cpu bound. The slowest managed code is generally many times
faster than the disk.
> Thanks.
What is WinFX?
This is a short - around 10 mins - video from one of the WinFX devs, explaining what it is.
http://msdn.microsoft.com/msdntv/episode.aspx?xml=episodes/en/20031107WINFXBA/manifest.xml
Monday, December 08, 2003
Re: the next JVM (Java Virtual Machine)
if WinFX supports Win32 as a subsystem .. then why would the JVM have to make calls through the .NET VM ... or are you all worried that eventually, some years from now, Win32 will be removed from MS's OS?
Until now, every new piece of functionality Microsoft introduced was added to Win32. .NET - what we have today - is a layer on top of that. It covers most, but not all, Win32 functionality. So, new managed languages like C# program against .NET, while older languages like C++ program against the C-based Win32 APIs. The point of .NET (and Java) is to provide an environment where you don't have to worry about a lot of things like memory management, buffer overflows etc... and it also provides built-in security and easier deployment models. They also provide a good OO framework as compared to the "flat" C APIs.
With WinFX and Longhorn, ALL the new functionality they introduce, like WinFS (storage) and Avalon (GUI), is going to be managed. So what does a language like C++ have to do? There is no unmanaged equivalent. So, it has to go to the managed world (which, btw, is possible - they have managed C++). But as you can probably guess, this ruins performance.
There is always some tradeoff. With native C++, you have all the control you want, but you don't get the productivity and safety of a managed environment like Java or .NET. On the other hand, Java and .NET suffer in performance because of the additional layer.
Re: the next JVM (Java Virtual Machine)
if WinFX supports Win32 as a subsystem .. then why would the JVM have to make calls through the .NET VM ... or are you all worried that eventually, some years from now, Win32 will be removed from MS's OS?
dinesh.
Sunday, December 07, 2003
Re: the next JVM (Java Virtual Machine)
But memory management etc... is java code itself. I don't think it relies on the OS for this. Also, realize that just the main APIs that programmers are going to use will be managed with WinFX - not everything. Low-level code like memory management/drivers/kernel stuff will still be in whatever they are using now - C or C++ - maybe even asm.
i meant stuff like getting a memory location from the os etc. during initialization. internal mapping of the memory would be done by the vm itself. though there are many other areas where i see the vm needing the os: writing a simple .class file to the filesystem, reading files over the network, running in a browser as an applet, gui event trapping... i agree that some of this, maybe all of it, might be possible through low-level calls, but i am not sure. it will be very difficult though.
I am sure people will be talking about this and pressure Microsoft into doing something. I'll try posting on some newsgroup and seeing what they have to say.
that should be the best thing to do rather than my speculating. if ms goes ahead with what i make of it all, it will definitely be one of the greatest anti-trust cases of all time!! but ms also has some aces.. language independence, and they have also submitted some sort of standards for .net to the european ECMA. so they are sort of open.
Re: Type Casting
if a new object A is created it would not have the members of B ... these are different objects you are talking of .... only when you write
Right... I was talking about the pointer types - not the actual object. A pointer of type A or B pointing to a B object.
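To make that concrete, here's a minimal sketch - one object, two "views" (this is just my illustration of the point):
// C++
B b;           // ONE object of type B
A* pa = &b;    // an A-typed pointer - sees only the A part of b
B* pb = &b;    // a B-typed pointer - sees all of b
// pa and pb hold the same address; only the type of the "view" differs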
Re: the next JVM (Java Virtual Machine)
the java vm, just like any app, relies heavily on the underlying os. the vm for example has to perform actions like memory management.
But memory management etc... is java code itself. I don't think it relies on the OS for this. Also, realize that just the main APIs that programmers are going to use will be managed with WinFX - not everything. Low-level code like memory management/drivers/kernel stuff will still be in whatever they are using now - C or C++ - maybe even asm.
but i guess this situation would be no different for other languages trying to work outside the .net compiler. if your app wants to use new longhorn functionality, it has to be .net. sounds really scary.
Yeah, I dunno what is going to happen. I am sure people will be talking about this and pressure Microsoft into doing something. What I am surprised about is that I have not read about this anywhere. I just started thinking about it because you brought it up. I'll try posting on some newsgroup and seeing what they have to say.
are you trying to say you agree with a vm in a vm??
No, not at all. I was just joking. I wouldn't see any point in that.
Re: Type Casting
Yeah you're right... Pointers of type A as well as pointers of type B will point to memory location 0. It's just that with type A pointers, you will only have access to sizeof( A ) which is 0 to 3. With type B pointers, you have access to everything, 0 to 7, which is sizeof( B ).
if a new object A is created it would not have the members of B ... these are different objects you are talking of .... only when you write
A* p = new B();
only here will a pointer of A have all the data members of A & B .... i'm sure you all know this but i'm just clarifying anyway :)
dinesh.
Re: Type Casting
basically what i was more interested in is how the parent knows who the derived is, and vice versa.
The parent NEVER knows who its children/grandchildren/... are. Only the derived classes are aware of their parent.
Considering the object on the heap...
// Heap
------------- // address
| int m_dataA // 0
| int m_dataB // 3
------------- // 7
1.
either, as you showed, both A and B data members are stored in a linear fashion, i.e. in contiguous memory locations. so when i need the entire B, the extra B data members in the next memory locations have to be considered too, and that will be B. the pointer always points to A and then, based on the type cast, simply considers the next memory locations. i hope i am clearer this time.
Yeah you're right... Pointers of type A as well as pointers of type B will point to memory location 0. It's just that with type A pointers, you will only have access to sizeof( A ) which is 0 to 3. With type B pointers, you have access to everything, 0 to 7, which is sizeof( B ).
2.
for larger classes with a large amount of inheritance, this linear memory location storage would be inefficient. just like normal os memory management, we could break the memory into partitions and allocate memory for objects in broken, discontinuous locations. then a table would be required to map the various memory locations to the objects. i am just guessing all of this. it sounds inefficient, but for larger classes? i know this is a bit into actual vm design / memory management, but in case you guys have any idea.
Actually, ALL objects have their data stored linearly. This is quite efficient. How would breaking it up into partitions be more efficient? I don't see it. Even if the class has 100's of data members... it really is NOT that much considering there are arrays with 1000's of elements which are stored in memory contiguously.
If the data is stored linearly, when you have a pointer to that object, accessing the data members is just a constant time operation. If the object is broken up you will have indirections. Also, to support inheritance, it is almost essential to have it stored linearly. This is how you can have base type pointers to derived type objects.
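You can actually see the linear layout for yourself with a little test program (the sizes and addresses are implementation dependent, so your numbers may vary - this is just a sketch):
// C++
#include <iostream>

class A { public: int m_dataA; };
class B : public A { public: int m_dataB; };

int main()
{
    std::cout << sizeof( A ) << " " << sizeof( B ) << std::endl; // typically "4 8"
    B b;
    // the two members sit right next to each other in memory
    std::cout << &b.m_dataA << " " << &b.m_dataB << std::endl;
    return 0;
}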
is the v-table in c++ associated with each of the data-member locations itself pointing to the parent, derived locations etc ? i guess this table really gets complicated with multiple inheritance.
Each class (not object) in a hierarchy has its own v-table. That v-table just contains the addresses of the virtual functions. Consider this example.
// C++
class A
{
private:
int m_dataA;
public:
A() : m_dataA( 0 ) {};
virtual ~A() {};
virtual void fun() { std::cout << "funA"; };
};
class B : public A
{
private:
int m_dataB;
public:
B() : m_dataB( 0 ) {};
virtual ~B() {};
/*virtual*/ void fun() { std::cout << "funB"; };
};
So both class A and class B will get a two index v-table - one index for the destructor and another for fun(). Also, each object of A and B will get a vptr that points to this table.
// A object on heap
|------------
| int m_dataA
| vptr ---------------> v-table
|------------           ------------------
                        | address of ~A()
                        | address of fun()
                        ------------------
// B object on heap
|------------
| int m_dataA
| int m_dataB
| vptr ---------------> v-table
|------------           ------------------
                        | address of ~B()
                        | address of fun()
                        ------------------
So now, if you have an A pointer pointing to a B object and you call fun();
A* p = new B;
p->fun();
you want it to call B's fun(), not A's. This is where the indirection occurs. The compiler will just make the call through the vptr. I guess maybe something like... *(p->vptr[1])();. This will invoke B's fun().
Same way, if you had another class C that derives from A, it would also get a v-table with 2 indices. And again, if you had an A pointer to a C object and called fun(), it would go through the v-table and the correct fun() would get called. This is the power of polymorphism. You can keep adding new classes and don't have to modify your code. It just works!
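To see the dispatch in action, here's the example above fleshed out into a complete program (my sketch - it should compile and run as-is):
// C++
#include <iostream>

class A
{
private:
    int m_dataA;
public:
    A() : m_dataA( 0 ) {}
    virtual ~A() {}                               // virtual, so deleting through an A* is safe
    virtual void fun() { std::cout << "funA\n"; }
};

class B : public A
{
private:
    int m_dataB;
public:
    B() : m_dataB( 0 ) {}
    virtual void fun() { std::cout << "funB\n"; } // overrides A::fun via the v-table
};

int main()
{
    A* p = new B;
    p->fun();    // prints "funB" - resolved at runtime through B's v-table
    delete p;    // runs ~B() then ~A(), thanks to the virtual destructor
    return 0;
}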
Re: MyIE2
I tried this one out but Opera, with its mouse gestures, still rulz... MyIE2 doesn't have all the mouse gestures of Opera - esp. the very important open link in new window, open in new window in background etc. I have become so used to (read: dependent on) them that I can't imagine using a browser without them.
Saturday, December 06, 2003
Re: the next JVM (Java Virtual Machine)
Microsoft is trying to phase out Win32 and move to managed code. So anything new that is added is not just a layer on top of the real thing.
that is the main worry for any application, be it the jvm, a standard c++ app, or any other language which you would want to compile on a different platform. will these apis have to make calls to the .net api?
the java vm, just like any app, relies heavily on the underlying os. the vm for example has to perform actions like memory management.
i am not sure about the other swing apis etc., but if an app has to catch mouse actions etc., it has to invoke the os mouse methods, which are all C right now.
consider this article from eclipse swt. swt is an alternative ui api for java. but they break the java cross-platform feature to offer a much better os-based ui. the article shows how dependent the java api/code is on the os. basically everything.
I don't think anything will happen to Java. It will still be very much alive and kicking.
and so if the jvm etc. has to make calls to a .net api, it's dead. basically the performance of the jvm would be killed (obviously). now i have only considered the jvm, but i guess this situation would be no different for other languages trying to work outside the .net compiler. if your app wants to use new longhorn functionality, it has to be .net. sounds really scary.
I just dunno how it will work in Longhorn - we will have a managed managed world.
are you trying to say you agree with a vm in a vm??
Re: Type Casting
thanks for the great explanation on casting. but i think i asked my question a bit wrong. i was more interested in the actual placement of objects on the stack/heap so that they can be accessed by the pointer.
basically what i was more interested in is how the parent knows who the derived is, and vice versa.
consider the same example
class A
{
public:
int m_dataA;
void functionA() { std::cout << "functionA"; }
};
class B : public A
{
public:
int m_dataB;
void functionB() { std::cout << "functionB"; }
};
// Heap
-----------------
| int m_dataA
| int m_dataB
-----------------
i could think of two possibilities.
1.
either, as you showed, both A and B data members are stored in a linear fashion, i.e. in contiguous memory locations. so when i need the entire B, the extra B data members in the next memory locations have to be considered too, and that will be B. the pointer always points to A and then, based on the type cast, simply considers the next memory locations. i hope i am clearer this time.
2.
for larger classes with a large amount of inheritance, this linear memory location storage would be inefficient. just like normal os memory management, we could break the memory into partitions and allocate memory for objects in broken, discontinuous locations. then a table would be required to map the various memory locations to the objects. i am just guessing all of this. it sounds inefficient, but for larger classes? i know this is a bit into actual vm design / memory management, but in case you guys have any idea.
is the v-table in c++ associated with each of the data-member locations itself pointing to the parent, derived locations etc ? i guess this table really gets complicated with multiple inheritance.
i am purely guessing all of this. consider a v-table-like implementation in java/c#.. where i store the data members of A along with the pointer address for B and parent classes if any, null here. similarly for B i store the address for A and null for derived classes. in such a case i can have a mix of the two above possibilities, storing the data members of a class together and yet breaking up large inherited objects.
so do any of you know what really happens? or just more info on v-tables would be great.
MyIE2
if any of you have always been searching for an IE that has tabs and other stuff .. then you are gonna like this one .... i had basically been driven to Opera because i just love the Tabs feature ... check out MyIE2 .. this thing is just awesome ... it sits on top of IE without much overhead and gives me all the features of IE and adds a hell of a lot of features itself :)... Opera is good .. but it just warps some sites and i was not always happy with it .. i used to love IE but just for tabs i switched to opera .. now i got what i need :):) ...
dinesh.
Re: the next JVM (Java Virtual Machine)
yet another day having to study instrumentation topics drove me to the highest levels of boredom. so the one way to refresh myself: some net.
While we're on the subject of levels of boredom... I got to read parts of "genesis" and other religious crap for my english exam. I can't contain myself.
this is the first time i ventured into some .NET territory
Yipppeee!
so this very amazing idea popped into my head. will the next jvm make calls to a .net api??? the jvm running inside the .net vm
This is a very interesting question. I never thought of this.
If you look at Java and .NET, they are both platforms. But they are also quite different in their scope, I think. .NET tries to cover the ENTIRE Win32 API. .NET is basically an almost complete layer on top of Win32. Java, on the other hand, doesn't have that big a scope. It provides a lot of functionality, but does not have as much coverage. If you look at the number of classes/namespaces provided and compare the two, you'll notice that .NET has a LOT more. This reaches ridiculous proportions in Longhorn with WinFX (check out Reference -> Class Library Reference -> Namespaces).
Another thing to notice is that .NET is VERY reliant on Win32. Most classes are just wrappers for the underlying Win32 implementation. I dunno how much Java depends on the underlying platform. I always thought they were pretty independent... even the GUI stuff like swing is total Java and NOT just calls to whatever the underlying platform is - Is this right?
With WinFX, Microsoft is trying to phase out Win32 and move to managed code. So anything new that is added is not just a layer on top of the real thing. WinFX IS the real thing. This provides for a lot of cool possibilities. For example, the shell in Longhorn is managed and is exposed with a set of APIs. So you can program against that quite easily. Also, when using the command line, you deal with actual objects. So you can write C# code instead of scripting.
This is a transition just like the transition from Win16 to Win32, C to C++ etc... When we think of C++ right now, we think of how efficient it is. There was a time, when C was king, when people thought C++ sucked performance-wise. A similar situation is going on now with Java/.NET. Eventually everything will be managed and that will be the measure of performance. Obviously, the unmanaged world will still be alive, just like C is still alive today. But the "mainstream" will most probably be managed because of Microsoft's influence.
Personally, I don't think anything will happen to Java. It will still be very much alive and kicking. I just dunno how it will work in Longhorn - we will have a managed managed world ;-)
What do you think?
Re: Type Casting
if ppl are wondering - if the member function is not part of the object on the heap, as mohnish said, then how does it know where to look for the data members? - the answer is the "this" pointer .... which gives information about the instance from which the call was generated.
dinesh.
Re: Type Casting
I am not sure how each language/platform implements type casting. I assume that it is similar in all of them. I dunno why they would have differing implementations, since it is essentially the same concept.
This is what I think goes on. I am not sure about this at all. Maybe Dinesh/Hrishi can add/correct stuff about this.
First, just to be clear, there is a difference between a pointer/reference and an actual object...
A pointer/reference is always of fixed size (4 bytes on 32 bit machines) and has a type associated with it. Once that type is associated with a pointer/reference it can NEVER be changed. These pointers/references are located on the stack and point to instances of classes (objects). They allow you to access and play with actual objects.
Class instances or objects can be located both on the stack and the heap in C++/C#, and only on the heap in Java. They vary in size based on the class "blueprint". And an object's size grows if it is part of a class hierarchy (inheritance). Unless you have a pointer/reference pointing to them, you CAN'T do anything to them.
I guess you can think of the pointer/reference as providing a "view" of an object.
// C++
class A
{
public:
int m_dataA;
void functionA() { std::cout << "functionA"; }
};
It just contains one data member. So when you create an object of this class, it is just this one member that is loaded on the stack or heap.
// Heap
-----------------
| int m_dataA
-----------------
Now to do something to it, you need a pointer/reference.
A* p = new A;
Through p, you have a "view" of this object. That view is limited to the size of A. As in, p has access to everything from 0 to sizeof(A), if you consider 0 to be the address where this object is created. In this case p can view everything. Generally, the size of an object is just the sum of the sizes of its data members. In this case, sizeof(A) would most probably be equal to sizeof(int). Functions are NOT part of the object.
// Stack // Heap
p------> ----------------- address: 0
| int m_dataA
----------------- address: 3
Now consider if the class is part of a hierarchy.
// C++
class B : public A
{
public:
int m_dataB;
void functionB() { std::cout << "functionB"; }
};
Now if we have an object of class B, the heap will look like...
// Heap
-----------------
| int m_dataA
| int m_dataB
-----------------
And if we have a pointer/reference of type B, we can access EVERYTHING in the object, because sizeof(B) includes everything.
B* p = new B;
// Stack // Heap
p------> ----------------- address: 0
| int m_dataA
| int m_dataB
----------------- addresss: 7
But what happens if we have an A pointer/reference to this object? Going back to the view analogy... it will only allow you to see things A can see, which is sizeof(A).
A* p = new B;
// Stack // Heap
p------> ----------------- address: 0
| int m_dataA
| int m_dataB
----------------- address: 7
Here, m_dataB exists and is part of the object on the heap, but it just cannot be seen using an A pointer/reference.
So finally, answering your question, casting just provides a view to an object. In this case, to see ALL of B, you will need to cast it...
B* p2 = (B*) p;
This will grant access to everything in B.
In your example, the collection accepts Objects, which is the root class of ALL objects in java/c#. So when you pass in Strings, String objects are created on the heap, but they are viewed through Object references. When you get back these Strings from the collection using get(index), you have to cast them to Strings to be able to view everything.
Regarding the v-tables. Again, I'm not sure how they are implemented in java and c#. I would assume that they do it the same way as C++.
Friday, December 05, 2003
the next JVM (Java Virtual Machine)
yet another day of having to study instrumentation topics drove me to the highest levels of boredom. so, the one way to refresh myself: some net.
this is the first time i ventured into some .NET territory, and read this article at ondotnet.com, an o'reilly site. it explains winfx, the next win api, how it's going to change things, and more importantly how .NET fits in. read the article first before continuing. at least the last line of the conclusion should answer a question dinesh had asked previously: do you need to learn .net?
so this very amazing idea popped into my head: will the next jvm make calls to a .net api??? the jvm running inside the .net vm. am i just going crazy with instrumentation? you tell me. but if the next api is only managed, any app will need to make calls differently from the then-legacy win32 api!! for that matter, so will the next C++ compiler. microsoft can argue that it is not cutting out the other languages because .net has language independence. so java gets kicked out of the win platform, and then that's the end of the world. lalloo becomes pm of india.
is this just microsoft FUD? is this hypothetical question worthy of being asked in an actual forum?
Thursday, December 04, 2003
Type Casting ??
i wanted to ask: how does type casting of objects work, with respect to java/c#?
suppose i add String instances to a collection. as the collection returns Object, i then typecast the returned objects to String.
public Object get(int index);
this is the signature/prototype of the method that returns an object, whose result is then typecast to a String by (String)get(index);
my basic doubt: in polymorphism, the derived class knows about its parent, which is obvious. but for something like type casting, the parent class has to know which derived instance it is. if one of you could provide a clearer explanation it would be great. do you know of anything like v-tables in java/c#?
Monday, December 01, 2003
Polymorphism: C++ vs Java vs C#
As mentioned earlier, all three languages provide support for polymorphism. But to enable it, each language requires different syntax.
C++, as we've seen, requires the "virtual" keyword in the base class. To override this virtual method you only need to declare a method with the same signature in a derived class; there the "virtual" keyword is optional.
class Base
{
public:
virtual void fun() { std::cout << "Base::fun"; }
};
class Derived : public Base
{
public:
/* virtual */ void fun() { std::cout << "Derived::fun"; }
void fun2() { std::cout << "Derived::fun2"; }
};
Base* p;
p = new Base;
p->fun(); // Base's fun()
p = new Derived;
p->fun(); // Derived's fun()
In Java, ALL methods are virtual by default. NO extra keywords are required to enable polymorphism.
public class Base
{
public void fun() { System.out.print( "Base.fun" ); }
}
public class Derived extends Base
{
public void fun() { System.out.print( "Derived.fun" ); }
public void fun2() { System.out.print( "Derived.fun2" ); }
}
Base p;
p = new Base();
p.fun(); // Base's fun()
p = new Derived();
p.fun(); // Derived's fun()
In C# you need TWO keywords to enable polymorphism. In the base class you have to declare a function to be virtual using "virtual". In a derived class you have to explicitly specify that you intend to override a method using "override".
public class Base
{
public virtual void fun() { System.Console.Write( "Base.fun" ); }
}
public class Derived : Base
{
public override void fun() { System.Console.Write( "Derived.fun" ); }
public void fun2() { System.Console.Write( "Derived.fun2" ); }
}
Base p;
p = new Base();
p.fun(); // Base's fun()
p = new Derived();
p.fun(); // Derived's fun()
It's interesting to compare and contrast the different languages and to understand why they chose to do what they did.
C++, as always, is most concerned about space and efficiency. It will ONLY have the overhead of the vtable pointer IF you have any virtuals in there. All methods will be resolved at compile time, statically, by default, unless they are virtual.
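You can actually see this overhead. A minimal sketch (my own classes Plain and Poly; exact sizes are implementation-defined, but on a typical 32-bit compiler this prints 4 and 8, the difference being the vtable pointer):
#include <iostream>
class Plain
{
public:
    int m_data;
    void fun() {}           // non-virtual: resolved statically, no vtable
};
class Poly
{
public:
    int m_data;
    virtual void fun() {}   // virtual: the class now carries a vtable pointer
};
int main()
{
    std::cout << sizeof(Plain) << " " << sizeof(Poly) << std::endl;
    return 0;
}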
Java, on the other hand, concerned most with simplicity, defines all methods virtual. This means that ALL classes regardless of whether they need it or not will have v-tables. Efficiency suffers since ALL method calls will have an indirection. You could explicitly include the keyword "final" which is really the opposite of virtual. This will make the call static and will allow the compiler to resolve it at compile time. So, in Java, all methods will be resolved at run time, dynamically, by default, unless they are final.
Finally we have C#, which goes back to its C++ roots. It too will have v-tables ONLY if there are virtual functions in the class. But why introduce the "override" keyword? This is to help with versioning of components. Consider this...
You develop a class which has a bunch of virtual methods. You derive from that class and override whatever you need. All is good... the derived class behaves well. Now, you also define a completely independent new method in the derived class. Sometime later, the Base class designer decides to add a new virtual method with that same name to the Base class. Now, that completely independent method in the Derived class is "unintentionally" overriding the new Base class virtual method. More likely than not, it is NOT the correct behavior since it had no idea it was overriding anything. This will happen in both C++ as well as in Java. More easily in Java, since ALL methods are virtual by default. In C++, you have to explicitly say that it's virtual. C# won't have this problem since you need the "override" keyword in the derived classes.
Extending the above examples...
In C++, if you add a new virtual method to the Base class...
virtual void fun2() { std::cout << "Base::fun2"; }
and in Java if you add a new method to the Base class...
public void fun2() { System.out.print( "Base.fun2" ); }
then the corresponding fun2()'s in Derived will automatically override these.
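Here is the whole hazard in one compilable C++ sketch (my own minimal classes). Derived::fun2() was written as an independent method, but once Base grows a virtual fun2(), calls through a Base pointer silently land in Derived:
#include <iostream>
class Base
{
public:
    virtual void fun()  { std::cout << "Base::fun\n"; }
    // added later by the Base class designer:
    virtual void fun2() { std::cout << "Base::fun2\n"; }
};
class Derived : public Base
{
public:
    void fun()  { std::cout << "Derived::fun\n"; }
    // written BEFORE Base had fun2(), as an independent method...
    void fun2() { std::cout << "Derived::fun2\n"; }
};
int main()
{
    Derived d;
    Base* p = &d;
    p->fun2();   // prints "Derived::fun2" -- the unintentional override
    return 0;
}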
If you add a new virtual method to the Base class in C#
public virtual void fun2() { System.Console.Write( "Base.fun2" ); }
the compiler will generate a warning (NOT error) telling you that you need to be more explicit about what you are intending to do. Either add the "override" keyword to the derived class method fun2() saying that you DO want polymorphic behavior OR add the "new" keyword saying that you DON'T want polymorphic behavior and that you just want to shadow Base's fun2().