I thought with the beginning of the new year codeWord could do with a new look. What dyou think? Suggestions are welcome.
Keep them posts coming. Happy 2005!
Friday, December 31, 2004
Wednesday, December 29, 2004
Java Operator Overloading
I saw this comment somewhere...
---
Even if it is trivial to add operator overloading to Java and to make it simple to use, are you absolutely sure it's a good idea?
Operator overloading violates one of the central tenets of the Java language design: transparency. If you look at any piece of Java code (no matter who wrote it or where it's from) you can easily figure out exactly what it does. There is no "hidden" information; everything is stated explicitly. This philosophy makes Java an ideal language for Open Source and business programming where there may be many different contributors over a long period of time. It is easy to dive into a class, see what's going on and make any modifications necessary.
With operator overloading (and many of the dubious "improvements" made to the Java language in Java 1.5) we lose this transparency. External declarations not referred to internally can completely alter the meaning of a section of code.
That's bad.
Of course, some people don't agree that this emphasis on transparency is useful, so they program in (or at least advocate) other languages (like for example Lisp, Nice, or C++) where the language can be modified and transformed willy nilly. This kind of thing makes an interesting intellectual exercise; it does not however make for a good social environment to program in. For these people, Java must seem limited. However, the vast majority of programmers have rejected this approach (with some relief!) and now program in Java (or its evil twin C#)
The String concatenation argument is often brought up by advocates of operator overloading in Java, and in a way they do have a point: operator overloading can be useful. That is, it WOULD be useful if it didn't compromise code transparency so flagrantly! There is one big difference between String operator overloading and arbitrary operator overloading. When I sit down to maintain your code, I know what the String "+" operator does; it's in the Java Language Specification. On the other hand, I have no idea what your overloading of the "+" operator does on your "CustomerRecord" class. Without prior knowledge, I can't even tell if an operator is overloaded or not!
Operator overloading is indeed not high on Java programmers' list of desires (at least those that understand the design philosophy of the language). Rather, the very mention of it provokes feelings of fear and disgust. And rightly so! To those who would like to return to the days where every day was an obfuscated C contest, and where knowledge of the actual language didn't translate into the ability to understand and maintain code, I say go elsewhere: to the lands of C++, Perl, Python and their ilk, where you will find yourself in eerily familiar territory. Or if you hang around Java long enough you will probably see it ruined by people such as yourself screaming at Sun for more "improvements" along the same lines as Tiger.
---
I agree with his point about transparency. When you're reading someone else's code, or even your own code after some gap, trying to make sense of it is difficult. That's why trying to write "clean", self-documenting code is important. And it's the reason why "simple" languages like Java and C# are becoming more popular. There are limited ways (often only one) to do things. It's one of the reasons I don't like the C/C++ typedef statement. Most of the time you can't make sense of what the underlying type is.
In the case of operator overloading, though, I disagree with him. I don't think it destroys transparency. If anything, I think it makes the code more understandable. Operator overloading is just an abstraction over methods. And it's not like you can overload any operator: only a limited set of well-known operators with a standard, universal meaning can be overloaded. Yeah, there is potential for abuse, for example by overloading + to subtract instead of add, but there is nothing stopping anyone from subtracting in an Add() method either.
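To make the "abstraction over methods" point concrete, here's a minimal sketch in C++ (Java won't let you do this, so the example is in C++; the Money class is made up purely for illustration). The overloaded operator is just a method with a special name, and the compiler rewrites the operator syntax into an ordinary call:

#include <iostream>

class Money
{
public:
    explicit Money(long cents) : cents_(cents) {}

    // Morally the same as an Add() method: a + b is rewritten by the
    // compiler as a.operator+(b).
    Money operator+(const Money& other) const
    {
        return Money(cents_ + other.cents_);
    }

    long Cents() const { return cents_; }

private:
    long cents_;
};

int main()
{
    Money a(150), b(275);
    Money c = a + b;                      // sugar for a.operator+(b)
    std::cout << c.Cents() << std::endl;  // prints 425
    return 0;
}

If the body of operator+ subtracted instead of added, the code would be just as misleading as an Add() method that subtracts - the abuse potential is the same either way.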
Tuesday, December 28, 2004
Friday, December 24, 2004
The Concurrency Revolution
Herb Sutter, a C++ heavyweight, writes about the next evolution in programming in The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software. He acknowledges that Moore's law is going to (has already?) hit limitations and that old single-threaded applications won't just magically gain performance as processor speeds increase. As a way to counter the limitations, processors will increasingly turn to "parallelism", but apps will need to be tuned to enjoy the benefits.
One thing he mentions in the article is that it will be similar to the OOP shift during the 90s, with a similar learning curve. I think that the learning curve is going to be much higher. I haven't really done multithreaded programming, but I have read about multithreading in Java and .NET and was briefly introduced to it in one of my classes (I suppose OS will have a much more comprehensive coverage of it). Multithreading is inherently extremely hard to get right. Our brains are designed to think sequentially. Programming for parallelism is really hard. Even with simple multithreaded programs there are SO many ways to mess up. And because there isn't a straight line to follow, debugging is another nightmare.
I think for it to become as widespread as OOP, where everyone can easily adapt to the paradigm, it needs to be simplified. Java and .NET have threading built into the platform, which is a start. They have made it easier, but it's still a huge learning curve. Just like we have "Hello World" intro programs, we'll need to start having "Hello Parallel Worlds".
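In that spirit, here's a tiny "Hello Parallel Worlds" sketch using pthreads (a made-up example; the counter and iteration count are arbitrary). It's about the smallest multithreaded program you can write, and it's already broken: the two threads race on the shared counter, so the final value is typically less than the expected 2,000,000 and changes from run to run.

#include <iostream>
#include <pthread.h>

long counter = 0;
const long kIterations = 1000000;

void* Work(void*)
{
    for (long i = 0; i < kIterations; ++i)
        ++counter;              // read-modify-write on shared data, not atomic
    return 0;
}

int main()
{
    pthread_t t1, t2;
    pthread_create(&t1, 0, Work, 0);
    pthread_create(&t2, 0, Work, 0);
    pthread_join(t1, 0);
    pthread_join(t2, 0);

    // Without a lock around ++counter, updates get lost.
    std::cout << "counter = " << counter << std::endl;
    return 0;
}

That's the kind of bug I mean: it compiles cleanly, runs, and quietly gives a different wrong answer every time.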
Tuesday, December 21, 2004
Monday, December 20, 2004
STL.NET
I had mentioned in a previous post that C++ is being adapted for .NET. I also mentioned that the .NET guys were thinking about how to include the STL functionality in the .NET framework library. Well, C++ is special in that it supports different programming paradigms. So they've come up with STL.NET.
Stan Lippman, who is one of the devs on the project, has written an article about it. Here's the summary...
For the experienced programmer, the hardest part of moving to a new development platform such as .NET is often the absence of familiar tools through which she has honed her skills and on which she depends. For the experienced C++ programmer, one such essential toolkit is the Standard Template Library (STL), and its absence under .NET until now has been a significant disappointment. With Visual C++ 2005, we fix that by providing an STL.NET library. This article, the first in a series, provides a general overview of the STL program model using STL.NET – it discusses sequential and associative containers, the generic algorithms, and the iterator abstraction that binds the two, using plenty of program examples to illustrate each point. It begins by briefly considering the alternative container models available to the .NET programmer using C++ -- the existing System::Collections library, the new System::Collections::Generic library, and, of course, STL.NET. To provide for the widest readership, this article does not require familiarity with the STL library; however, it does presume some experience with the C++ programming language.
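To give a feel for the model the article is describing, here's a plain ISO C++ STL snippet (not the STL.NET syntax itself, which I haven't tried yet): a sequential container, a generic algorithm, and the iterators that glue the two together.

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    for (int i = 10; i > 0; --i)
        v.push_back(i);

    // The same sort() works on any container with random-access iterators;
    // the algorithm never needs to know it's looking at a vector.
    std::sort(v.begin(), v.end());

    for (std::vector<int>::iterator it = v.begin(); it != v.end(); ++it)
        std::cout << *it << ' ';
    std::cout << std::endl;
    return 0;
}

The promise of STL.NET is that this same pattern carries over to .NET containers, alongside System::Collections and System::Collections::Generic.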
Wednesday, December 15, 2004
Re: which is faster : C or C++?
So, if I want to write such code (if?... hell I DO have to write such code), which is a better option - C or C++? In this case, is it right to say that you could use all the good organisation and 'cleanliness' of using classes and get the same performance if you let go of virtual functions?
I guess you've answered your own question. It's clear that your most important criterion is performance. And since you're only debating C vs C++, C is more "lightweight" and you should be able to grind out more instructions/cycles with it.
But again, you are the only one who knows enough about your project to make the decision. Generally, you'd need to consider a lot more than just performance when choosing languages. In your case, you're deciding between C and C++. C++ (as we've all agreed) has a lot more to offer over C. But at the same time, you lose certain advantages that C provides - one of them being performance (and again, this can be argued forever).
Looking at your project, would OOP be helpful? Dyou think that having classes will help in organizing and designing your project in a "better" way than C with its separation of functions and data? Think about the bigger picture rather than debate about "malloc()" vs "new".
BTW, post some info about your project.
Tuesday, December 14, 2004
Re: which is faster : C or C++?
I guess I didn't pose my question very clearly... will try to do it in this post. First of all, I must clarify that I am as big a fan of C++ as anybody can be and I'd choose C++ over C almost ALL the time unless it's absolutely necessary to use C. It's that particular 'absolutely necessary' case I'm examining here. All the things you guys have written make sense and I agree completely.
While that little function overhead is insignificant in most cases and is nothing compared to the IMMENSE additional flexibility and functionality that you gain, it would be worthwhile contemplating under what circumstances this overhead could become significant. Codes that go into CFD applications can take a ridiculously long time to execute. Let's say a C++ code that does the same thing as a 5 sec C code takes 7.5 sec to execute. Doesn't seem like much... you don't give a damn. Stick to C++. But when you're talking about 50 and 75 DAYS, the difference is HUGE. And I'm not kidding here. There are codes which take that long to execute.
So, if I want to write such code (if?... hell I DO have to write such code), which is a better option - C or C++? In this case, is it right to say that you could use all the good organisation and 'cleanliness' of using classes and get the same performance if you let go of virtual functions?
Once again, except for this case of infinitely large execution times, C++ is a better option than C... no doubt about it. But what about this case?
Saturday, December 11, 2004
Re: which is faster : C or C++?
Primarily what I love about C++ is the STL. It has never given me more pleasure to see a library in action. Granted that the organisation of the STL reflects the fact that there were multiple design heads involved but yet it's the most beautiful piece of code I have ever seen.
I dunno if I can say it's the most beautiful piece of code (I haven't actually read the source), but I fully agree with you that it's a fantastic library. The way they have designed it, with such a wonderful separation of containers, iterators, algorithms and functions is quite brilliant.
Whats even better about C++ is it doesn't force a programming paradigm on you, it lets you design your solution in any way you wish, so if you want to have a C style program, well just go right ahead!!
Agree again. The best thing about C++ is that it gives the programmer a lot of freedom. It supports procedural, object oriented and generic programming. I don't think there's any other language that does that. Microsoft is also fully integrating it into .NET in the next version of their compiler, so it will support garbage collection and will have access to the Base Class Library. Just another way C++ can be used.
.NET and Java are supporting generics in their newest versions. Naturally they are looking at C++ for ideas. But I feel they won't be able to come up with as elegant a solution as the STL because they only support OOP. Some proposed functionality I've read about for the next version of .NET collections is quite ugly, like including the same algorithm functionality in each collection. There is no iterator abstraction, so each collection in a way is different. It's a dilemma for them: how to support all the functionality in an OOP way. It'll be interesting to see what they finally come up with. Haven't seen how Java handles it.
Always remember that C++ was meant to be a better and "safer" C.
The general trend, it seems, is that all the gurus (i.e. Bjarne and friends) are encouraging the use of more abstractions and the STL in the name of convenience, maintenance and safety. For example: go for vectors instead of straight arrays, use as little manual memory management as possible, or if you need to play with pointers, go for some of the safe versions available through the STL and Boost. I took a course on generic programming where we used the STL. We hardly new'd and delete'd. It's a testament to C++'s flexibility. It's able to adapt to the evolving paradigms.
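Here's a tiny before/after sketch of what that advice means in practice (a made-up example; the values are arbitrary):

#include <iostream>
#include <vector>

int main()
{
    // The old habit: manual allocation, manual size bookkeeping,
    // and a delete[] you must never forget.
    int* raw = new int[3];
    raw[0] = 1; raw[1] = 2; raw[2] = 3;
    delete[] raw;

    // The STL habit: the vector owns its memory, grows on demand,
    // and cleans up after itself when it goes out of scope.
    std::vector<int> v;
    v.push_back(1);
    v.push_back(2);
    v.push_back(3);
    std::cout << "size = " << v.size() << std::endl;
    return 0;
}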
virtual functions are implemented using a lookup table that gives you a function pointer for each derived class type. Thus, these kinds of functions simply cannot be inlined
If a compiler is smart enough, it should be able to inline some calls to virtual functions. There's also a way to explicitly (statically) call virtual functions...
#include <iostream>
using namespace std;

class Base
{
public:
    virtual void Function1()
    {
        cout << "Base::Function1";
    }
};

class Derived : public Base
{
public:
    virtual void Function1()
    {
        cout << "Derived::Function1";
    }
    void Function2()
    {
        Base::Function1(); // qualified call - resolved statically, so it can be inlined
    }
};
Correct me if I'm wrong about this fact. Or if this particular example is wrong.
I think the guys designing and implementing C++ were as concerned about performance as anyone. They did everything possible to limit the performance hits. I don't think anyone can fault them. Dinesh had recommended a book long back called "Inside the C++ Object Model". It gives you a good idea about how they implemented a lot of the features. Virtual functions, and polymorphism in general, are discussed at length. And it gives a lot of examples of the C code CFront generates, so you can see exactly what comes out the other end.
Friday, December 10, 2004
Re: which is faster : C or C++?
Obviously, I am interpreting this question as structured vs object-oriented programming. I personally have used C++ for ages without caring to make a class -- and I really appreciate the fact that it doesn't force a programming paradigm on us :)
Regarding optimization, one thing is clear -- especially in the GNU context: GCC does optimization on an intermediate form of code that it derives from the front-end language like C or C++. Hence, optimization must be just as good for both. In fact, I believe that taking the CFront route (C++ -> C -> ASM) instead of the GCC route (C++ -> ASM) will produce assembly that's just as good, but will take much longer to do so.
So, what's the point? How is C++ optimization different? I'd say that the "optimization needs" of a C++ program are different.
Let's take an illustrative historical case: encapsulation. Encapsulation brought in a new era where the number of functions written by a programmer increased manifold! Firstly, because C++ dissuades the use of macros, and secondly, because there is a higher tendency to write constructors and destructors in C++, whereas in C you'd type in the whole thing each and every time you needed it. Thus, older compilers that did not have good enough support for inlining functions often failed to produce overall good C++ code.
Of course, any self-respecting compiler today has really good inlining support, so this example I have given probably no longer holds. So now, let's move on to simple polymorphism: virtual functions are implemented using a lookup table that gives you a function pointer for each derived class type. Thus, these kinds of functions simply cannot be inlined, and they also use "jump to address in a variable"; something like this: (*var)(). A programming style that enforces use of such control jumps is BAD. The reason is that most computer architectures today have built-in support for branch prediction, and usage of such statements defeats their purpose. I do not deny that you can do this in C as well; but you would usually not! Whereas the use of virtual functions in C++ is almost a norm!
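For anyone who hasn't seen it, here's roughly what that table-based dispatch looks like when written out by hand (a simplified sketch of what the compiler generates; real vtables carry more slots, type info and so on):

#include <cstdio>

struct Shape;
typedef double (*AreaFn)(Shape*);

struct VTable { AreaFn area; };

struct Shape
{
    const VTable* vptr;   // one hidden pointer per object
    double w, h;
};

double RectArea(Shape* s) { return s->w * s->h; }
double TriArea(Shape* s)  { return 0.5 * s->w * s->h; }

const VTable kRectVT = { RectArea };
const VTable kTriVT  = { TriArea };

int main()
{
    Shape r = { &kRectVT, 3.0, 4.0 };
    Shape t = { &kTriVT,  3.0, 4.0 };
    Shape* shapes[2] = { &r, &t };

    for (int i = 0; i < 2; ++i)
        std::printf("%f\n", shapes[i]->vptr->area(shapes[i]));  // the (*var)() style indirect call
    return 0;
}

The call in the loop goes through a pointer loaded at run time, which is exactly why the compiler can't inline it and the branch predictor has a harder time with it.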
I have no idea about multiple inheritance etc.; god knows why they created such a feature! Also, I have never used the STL, so I don't really know how its widespread use influences the optimization needs of C++ code.
All said, I definitely agree that C++ is a wiser choice than C for any hardcore hosted development work, because of the shorter development time. I just seriously recommend restricting polymorphism to only those places where it really, really simplifies things.
BTW, g++ usually produces bloat in the form of a symbol table that's used for debugging etc.; it won't even be copied to memory... it just sits on your hard disk.
Google Suggest
http://www.google.com/webhp?complete=1&hl=en
Yet another innovation from everyone's favorite company. Go through the alphabet to see what the suggestions are. Some are pretty interesting (ex. 'p').
Re: which is faster : C or C++?
I agree with Mohnish here. When comparing languages for implementing something, you gotta see what suits the purpose best. The cost of virtual functions and polymorphism in C++ is a single virtual table pointer in each object, plus the run-time resolution through those pointers of what exactly should be called based on the class hierarchy. But what you get for that is a whole new paradigm under your control. A whole new world view, if you will. No more is programming based on thinking about what piece of information is processed when; rather, we get to talk in more abstract, high-level terms.
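A quick way to see that single-pointer cost for yourself (the exact numbers depend on your compiler, architecture and padding):

#include <iostream>

struct Plain
{
    double x, y;
    double Area() const { return x * y; }
};

struct WithVirtual
{
    double x, y;
    virtual double Area() const { return x * y; }
};

int main()
{
    // Same data members; the size difference is the hidden vtable pointer.
    std::cout << "sizeof(Plain)       = " << sizeof(Plain) << std::endl;
    std::cout << "sizeof(WithVirtual) = " << sizeof(WithVirtual) << std::endl;
    return 0;
}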
A language which gives you the power to do object oriented programming at the cost of a single virtual table pointer is a piece of work in itself. Primarily what I love about C++ is the STL. It has never given me more pleasure to see a library in action. Granted that the organisation of the STL reflects the fact that there were multiple design heads involved but yet it's the most beautiful piece of code I have ever seen, if you don't agree just open up the algorithm or functional standard header and read for yourself, it's beautiful!! :) Whats even better about C++ is it doesn't force a programming paradigm on you, it lets you design your solution in any way you wish, so if you want to have a C style program, well just go right ahead!! Always remember that C++ was meant to be a better and "safer" C.
Over C, I'd choose C++ any day; besides, I hate writing the cumbersome printf() statements for everything, cout is so much better :) (ok, that was my cheesy joke for the day! sorry!!)
Put things in context and you'd see that C++ gives you a lot more than C, at least that's what I think. One gripe I have with the g++ compiler is that it produces a lot of code; the strip option (-s) does work well, but still, I have never been able to figure out what bloat code it writes! But then again, with memory so cheap now it doesn't really matter.
By the way, can you factually prove that C++'s optimization is not as good as C's (or were you saying something else)?? g++ gives three levels of optimization - -O1, -O2 and -O3; all my programs are compiled with the -s -O3 options (releasable code, that is). C++ by its very design allows the compilers a lot of leeway as to what they can optimize. And the GNU compilers sure do make use of it!
All in all whatever the speed comparisons, if I had a big project to work on, I'd be betting on C++ to get the job done in a good and maintainable way!
Dinesh.
Wednesday, December 08, 2004
Re: which is faster : C or C++?
which is faster : C or C++?
I think it would be a good idea to first define "faster". What exactly do you mean? Faster in what context? In a one-line program or in a 100,000-line program? And how do you analyse the performance?
Personally, I feel it's an exercise in futility to compare which language is "faster" than the other. The reason I feel that way is because you'll find studies and papers claiming that each language can beat every other one.
Use of virtual functions and run-time polymorphism slows down the code a little. So if this feature of C++ is not used, C++ code would run as fast as C code.
I think this is the wrong way to look at C++. C++ was created to be a "better" C. You can take that to mean whatever you want (everyone has their own opinion about why it's better (if at all)). From what I understand, as applications started getting larger, using C to develop them was getting to be a pain in the arse. They needed something that would make it easier to write maintainable code (Isn't code always easier to write than read?). Enter C++. It created another level of abstraction, just as C created an abstraction over assembly, and assembly over machine code, and machine code over gates, and gates over the 0s and 1s, and the 0s and 1s over the electrons... you get the picture (did I miss a level?).
Anyway, my point (yeah I have one!) is that if you look at C++ feature by feature and look to eliminate something so as to get it to run as "fast" as C... you might as well cut to the chase and go play with electrons.
Having said this, the difference between run times is due to the compilers and not the languages themselves. Last I heard, C++ compilers don't optimize C++ code as well as C compilers optimize C code.
Did you know that the first C++ compiler (CFront) generated C code... not machine code? So any optimization made to C compilers would apply to C++ code as well. Today, every C++ compiler most likely generates native code, but I don't see any reason why they would be any less optimizing than C compilers.
Again the abstractions bit comes in. It's all about the amount of control you (the programmer) want to have. You can write programs with 0s and 1s if you want to... you've got all the control in the world. I wouldn't imagine it would be very fun to do, but you can if you want to. You sure as hell won't be very productive. Just as you lost some control when you went from C to C++ (creating/destroying objects does multiple things behind the scenes... you don't have control over the entire process), going from C++ to Java/C# you lose even more control. But what you gain is productivity.
The code that goes into CFD applications handles millions of points, so even a little function overhead (eg virtual function) is significant.
I read this on some (smart) dude's blog about performance: "Always set goals and always measure". What's good enough for you? If you code the app in C++ and it's slower than it was with C, but good enough then does it matter? Depending on how good you are with each language you might be a lot more productive with C++. So it's a tradeoff.
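On the "always measure" point, even something as crude as timing the hot loop tells you more than any language war (a rough sketch; real benchmarking needs a lot more care than this):

#include <ctime>
#include <iostream>

int main()
{
    std::clock_t start = std::clock();

    // Stand-in for the real work you care about.
    double sum = 0.0;
    for (long i = 0; i < 50000000L; ++i)
        sum += i * 0.5;

    double seconds = double(std::clock() - start) / CLOCKS_PER_SEC;
    std::cout << "sum = " << sum << ", took " << seconds << " s" << std::endl;
    return 0;
}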
Bottomline - if virtual functions, run-time polymorphism isn't used, C++ code would run as fast as C code.
There are a lot more abstractions than just virtual functions in C++, so I doubt that avoiding just that one would make a huge difference.
For starters, about me -- I have no clue to Java; I'm pretty good at C on UNIX/Linux etc and I can bear C++.
Firstly, cheers on your first post. Hope to see a lot more.
Just to give a brief intro to what we dudes are about...
Hrishi - (you probably know more) C/Linux
Rahul - Java/Linux
Dinesh - C++/AI/Game engines/Philosophy
Yours truly - C#/.NET/Bit of Java/Bit of C++
PS: Did you guys know we guys here call Hrishikesh, "Micro"? Micro?! Huh! near Mega you'd say... well... but then it all started from a Micro-elephant :D
Dyou see the archive links on the right hand side of the page? Go to November 2003 and check the very first post's title and ask Micro to explain it. Post your reaction.
Hey guys!
Ok, looks like Hrishikesh has plucked the right string there... C/C++ usually gets me started :-)
For starters, about me -- I have no clue to Java; I'm pretty good at C on UNIX/Linux etc and I can bear C++. Regarding my ignorance of Java, all I will say is that the "Hello world!" I wrote took so long to start off that I gave up :D Well, maybe though, my body and soul is written in Java. See, Mohnish added me to the blog almost a week ago. And my first post comes now. Pretty much like the Hello World I wrote... took a looong time to start, but worked fine after that. (bad joke -- you said this was the place :P)
I am looking forward to seeing your comments on C vs C++. In fact, let's add Java to it! Let's see what you hard-core Java fellows have to say about the efficiency of the object-oriented features that Hrishikesh (in my opinion, correctly) labels as having sub-optimal implementations in C++. What about Java?
I will come up with a post detailing what I like and dislike about C++ soon...
Till then,
PS: Did you guys know we guys here call Hrishikesh, "Micro"? Micro?! Huh! near Mega you'd say... well... but then it all started from a Micro-elephant :D
which is faster : C or C++?
Let's reignite this age-old debate; well, maybe not all that age-old, but definitely something worth discussing. It is a widely held notion that C is faster than C++, though I haven't found any concrete reasons or literature to support this claim.
This is what I have inferred from what I have read -
Use of virtual functions and run-time polymorphism slows down the code a little. So if this feature of C++ is not used, C++ code would run as fast as C code.
Having said this, the difference between run times is due to the compilers and not the languages themselves. Last I heard, C++ compilers don't optimize C++ code as well as C compilers optimize C code.
The code that goes into CFD applications handles millions of points, so even a little function overhead (eg virtual function) is significant.
Bottomline - if virtual functions, run-time polymorphism isn't used, C++ code would run as fast as C code.
Thoughts, comments, links?
Monday, December 06, 2004
Re: Is Some Software Meant to be Secret?
if I provide source of my app, don't I have to provide it during development phase too?
Isn't this normal practice for open source apps? Couldn't you download daily builds of Firefox?
Yes. That's why I felt Tim Bray's point - that a super feature being included will not give an advantage to rivals till it's released - doesn't quite hold. Maybe their design would be different, but an idea could still be incorporated.
I think a major difference between closed source apps and open source counterparts is that open source doesn't really have a strong sense of versioning. It is a very iterative process. Using FireFox as an example... people have been using it way before they released 1.0. It's part of the "culture". You're expected to keep up.
I disagree here. The users of open source APIs are generally more adventurous, but the feature set for each version is generally clearly defined. If more companies start using open source products, they will be slower to update versions, let alone take beta releases.
How does Sun do it for Java APIs?
I am not sure about the Java API. The Java JDK has been released as a project at java.net. This is a Sun site where loads of open and not-so-open projects are hosted. So you can start off with Java 6.0 today. Sun has mentioned that they are going to provide faster releases in the future.
MS sees a subscription based model as the future.
Dyou really think this model will work?
Dunno. Any new model will take time for adoption. Sun is actually doing it now. It seems scary, but it seems more correct to me. In today's world everything is connected to the net. For a company (that buys software) subscription seems better, as they get new releases. They can switch after a year with lower costs. Lots of companies pay loads for new software, which leaves them with old versions very soon. And a lesser-functionality version can be passed to the kids to play with. Everyone, it seems, would be much happier. Subscription is like your cable or cell; it's just that we are not used to it now. And with web services this model seems even easier to implement.
I just do not see the need to please anyone else
I was joking. You know... going public as in getting listed on an index like Nasdaq and so pleasing our shareholders. Maybe I should make more use of ';-)' in the future ;-)
Dude. That would not compile. Here's why..
1. class shareholderJoke extends nasdaqPatheticJoke {} --- missing
2. And the ;-) Annotation was missing too. (Yup.. I still do not know how to write Annotations!!)
And BTW, we do have a new member but he's been quiet. Hrishi's pal from IIT, Nikhil, is the latest codeWordian (too cheesy?). Let's have some posts dude.
Welcome aboard. This (as you might have realised) is the place for really bad jokes. You might get a bit of knowledge once in a while.
Sunday, December 05, 2004
Re: Is Some Software Meant to be Secret?
if I provide source of my app, don't I have to provide it during development phase too?
Isn't this normal practice for open source apps? Couldn't you download daily builds of Firefox?
I think a major difference between closed source apps and open source counterparts is that open source doesn't really have a strong sense of versioning. It is a very iterative process. Using FireFox as an example... people have been using it way before they released 1.0. It's part of the "culture". You're expected to keep up. So the release/development phase is sort of blurred. It's not really the case for closed source apps. There's a clear separation. So even if these closed source guys open their code, it would most likely be with the final release. How does Sun do it for Java APIs?
MS sees a subscription based model as the future. I think Web-services will play a big role in this. Sun has a subscription model for JDS and plans something similar for Solaris 10. They even want to offer grid computing wherein the customer simply pays for CPU cycles. So the revenue model is changing.
Dyou really think this model will work? Somehow I can't imagine it will ever be successful. This idea of pay per use will be too hard for many people to swallow. People are used to the idea of owning their software and using it however they want. Moving to the subscription model won't be easy because you're not in control. At anytime, anyone can cut off your access. I think MS did some trials in a few countries and it bombed. Maybe it would work in large companies where there might be a possibility of cutting costs. But for personal use - I highly doubt it.
There should be no pressure on us. We continue what we do. If someone else is interested, they join. Simple. I just do not see the need to please anyone else
I was joking. You know... going public as in getting listed on an index like Nasdaq and so pleasing our shareholders. Maybe I should make more use of ';-)' in the future ;-)
And BTW, we do have a new member but he's been quiet. Hrishi's pal from IIT, Nikhil, is the latest codeWordian (too cheesy?). Let's have some posts dude.
Re: Is Some Software Meant to be Secret?
Tim Bray and Microsoft's Joe Marini
To open source or not. 'Tis a very big question.
Wrt the articles, if I provide the source of my app, don't I have to provide it during the development phase too? In that case any new feature can be picked up by a rival before it's out in the market, and then any major benefits may be lost.
If the source is not provided early, then it can be argued that the project is not really open-source.
It depends a lot on what the source of revenue for the company is. If you have a large user-base then money can be made through subscription too. Disruptive technology was pointed out in some previous blog. Lots of open-source apps are basically destroying closed, proprietary apps. Users can get similar or better features for free and no one wants to pay - like Firefox. Unless you have a major app for which there is no competition, only then can you afford being closed. But eventually some open-source app will catch up and then you'll not have much of a choice. Basically it depends on the project and the team. For newer applications I think it makes more sense to be open. But then again a proper source of revenue has to be thought of.
MS sees a subscription based model as the future. I think Web-services will play a big role in this. Sun has a subscription model for JDS and plans something similar for Solaris 10. They even want to offer grid computing wherein the customer simply pays for CPU cycles. So the revenue model is changing.
Ok that's two for going public. I guess we'll do it. But remember, that puts pressure on us to please the shareholders.
There should be no pressure on us. We continue what we do. If someone else is interested, they join. Simple. I just do not see the need to please anyone else.
Saturday, December 04, 2004
New India Glimpses
From this dude's blog. Subscribe to it!
New India Glimpses
India is witnessing amazing change. While life on a day-to-day basis still has its challenges (poor road infrastructure, erratic power, limited bandwidth, growing urban-rural divide, quality and availability of education, a population that is still growing more rapidly than available resources), there is a lot that is happening to augur well for the future.
Cellphones: Recently, the number of cellphones in India passed the number of landlines. This is not just a statistical milestone. It signifies the choice that Indians are making. By leapfrogging to a wirefree world, communications in India is being transformed, and so is life. Hoardings in Mumbai announce the availability of TV via EDGE networks and railway reservations via the handset. About 2 million new users a month are being added to the current base of about 45 million cellphone users. India has one of the lowest tariffs in the world for mobile telephony. Text messaging has become a way of interaction for many. Value-added services like ringtones and gaming are growing. State-of-the-art networks and feature-rich handsets across India are beckoning the next set of users. Cellphone companies are profitable at average monthly revenues of Rs 400 ($9) per user.
Cable TV: A hundred channels for all of Rs 250 ($5.50) – that's what about 55 million households pay to enjoy their television. And there is no dearth of new channels launching every month. I still remember the launch of Zee TV, India's first private channel – it happened just over a decade ago. A mélange of cable companies are now tying up with Internet Service Providers to offer "broadband" (more like, always-on narrowband) Internet to homes.
Wireless Data: Reliance Infocomm's CDMA-based wireless data networks cover more than a thousand towns and cities across India. Lottery terminals, ATMs and even credit card authorization terminals are using them to connect to centralised servers. Providing speeds of 30-60 Kbps (versus a theoretical maximum of 115 Kbps), these data networks are also providing laptop users the ability to connect to the Internet in under five seconds for 40 paise a minute (less than a penny) from almost anywhere in urban and semi-urban India.
Cybercafes: Even as the cost of ownership of a computer remains high, thousands of cybercafes function as "Tech 7-11s" in neighbourhoods. Sify's 2,000 iWays offer not just Internet access, but also Internet telephony and video conferencing.
Internet Telephony: I still remember the time a few years ago when phone calls to the US cost nearly Rs 100 a minute. The other day, a sales representative from one of the VoIP companies came calling, offering calls for less than Rs 2 a minute. Smart Indians are also buying Vonage boxes in the US and getting them to India to make calls to the US for a flat rate of $30 (Rs 1,350) a month. Geography indeed has no barriers!
eCommerce: For all who think we have been left behind in the b2c revolution, think again. Indian Railways and Deccan Airways have proven that Indians will pay for transactions over the Internet. The Indian Railways website addresses one of the major pain points in the life of many – booking train tickets and checking the reservation status of waitlisted tickets. Deccan Airways, one of the new low-cost carriers, does bookings of Rs 1.5 crore ($330,000) daily over the Internet.
Matrimonials and Jobs: The way people find lifemates and new employers is changing. Sites like Shaadi.com and BharatMatrimony.com offer to connect prospective brides and grooms. Job portals like MonsterIndia.com (which also owns JobsAhead) and Naukri.com have increased liquidity and fluidity for people seeking new career opportunities.
Retailing: India is witnessing an unprecedented retail revolution as malls and chains proliferate. Investments in IT are helping them not only manage their supply-chain effectively but also build and maintain customer relationships. The malls and multiplexes are becoming new hangout places. With the boom in outsourced services, a growing youthful population has more to spend. Easier access to credit is also fueling an appliances and automobiles boom.
The Rs 500-a-month PC: Recently, HCL launched a computer on installment payments – Rs 500 per month. This is a good start, even as computing by itself faces challenges of affordability, desirability, accessibility and manageability. The computing industry is not learning two important lessons from the telecom industry – those of zero-management user devices and subscription plans (as opposed to installments).
Rural India: For a variety of reasons, rural India still remains frozen in time. As governments start believing that free electricity to farmers can be a passport for electoral success, investments in other areas are likely to get compromised. There are a few signs of hope – ITC's eChoupals and n-Logue's kiosks are providing a platform for trade and services. But rural India still has a long way to go.
India is arriving as a market for global companies. Virgin is considering investments in telecom and low-cost airlines. Cisco closed a $100 million deal with VSNL for metro Ethernet. Most luxury brands are already available or will be. India is a melting pot for many simultaneous revolutions across multiple industries. As urban incomes grow, a generation seeks to race ahead. With one of the most youthful populations in the world, aspirations are on the rise. The next few years are critical. If we can do things right, we can unlock the potential of millions. If not… it will be yet another case of so near, yet so far. The race is not with China, it is against our own mindsets. Tomorrow's world is happening. Our actions can hasten it or delay it. Hopefully, this time around, we can cross the chasm. For that, India needs to build its digital infrastructure right.
As HP's Carly Fiorina wrote in The World in 2005: "Getting there is going to require the right blend of realism and optimism. We need to be realistic that none of this is going to be easy. But we also need to be optimistic, because if we get this right, digital technology will make more things more possible for more people in more places than at any time in history. That alone is worth the journey." The next Google will come out of the opportunities that technology is creating in the context of the next users. What can we do to build out tomorrow's world first in India and then across other emerging markets?
Friday, December 03, 2004
Is Some Software Meant to be Secret?
Wednesday, December 01, 2004
The Daily WTF
Check it out at http://thedailywtf.com/forums.aspx. RSS feed at http://thedailywtf.com/rss.aspx?ForumID=12&Mode=0
Describes itself as "Curious Perversions in Information Technology". Every day they post a new coding horror, most of them taken from real-world code that people have come across. It covers a whole range of languages.
As a sampler, check out today's post...
------------------------------------------------------------------------------------
The .NET developers out there have likely heard that using a StringBuilder is a much better practice than string concatenation. Something about strings being immutable and creating new strings in memory for every concatenation. But, I'm not sure that this (as found by Andrey Shchekin) is what they had in mind ...
public override string getClassVersion() {
return
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append("V0.01")
.append(", native: ibfs32.dll(").ToString())
.append(DotNetAdapter.getToken(this.mainVersionBuffer.ToString(), 2)).ToString())
.append(") [type").ToString())
.append(this.portType).ToString())
.append(":").ToString())
.append(DotNetAdapter.getToken(this.typeVersionBuffer.ToString(), 0xff)).ToString())
.append("](").ToString())
.append(DotNetAdapter.getToken(this.typeVersionBuffer.ToString(), 2)).ToString())
.append(")").ToString();
}
Note that, since this is J#, StringBuffer and StringBuilder are the same thing.
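------------------------------------------------------------------------------------
For contrast, the idiomatic version would use a single builder, append to it in sequence, and convert it to a String exactly once at the end. Here's a minimal sketch in plain Java (StringBuilder rather than J#'s StringBuffer); the fields mainVersionBuffer, typeVersionBuffer and portType and the DotNetAdapter.getToken helper are just carried over from the snippet above and assumed to exist, so treat it as illustrative rather than the original code.
// A minimal sketch, not the original code: one builder, one toString() at the end.
// mainVersionBuffer, typeVersionBuffer, portType and DotNetAdapter.getToken are
// assumed to exist as in the snippet above.
public String getClassVersion() {
    StringBuilder sb = new StringBuilder();
    sb.append("V0.01")
      .append(", native: ibfs32.dll(")
      .append(DotNetAdapter.getToken(this.mainVersionBuffer.toString(), 2))
      .append(") [type")
      .append(this.portType)
      .append(":")
      .append(DotNetAdapter.getToken(this.typeVersionBuffer.toString(), 0xff))
      .append("](")
      .append(DotNetAdapter.getToken(this.typeVersionBuffer.toString(), 2))
      .append(")");
    return sb.toString(); // the only point where a String is actually created
}
Nine nested buffers and eight intermediate strings collapse into one buffer, which is the whole point of using a builder in the first place.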