Friday, December 31, 2004

New Look

I thought that with the beginning of the new year codeWord could do with a new look. What d'you think? Suggestions are welcome.

Keep them posts coming. Happy 2005!

Wednesday, December 29, 2004

Java Operator Overloading

I saw this comment somewhere...

---
Even if it is trivial to add operator overloading to Java and to make it simple to use, are you absolutely sure it's a good idea?

Operator overloading violates one of the central tenets of the Java language design: transparency. If you look at any piece of Java code (no matter who wrote it or where it's from) you can easily figure out exactly what it does. There is no "hidden" information; everything is stated explicitly. This philosophy makes Java an ideal language for open source and business programming, where there may be many different contributors over a long period of time. It is easy to dive into a class, see what's going on and make any modifications necessary.

With operator overloading (and many of the dubious "improvements" made to the Java language in Java 1.5) we lose this transparency. External declarations not referred to internally can completely alter the meaning of a section of code.

That's bad.

Of course, some people don't agree that this emphasis on transparency is useful, so they program in (or at least advocate) other languages (like for example Lisp, Nice, or C++) where the language can be modified and transformed willy-nilly. This kind of thing makes an interesting intellectual exercise; it does not, however, make for a good social environment to program in. For these people, Java must seem limited. However, the vast majority of programmers have rejected this approach (with some relief!) and now program in Java (or its evil twin C#).

The String concatenation argument is often brought up by advocates of operator overloading in Java, and in a way they do have a point: that operator overloading can be useful. That is, it WOULD be useful if it didn't compromise code transparency so flagrantly! There is one big difference between String operator overloading and arbitrary operator overloading. When I sit down to maintain your code, I know what the String "+" operator does; it's in the Java Language Specification. On the other hand, I have no idea what your overriding of the "+" operator does on your "CustomerRecord" class. Without prior knowledge, I can't even tell if an operator is overridden or not!

Operator overloading is indeed not high on Java programmers' list of desires (at least those that understand the design philosophy of the language). Rather, the very mention of it provokes feelings of fear and disgust. And rightly so! To those who would like to return to the days where every day was an obfuscated C contest, and where knowledge of the actual language didn't translate into the ability to understand and maintain code, I say go elsewhere; to the lands of C++, Perl, Python and their ilk, where you will find yourself in eerily familiar territory. Or if you hang around Java long enough you will probably see it ruined by people such as yourself screaming at Sun for more "improvements" along the same lines as Tiger.
---

I agree with his point about transparency. When you're reading someone else's code, or even your own code after some gap, trying to make sense of it is difficult. That's why trying to write "clean", self-documenting code is important. And it's the reason why "simple" languages like Java and C# are becoming more popular. There are limited (often only one) ways to do things. It's one of the reasons I don't like the C/C++ typedef statement. Most of the time you can't make sense of what the underlying type is.

In the case of operator overloading I disagree with him. I don't think it destroys transparency. If anything, I think it makes the code more understandable. Operator overloading is just an abstraction over methods. And it's not like you can overload any operator; there is only a limited set of well-known operators with a standard, universal meaning that can be overloaded. Yeah, there is potential for abuse, for example by overloading + to subtract instead of add, but there is nothing stopping anyone from subtracting in an Add() method either.

Tuesday, December 28, 2004

GLAT

This one's pretty old, but in case you hadn't seen it: it's the Google Labs Aptitude Test. Check out the four pages here, here, here and here.

They could've saved a lot of trees by just saying "Anyone with an IQ below 140 need not apply".

Scott McNealy at his best

http://www.theregister.co.uk/2004/12/23/mcnealys_xmas_dream/print.html

Friday, December 24, 2004

The Concurrency Revolution

Herb Sutter, a C++ heavyweight, writes about the next evolution in programming in The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software. He acknowledges that Moore's Law is going to (has already?) hit limitations and that old single-threaded applications won't just magically gain performance as processor speeds increase. To counter the limitations, processors will increasingly turn to "parallelism", but apps will need to be tuned to enjoy the benefits.

One thing he mentions in the article is that it will be similar to the OOP shift during the 90s, with a similar learning curve. I think the learning curve is going to be much higher. I haven't really done multithreaded programming, but I have read about multithreading in Java and .NET and was briefly introduced to it in one of my classes (I suppose OS will have a much more comprehensive coverage of it). Multithreading is inherently hard to get right. Our brains are designed to think sequentially; programming for parallelism is really hard. Even with simple multithreaded programs there are SO many ways to mess up. And because there isn't a straight line to follow, debugging is another nightmare.

I think that for concurrency to become as widespread as OOP, where everyone can easily adapt to the paradigm, it needs to be simplified. Java and .NET have threading built into the platform, which is a start. They have made it easier, but there's still a huge learning curve. Just like we have "Hello World" intro programs, we'll need to start having "Hello Parallel Worlds".

Tuesday, December 21, 2004

EPIC 2014

http://www.broom.org/epic/

Where d'you think the future of online news is heading?

Monday, December 20, 2004

STL.NET

I had mentioned in a previous post that C++ is being adapted to .NET. I also mentioned that the .NET guys were thinking about how to include the STL functionality in the .NET framework library. Well, C++ is special in that it supports different programming paradigms. So they've come up with STL.NET.

Stan Lippman, who is one of the devs on the project, has written an article about it. Here's the summary...

For the experienced programmer, the hardest part of moving to a new development platform such as .NET is often the absence of familiar tools through which she has honed her skills and on which she depends. For the experienced C++ programmer, one such essential toolkit is the Standard Template Library (STL), and its absence under .NET until now has been a significant disappointment. With Visual C++ 2005, we fix that by providing an STL.NET library. This article, the first in a series, provides a general overview of the STL program model using STL.NET – it discusses sequential and associative containers, the generic algorithms, and the iterator abstraction that binds the two, using plenty of program examples to illustrate each point. It begins by briefly considering the alternative container models available to the .NET programmer using C++ -- the existing System::Collections library, the new System::Collections::Generic library, and, of course, STL.NET. To provide for the widest readership, this article does not require familiarity with the STL library; however, it does presume some experience with the C++ programming language.

Wednesday, December 15, 2004

Re: which is faster : C or C++?

So, if I want to write such code (if?... hell I DO have to write such code), which is a better option - C or C++? In this case, is it right to say that you could use all the good organisation and 'cleanliness' of using classes and get the same performance if you let go of virtual functions?

I guess you've answered your own question. It's clear that your most important criterion is performance. And since you're only debating C vs C++: C is more "lightweight" and you should be able to squeeze more out of every cycle with it.

But again, you are the only one who knows enough about your project to make the decision. Generally, you'd need to consider a lot more than just performance when choosing languages. In your case, you're deciding between C and C++. C++ (as we've all agreed) has a lot more to offer over C. But at the same time, you lose certain advantages that C provides - one of them being performance (and again, this can be argued forever).

Looking at your project, would OOP be helpful? D'you think that having classes will help in organizing and designing your project in a "better" way than C, with its separation of functions and data? Think about the bigger picture rather than debating "malloc()" vs "new".

BTW, post some info about your project.

Tuesday, December 14, 2004

Re: which is faster : C or C++?

I guess I didn't pose my question very clearly... will try to do it in this post. First of all, I must clarify that I am as big a fan of C++ as anybody can be and I'd choose C++ over C almost ALL the time unless it's absolutely necessary to use C. It's that particular 'absolutely necessary' case I'm examining here. All the things you guys have written make sense and I agree completely.

While that little function overhead is insignificant in most cases and is nothing compared to the IMMENSE additional flexibility and functionality that you gain, it would be worthwhile contemplating under what circumstances this overhead could become significant. Codes that go into CFD applications can take a ridiculously long time to execute. Let's say a C++ code that does the same thing as a 5 sec C code takes 7.5 sec to execute. Doesn't seem much... you don't give a damn. Stick to C++. But when you're talking about 50 and 75 DAYS, the difference is HUGE. And I'm not kidding here. There are codes which take that long to execute.

So, if I want to write such code (if?... hell I DO have to write such code), which is a better option - C or C++? In this case, is it right to say that you could use all the good organisation and 'cleanliness' of using classes and get the same performance if you let go of virtual functions?

Once again, except for this case of infinitely large execution times, C++ is a better option than C... no doubt about it. But what about this case?


Saturday, December 11, 2004

Re: which is faster : C or C++?

Primarily what I love about C++ is the STL. It has never given me more pleasure to see a library in action. Granted, the organisation of the STL reflects the fact that there were multiple design heads involved, but it's still the most beautiful piece of code I have ever seen.

I dunno if I can say it's the most beautiful piece of code (I haven't actually read the source), but I fully agree with you that it's a fantastic library. The way they have designed it, with such a wonderful separation of containers, iterators, algorithms and functions is quite brilliant.

What's even better about C++ is that it doesn't force a programming paradigm on you; it lets you design your solution in any way you wish, so if you want to have a C-style program, well, just go right ahead!!

Agree again. The best thing about C++ is that it gives the programmer a lot of freedom. It supports procedural, object oriented and generic programming. I don't think there's any other language that does that. Microsoft is also fully integrating it into .NET in the next version of their compiler, so it will support garbage collection and will have access to the Base Class Library. Just another way C++ can be used.

.NET and Java are supporting generics in their newest versions. Naturally they are looking at C++ for ideas. But I feel they won't be able to come up with as elegant a solution as the STL, because they only support OOP. Some proposed functionality I've read about for the next version of .NET collections is quite ugly, like including the same algorithm functionality in each collection. There is no iterator abstraction, so each collection in a way is different. It's a dilemma for them: how to support all the functionality in an OOP way. It'll be interesting to see what they finally come up with. Haven't seen how Java handles it.

Always remember that C++ was meant to be a better and "safer" C.

The general trend, it seems, is that all the gurus (i.e. Bjarne and friends) are encouraging us to make use of more abstractions and use the STL in the name of convenience, maintenance and safety. For example, go for vectors instead of straight arrays; use as little manual memory management as possible; or if you need to play with pointers, go for some of the safe versions available through the STL and Boost. I took a course on generic programming where we used the STL. We hardly new'd and delete'd. It's a testament to C++'s flexibility. It's able to adapt to the evolving paradigms.

virtual functions are implemented using a lookup table that gives you a function pointer for each derived class type. Thus, this kind of function simply cannot be inlined

If a compiler is smart enough, it should be able to inline some calls to virtual functions. There's a way to explicitly (statically) call virtual functions...

#include <iostream>
using std::cout;

class Base
{
     public:
          virtual void Function1()
          {
               cout << "Base::Function1";
          }
};

class Derived : public Base
{
     public:
          virtual void Function1()
          {
               cout << "Derived::Function1";
          }

          void Function2()
          {
               Base::Function1(); // can be inlined
          }
};


Correct me if I'm wrong about this fact. Or if this particular example is wrong.

I think the guys designing and implementing C++ were as concerned about performance as anyone. They did everything possible to limit performance hits. I don't think anyone can fault them. Dinesh had recommended a book long back called "Inside the C++ Object Model". It gives you a good idea about how they implemented a lot of the features. Virtual functions, and polymorphism in general, are discussed at length. And they give a lot of examples using CFront, so you can see what C code is generated.

Friday, December 10, 2004

Re: which is faster : C or C++?

Obviously, I am interpreting this question as structured vs object-oriented programming. I personally have used C++ for ages without caring to make a class -- and I really appreciate the fact that it doesn't force a programming paradigm on us :)

Regarding optimization, one thing is clear -- especially in the GNU context: GCC does optimization on an intermediate form of code that it derives from the front-end language like C or C++. Hence, optimization must be just as good for both. In fact, I believe that taking the CFront route (C++ -> C -> ASM) instead of the GCC route (C++ -> ASM) will produce assembly that's just as good, but will take much longer to do so.

So, what's the point? How is C++ optimization different? I'd say that the "optimization needs" of a C++ program are different.

Let's take an illustrative historical case: encapsulation. Encapsulation brought in a new era where the number of functions written by a programmer increased manifold! Firstly, because C++ dissuades the use of macros, and secondly, because there is a higher tendency to write constructors and destructors in C++, whereas in C, you'd type in the whole thing each and every time you needed it. Thus, older compilers that did not have good enough support for inlining functions often failed to produce good C++ code overall.

Of course, any self-respecting compiler today has really good inlining support, so the example I have given probably no longer holds. So now, let's move on to simple polymorphism: virtual functions are implemented using a lookup table that gives you a function pointer for each derived class type. Thus, this kind of function simply cannot be inlined, and it also uses "jump to address in a variable"; something like this: (*var)(). A programming style that enforces the use of such control jumps is BAD. The reason is that most computer architectures today have built-in support for branch prediction, and the use of such statements defeats their purpose. I do not deny that you can do this in C as well; but you usually would not! Whereas the use of virtual functions in C++ is almost the norm!

I have no idea about multiple inheritance etc.; God knows why they created such a feature! Also, I have never used the STL, so I don't really know how the widespread use of the STL influences the optimization needs of C++ code.

All said, I definitely agree that C++ is a wiser choice than C for any hardcore hosted developmental work, because of lesser development time. I just seriously recommend restricting polymorphism to only those places where it really really simplifies things.

BTW, G++ usually produces bloat in the form of a symbol table that's used for debugging etc.; it won't even be copied to memory... it just sits on your hard disk.

Google Suggest

http://www.google.com/webhp?complete=1&hl=en

Yet another innovation from everyone's favorite company. Go through the alphabet to see what the suggestions are. Some are pretty interesting (ex. 'p').

Re: which is faster : C or C++?

I agree with Mohnish here. When comparing languages for implementing something, you gotta see what suits the purpose best. The cost of virtual functions and polymorphism in C++ is a single virtual table pointer in each object, plus the resolution of those pointers to decide what exactly should be called based on the class hierarchy. But what you get for that is a whole new paradigm under your control. A whole new world view, if you will. No more is programming based on thinking about what piece of information is processed when; rather, we are equipped to talk in more abstract, high-level terms.

A language which gives you the power to do object-oriented programming at the cost of a single virtual table pointer is a piece of work in itself. Primarily what I love about C++ is the STL. It has never given me more pleasure to see a library in action. Granted, the organisation of the STL reflects the fact that there were multiple design heads involved, but it's still the most beautiful piece of code I have ever seen. If you don't agree, just open up the algorithm or functional standard header and read for yourself; it's beautiful!! :) What's even better about C++ is that it doesn't force a programming paradigm on you. It lets you design your solution in any way you wish, so if you want to have a C-style program, well, just go right ahead!! Always remember that C++ was meant to be a better and "safer" C.

Over C, I'd choose C++ any day. Besides, I hate writing those cumbersome printf() statements for everything; cout is so much better :) (ok, that was my cheesy joke for the day! sorry!!)

Put things in context and you'd see that C++ gives you a lot more than C, at least that's what I think. One gripe I have with the g++ compiler is that it produces a lot of code; the strip option does work well, but still I have never been able to figure out what bloat code it writes! But then again, with memory so cheap now, it doesn't really matter.

By the way, can you factually prove that C++'s optimization is not as good as C's (or were you saying something else)?? g++ gives three levels of optimization, -O1, -O2 and -O3; all my programs are compiled with the -s -O3 options (releasable code, that is). C++ by its very design allows the compilers a lot of leeway as to what they can optimize. And the GNU compilers sure do make use of it!

All in all whatever the speed comparisons, if I had a big project to work on, I'd be betting on C++ to get the job done in a good and maintainable way!

Dinesh.

Wednesday, December 08, 2004

Re: which is faster : C or C++?

which is faster : C or C++?

I think it would be a good idea to first define "faster". What exactly do you mean? Faster in what context? In a one-line program or in a 100,000-line program? And how do you analyse the performance?

Personally, I feel it's an exercise in futility to compare which language is "faster" than another. The reason I feel that way is because you'll find studies and papers claiming that each language can beat every other one.

Use of virtual functions and run-time polymorphism slows down the code a little. So if this feature of C++ is not used, C++ code would run as fast as C code.

I think this is a wrong approach with which to look at C++. C++ was created to be a "better" C. You can take that to mean whatever you want (everyone has their own opinion about why it's better (if at all)). From what I understand, as applications started getting larger, using C to develop them was getting to be a pain in the arse. They needed something that would make it easier to write maintainable code (Isn't code always easier to write than read?). Enter C++. It created another level of abstraction, just as C created an abstraction over assembly, and assembly over machine code, and machine code over gates, and gates over the 0s and 1s, and the 0s and 1s over the electrons... you get the picture (did I miss a level?).

Anyway, my point (yeah I have one!) is that if you look at C++ feature by feature and look to eliminate something so as to get it to run as "fast" as C... you might as well cut to the chase and go play with electrons.

Having said this, the difference between run times is due to the compilers and not the languages themselves. Last I heard, C++ compilers don't optimize C++ code as well as C compilers optimize C code.

Did you know that the first C++ compiler (CFront) generated C code... not machine code? So any optimization made to C compilers would apply to C++ code as well. Today, every C++ compiler most likely generates native code, but I don't see any reason why they would be any less optimizing than C compilers.

Again the abstractions bit comes in. It's all about the amount of control you (the programmer) want to have. You can write programs with 0s and 1s if you want to... you've got all the control in the world. I wouldn't imagine it would be very fun, but you can do it if you want to. You sure as hell won't be very productive. Just as you lost some control when you went from C to C++ (creating/destroying an object does multiple things behind the scenes... you don't have control over the entire process), going from C++ to Java/C# you lose even more control. But what you gain is productivity.

The code that goes into CFD applications handles millions of points, so even a little function overhead (eg virtual function) is significant.

I read this on some (smart) dude's blog about performance: "Always set goals and always measure". What's good enough for you? If you code the app in C++ and it's slower than it was with C, but good enough then does it matter? Depending on how good you are with each language you might be a lot more productive with C++. So it's a tradeoff.

Bottomline - if virtual functions, run-time polymorphism isn't used, C++ code would run as fast as C code.

There are a lot more abstractions than just virtual functions in C++, so I doubt that avoiding that one feature alone would make a huge difference.

For starters, about me -- I have no clue to Java; I'm pretty good at C on UNIX/Linux etc and I can bear C++.

Firstly, cheers on your first post. Hope to see a lot more.

Just to give a brief intro to what we dudes are about...
Hrishi - (you probably know more) C/Linux
Rahul - Java/Linux
Dinesh - C++/AI/Game engines/Philosophy
Yours truly - C#/.NET/Bit of Java/Bit of C++

PS: Did you guys know we guys here call Hrishikesh, "Micro"? Micro?! Huh! near Mega you'd say... well... but then it all started from a Micro-elephant :D

D'you see the archive links on the right hand side of the page? Go to November 2003 and check the very first post's title and ask Micro to explain it. Post your reaction.

Hey guys!

Ok, looks like Hrishikesh has plucked the right string there... C/C++ usually gets me started :-)

For starters, about me -- I have no clue to Java; I'm pretty good at C on UNIX/Linux etc and I can bear C++. Regarding my ignorance of Java, all I will say is that the "Hello world!" I wrote took so long to start off that I gave up :D Well, maybe though, my body and soul is written in Java. See, Mohnish added me to the blog almost a week ago. And my first post comes now. Pretty much like the Hello World I wrote... took a looong time to start, but worked fine after that. (bad joke -- you said this was the place :P)

I am looking forward to seeing your comments on C vs C++. In fact, let's add Java to it! Let's see what you hard-core Java fellows have to say about the efficiency of the object-oriented features that Hrishikesh (in my opinion, correctly) labels as having sub-optimal implementations in C++. What about Java?

I will come up with a post detailing what I like and dislike about C++ soon...
Till then,

PS: Did you guys know we guys here call Hrishikesh, "Micro"? Micro?! Huh! near Mega you'd say... well... but then it all started from a Micro-elephant :D

which is faster : C or C++?

Let's reignite this age-old debate; well, maybe not all that age-old, but definitely something worth discussing. It is a widely regarded notion that C is faster than C++, though I haven't found any concrete reasons or literature to support this claim.

This is what I have inferred from what I have read -

Use of virtual functions and run-time polymorphism slows down the code a little. So if this feature of C++ is not used, C++ code would run as fast as C code.

Having said this, the difference between run times is due to the compilers and not the languages themselves. Last I heard, C++ compilers don't optimize C++ code as well as C compilers optimize C code.

The code that goes into CFD applications handles millions of points, so even a little function overhead (eg virtual function) is significant.

Bottomline - if virtual functions, run-time polymorphism isn't used, C++ code would run as fast as C code.

Thoughts, comments, links?

Monday, December 06, 2004

Re: Is Some Software Meant to be Secret?

if I provide source of my app, don't I have to provide it during development phase too?

Isn't this normal practice for open source apps? Couldn't you download daily builds of Firefox?


Yes. That's why I felt Tim Bray's point, that including a super feature will not give rivals an advantage till it's released, doesn't quite hold. Maybe their design would be different, but an idea could be incorporated.


I think a major difference between closed source apps and open source counterparts is that open source doesn't really have a strong sense of versioning. It is a very iterative process. Using Firefox as an example... people have been using it way before they released 1.0. It's part of the "culture". You're expected to keep up.

I disagree here. The users of open source APIs are generally more adventurous, but the feature set for each version is generally clearly defined. If more companies start using open source products, they will be slower to update versions and even to take beta releases.


How does Sun do it for Java APIs?

I am not sure about the Java API. The Java JDK has been released as a project at java.net. This is a Sun site where loads of open and not-so-open projects are hosted. So you can start off with Java 6.0 today. Sun has mentioned that they are going to provide faster releases in the future.




MS sees a subscription based model as the future.

D'you really think this model will work?


Dunno. Any new model will take time for adoption. Sun is actually doing it now. It seems scary, but it seems more correct to me. In today's world everything is connected to the net. For a company (which buys software), subscription seems better as they get new releases, and they can switch after a year with lower costs. Lots of companies pay loads for new software, which leaves them with old versions very soon. And a lesser-functionality version can be passed to the kids to play with. Everyone, it seems, would be much happier. Subscription is like your cable or cell; it's just that we are not used to it now. And with web services this model seems even easier to implement.



I just do not see the need to please anyone else

I was joking. You know... going public as in getting listed on an index like Nasdaq and so pleasing our shareholders. Maybe I should make more use of ';-)' in the future ;-)


Dude. That would not compile. Here's why..
1. class shareholderJoke extends nasdaqPatheticJoke {} --- missing
2. And the ;-) Annotation was missing too. (Yup.. I still do not know how to write Annotations!!)


And BTW, we do have a new member but he's been quiet. Hrishi's pal from IIT, Nikhil, is the latest codeWordian (too cheesy?). Let's have some posts dude.

Welcome aboard. This (as you might have realised) is the place for really bad jokes. Might get a bit of knowledge once in a while.

Sunday, December 05, 2004

Re: Is Some Software Meant to be Secret?

if I provide source of my app, don't I have to provide it during development phase too?

Isn't this normal practice for open source apps? Couldn't you download daily builds of Firefox?

I think a major difference between closed source apps and open source counterparts is that open source doesn't really have a strong sense of versioning. It is a very iterative process. Using Firefox as an example... people have been using it way before they released 1.0. It's part of the "culture". You're expected to keep up. So the release/development phase is sort of blurred. It's not really the case for closed source apps. There's a clear separation. So even if these closed source guys open their code, it would most likely be with the final release. How does Sun do it for Java APIs?

MS sees a subscription based model as the future. I think Web-services will play a big role in this. Sun has a subscription model for JDS and plans something similar for Solaris 10. They even want to offer grid computing wherein the customer simply pays for CPU cycles. So the revenue model is changing.

D'you really think this model will work? Somehow I can't imagine it will ever be successful. This idea of pay-per-use will be too hard for many people to swallow. People are used to the idea of owning their software and using it however they want. Moving to the subscription model won't be easy because you're not in control. At any time, anyone can cut off your access. I think MS did some trials in a few countries and it bombed. Maybe it would work in large companies where there might be a possibility of cutting costs. But for personal use - I highly doubt it.

There should be no pressure on us. We continue what we do. If someone else is interested, they join. Simple. I just do not see the need to please anyone else

I was joking. You know... going public as in getting listed on an index like Nasdaq and so pleasing our shareholders. Maybe I should make more use of ';-)' in the future ;-)

And BTW, we do have a new member but he's been quiet. Hrishi's pal from IIT, Nikhil, is the latest codeWordian (too cheesy?). Let's have some posts dude.

Re: Is Some Software Meant to be Secret?

Tim Bray and Microsoft's Joe Marini

To open source or not. 'Tis a very big question.

Wrt the articles, if I provide the source of my app, don't I have to provide it during the development phase too? In that case any new feature can be picked up by a rival before it's out in the market, and then any major benefits may be lost.

If the source is not provided early, then it can be argued that the project is not really open-source.

It depends a lot on what the source of revenue for the company is. If you have a large user-base then money can be made through subscriptions too. Disruptive technology was pointed out in some previous blog. Lots of open-source apps are basically destroying closed proprietary apps. Users can get similar or better features for free and no one wants to pay - like Firefox. Only if you have a major app for which there is no competition can you afford being closed. But eventually some open-source app will catch up and then you'll not have much of a choice. Basically it depends on the project and the team. For newer applications I think it makes more sense to be open. But then again a proper source of revenue has to be thought of.

MS sees a subscription based model as the future. I think Web-services will play a big role in this. Sun has a subscription model for JDS and plans something similar for Solaris 10. They even want to offer grid computing wherein the customer simply pays for CPU cycles. So the revenue model is changing.


Ok that's two for going public. I guess we'll do it. But remember, that puts pressure on us to please the shareholders.

There should be no pressure on us. We continue what we do. If someone else is interested, they join. Simple. I just do not see the need to please anyone else


Saturday, December 04, 2004

New India Glimpses

From this dude's blog. Subscribe to it!

New India Glimpses

India is witnessing amazing change. While life on a day-to-day basis
still has its challenges (poor road infrastructure, erratic power,
limited bandwidth, growing urban-rural divide, quality and
availability of education, a population that is still growing more
rapidly than available resources), there is a lot that is happening to
augur well for the future.

Cellphones: Recently, the number of cellphones in India passed the
number of landlines. This is not just a statistical milestone. It
signifies the choice that Indians are making. By leapfrogging to a
wirefree world, communications in India is being transformed, and so
is life. Hoardings in Mumbai announce the availability of TV via EDGE
networks and railway reservations via the handset. About 2 million new
users a month are being added to the current base of about 45 million
cellphone users. India has one of the lowest tariffs in the world for
mobile telephony. Text messaging has become a way of interaction for
many. Value-added services like ringtones and gaming are growing.
State-of-the-art networks and feature-rich handsets across India are
beckoning the next set of users. Cellphone companies are profitable at
average monthly revenues of Rs 400 ($9) per user.

Cable TV: A hundred channels for all of Rs 250 ($5.50) – that's what
about 55 million households pay to enjoy their television. And there
is no dearth of new channels launching every month. I still remember
the launch of Zee TV, India's first private channel – it happened just
over a decade ago. A mélange of cable companies are now tying up with
Internet Service Providers to offer "broadband" (more like, always-on
narrowband) Internet to homes.

Wireless Data: Reliance Infocomm's CDMA-based wireless data network
covers more than a thousand towns and cities across India. Lottery
terminals, ATMs and even credit card authorization terminals are using
it to connect to centralised servers. Providing speeds of 30-60 Kbps
(versus a theoretical maximum of 115 Kbps), these data networks are
also providing laptop users the ability to connect to the Internet in
under five seconds for 40 paise a minute (less than a penny) from
almost anywhere in urban and semi-urban India.

Cybercafes: Even as the cost of ownership of a computer remains high,
thousands of cybercafes function as "Tech 7-11s" in neighbourhoods.
Sify's 2,000 iWays offer not just Internet access, but also Internet
telephony and video conferencing.

Internet Telephony: I still remember the time a few years ago when
phone calls to the US cost nearly Rs 100 a minute. The other day, one
of the VoIP company sales representatives came calling offering calls
for less than Rs 2 a minute. Smart Indians are also buying Vonage
boxes in the US and bringing them to India to make calls to the US for
a flat rate of $30 (Rs 1,350) a month. Geography indeed has no
barriers!

eCommerce: For all who think we have been left behind in the b2c
revolution, think again. Indian Railways and Deccan Airways have
proven that Indians will pay for transactions over the Internet. The
Indian Railways website addresses one of the major pain points in the
life of many – booking train tickets and checking the reservation
status of waitlisted tickets. Deccan Airways, one of the new low-cost
carriers, does bookings of Rs 1.5 crore ($330,000) daily over the
Internet.

Matrimonials and Jobs: The way people find lifemates and new employers
is changing. Sites like Shaadi.com and BharatMatrimony.com offer to
connect prospective brides and grooms. Job portals like
MonsterIndia.com (which also owns JobsAhead) and Naukri.com have
increased liquidity and fluidity for people seeking new career
opportunities.

Retailing: India is witnessing an unprecedented retail revolution as
malls and chains proliferate. Investments in IT are helping them not
only manage their supply-chain effectively but also build and maintain
customer relationships. The malls and multiplexes are becoming new
hangout places. With the boom in outsourced services, a growing
youthful population has more to spend. Easier access to credit is also
fueling an appliances and automobiles boom.

The Rs 500-a-month PC: Recently, HCL launched a computer on
installment payments – Rs 500 per month. This is a good start, even as
computing by itself faces challenges of affordability, desirability,
accessibility and manageability. The computing industry is not
learning two important lessons from the telecom industry – that of
zero-management user devices and subscription plans (as opposed to
installments).

Rural India: For a variety of reasons, rural India still remains
frozen in time. As governments start believing that free electricity
to farmers can be a passport for electoral success, investments in
other areas are likely to get compromised. There are a few signs of
hope – ITC's eChoupals and n-Logue's kiosks are providing a platform
for trade and services. But rural India still has a long way to go.

India is arriving as a market for global companies. Virgin is
considering investments in telecom and low-cost airlines. Cisco closed
a $100 million deal with VSNL for metro Ethernet. Most luxury brands
are already available or will be. India is a melting pot for many
simultaneous revolutions across multiple industries. As urban incomes
grow, a generation seeks to race ahead. With one of the most youthful
populations in the world, aspirations are on the rise. The next few
years are critical. If we can do things right, we can unlock the
potential of millions. If not…it will be yet another case of so near,
yet so far. The race is not with China, it is against our own
mindsets. Tomorrow's world is happening. Our actions can hasten it or
delay it. Hopefully, this time around, we can cross the chasm. For
that, India needs to build its digital infrastructure right.

As HP's Carly Fiorina wrote in The World in 2005: "Getting there is
going to require the right blend of realism and optimism. We need to
be realistic that none of this is going to be easy. But we also need
to be optimistic, because if we get this right, digital technology
will make more things more possible for more people in more places
than at any time in history. That alone is worth the journey." The
next Google will come out of the opportunities that technology is
creating in the context of the next users. What can we do to build out
tomorrow's world first in India and then across other emerging
markets?

Friday, December 03, 2004

Is Some Software Meant to be Secret?

Straight off of slashdot...

"Tim Bray and Microsoft's Joe Marini are doing a back-and forth on Open Source. Tim serves (open everything), Joe returns (secret-source is good business) and Tim volleys (the closed-source niche is shrinking)."

Any opinions?

Wednesday, December 01, 2004

The Daily WTF

Check it out at http://thedailywtf.com/forums.aspx. RSS feed at http://thedailywtf.com/rss.aspx?ForumID=12&Mode=0

Describes itself as "Curious Perversions in Information Technology". Every day they post a new coding horror. Most of these are taken from real-world code which people have come across. Covers a whole range of languages.

As a sampler, check out today's post...

------------------------------------------------------------------------------------
The .NET developers out there have likely heard that using a StringBuilder is a much better practice than string concatenation. Something about strings being immutable and creating new strings in memory for every concatenation. But, I'm not sure that this (as found by Andrey Shchekin) is what they had in mind ...


public override string getClassVersion() {
return
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append(
new StringBuffer().append("V0.01")
.append(", native: ibfs32.dll(").ToString())
.append(DotNetAdapter.getToken(this.mainVersionBuffer.ToString(), 2)).ToString())
.append(") [type").ToString())
.append(this.portType).ToString())
.append(":").ToString())
.append(DotNetAdapter.getToken(this.typeVersionBuffer.ToString(), 0xff)).ToString())
.append("](").ToString())
.append(DotNetAdapter.getToken(this.typeVersionBuffer.ToString(), 2)).ToString())
.append(")").ToString();
}

Note that since it is J#, StringBuffer and StringBuilder are the same thing.
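For contrast, here's what the same sort of thing looks like with a single builder and chained appends. This is just a sketch: the token values below are made-up placeholders, not the actual fields from the original class.

```java
// A minimal sketch of idiomatic builder usage: one StringBuilder, chained
// appends, one toString() at the end. All the token values below are
// invented placeholders standing in for the original class's fields.
public class VersionExample {
    public static String getClassVersion() {
        return new StringBuilder()
                .append("V0.01")
                .append(", native: ibfs32.dll(")
                .append("1.2.3")   // placeholder for the native version token
                .append(") [type")
                .append(4)         // placeholder for portType
                .append(":")
                .append("07")      // placeholder for the type version token
                .append("](")
                .append("2.0")     // placeholder for another token
                .append(")")
                .toString();
    }

    public static void main(String[] args) {
        // prints: V0.01, native: ibfs32.dll(1.2.3) [type4:07](2.0)
        System.out.println(getClassVersion());
    }
}
```

Same result, one buffer, and no intermediate StringBuffer objects created per append.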

Monday, November 29, 2004

Re: General Stuff

I agree with Rahul... no harm making the blog public. The idea is to make our asses less dumb as time passes!

Ok that's two for going public. I guess we'll do it. But remember, that puts pressure on us to please the shareholders.

Rahul update the blog header to what you think is ok. I forgot what it was before.

Also, do you guys mind inviting a wing-mate of mine? The guy's name is Nikhil and he knows a whole lot of stuff about computers, programming and stuff... would be really good to have him on the blog.

Not at all. Five heads are better than four. Send me his email address and I'll send him an invite.

Re: General Stuff

I agree with Rahul... no harm making the blog public. The idea is to make our asses less dumb as time passes!

Also, do you guys mind inviting a wing-mate of mine? The guy's name is Nikhil and he knows a whole lot of stuff about computers, programming and stuff... would be really good to have him on the blog.

I have a dumbass doubt about AOP... missed the earlier discussions, and too lazy to read all the archives! Are there any AO languages like we have OO languages or is it just a way of designing software?


Re: Quantum Computing??

Quantum mechanics applies in the microscopic world. You can't substitute cats and coloured balls for electrons... it just doesn't work that way.

I don't know about qubits; I won't comment on them. But don't generalise your thoughts about the qubit to quantum mechanics. The biggest success for quantum mechanics lies in the fact that it agrees with experiment. I don't think I need to say anything further; and if agreement with experiment isn't sufficient to convince some people about the correctness of a theory, I don't give a damn.

And btw, if you don't know much about a subject and there's some genius who's spent his entire life studying it, it would be rational to actually listen to that guy...
I may not have a degree in physics, but I'm not going to sit quivering over whatever the great minds tell me. I think for myself and do believe that physics concepts should be and could be explained to the average joe.

lol!

Sunday, November 28, 2004

Re: General Stuff

One use I can think of right away is security. Make sure before something is executed that it has the right permissions or something.

Authentication is one of the standard uses of AOP. Any more ideas?? You might just get the reason for the entire world to start using AOP.


And unless something like this is baked into the system itself (like Java or .NET) it won't be very popular. What dyou feel?

I had been to an IBM techday Live seminar in Mumbai. The speakers were good. And at really high posts. One of them was the consultant for the other IBM consultants. The guy who trains the million-$-an-hour-fee IBM consultants. Whoaa!! So I asked him the same "What are the uses of AOP beyond the general Logging etc?" question. He told me that AOP concepts would be used within the next tools with Model Driven Architecture (MDA). Before he could explain further he had to break; and continue; with the seminar. Now I am not sure about MDA. Ever read anything about it? It seemed like using tools to auto-generate code. Design patterns are described for which code is generated. Don't ask me questions on this!!

Another future trend I heard of is Business Process Execution Language (BPEL). Also BPEL is a standard on which even MS is doing some work. BPELJ is the Java implementation for BPEL. I saw a few demos for BPELJ. Again I do not know enough to explain. Read on any of these topics?? I think I'll just go back to reading the String class in the Java API.

Now these things have industry support, wider than AOP. And these things are getting baked into Java at least.

So I suppose even though ours is a private blog, it may not really matter.
And by the way why aren't we public??

First, what is avivaint? I couldn't find the word on the codeword page that came up on google.


avivaint was a site I made, and had put some app of mine on the site for downloading. You'll find avivaint in this blog.



And dyou want to make it public? You want others reading our dumbass discussions?


If we get good readers, they might want to join. And getting more inputs will be a good thing. I do not see many guys out here, who are really passionate for programming and I suppose if we get guys like us (those who indulge in dumbass discussions) it will be beneficial. Then again getting public may not get us anything. But it seems better. I say we get listed in Nasdaq too!!

Btw if you think we should go public, change to an appropriate blog header

Saturday, November 27, 2004

Re: General Stuff

As Mohn mentioned, the Proxy class feature is similar to AOP in a way. Both essentially inject code. I have not been able to come up with many uses of AOP. I always get the same examples like Logging, Authentication and a few more. You guys see anything more? What do you think would be the uses of the Proxy class?

The biggest benefit of the Proxy class feature is that it is dynamic. You can create a bunch of "pluggable" components and hook them into the system at runtime. That's a huge advantage. And since you can inject your own code dynamically I guess you can do almost anything you want. You are like the gatekeeper. One use I can think of right away is security. Make sure before something is executed that it has the right permissions or something.
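That gatekeeper idea can be sketched with the Proxy class itself. Everything here (the Account interface, the allowed-method set) is invented purely for illustration; it's just one way a permission check before dispatch might look.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SecurityProxyExample {
    // A made-up interface, just to have something to guard.
    public interface Account {
        int balance();
        void close();
    }

    // Wrap a target behind a proxy that only lets through method names
    // present in the allowed set; everything else is rejected.
    public static Object secure(final Object target, Class iface, final Set allowed) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                if (!allowed.contains(method.getName())) {
                    throw new SecurityException("not permitted: " + method.getName());
                }
                return method.invoke(target, args); // forward to the real object
            }
        };
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class[] { iface }, handler);
    }

    public static void main(String[] args) {
        Account real = new Account() {
            public int balance() { return 42; }
            public void close() { }
        };
        Set allowed = new HashSet(Arrays.asList(new String[] { "balance" }));
        Account guarded = (Account) secure(real, Account.class, allowed);
        System.out.println(guarded.balance());   // allowed: prints 42
        try {
            guarded.close();                     // not in the allowed set
        } catch (SecurityException e) {
            System.out.println("blocked");
        }
    }
}
```

The nice part is that the check lives in one place and works for any interface you hand it, which is exactly the "inject code at runtime" advantage.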

Somehow I feel AOP will take a while to take off. Right now there is a bit of buzz about it, but it may take many years before it becomes mainstream. There's already enough to learn as it is. And adding another entry to the acronym soup isn't very encouraging. And unless something like this is baked into the system itself (like Java or .NET) it won't be very popular. What dyou feel?

So I suppose even though ours is a private blog, it may not really matter.

And by the way why aren't we public??


First, what is avivaint? I couldn't find the word on the codeword page that came up on google.

I realize even though ours isn't a public blog it will still show up on google and other search engines. I was just saying that since it's not public, there is less visibility so the chances of others finding it and linking to it are less.

And dyou want to make it public? You want others reading our dumbass discussions?

Thursday, November 25, 2004

Re: GNU Classpath

Did you know about this project? Apparently it's associated with Red Hat. These guys are developing the Java API from scratch. Since Sun isn't making it open source, they are doing the next best thing.

I guess the rationale behind this project is that the JCP doesn't really listen to the "small guy". Only big companies have a voice. So with this anyone can include their 2 cents.


It's understandable that some guys want a FREE modifiable version of Java. But personally I think it will harm the language as it will definitely break WORA.

The GNU guys have a lot of good software. But GNU Classpath will not work as there is no industry support. You'll just have a bunch of ultra geeks coding in it for kicks. Check out Gcj. This is one of the compilers in the GNU platform which allows Java code to be compiled to native binaries directly.

Sun is not making Java free for modification. But almost all the source is open. At this link you can download the entire source of the Sun JDK. So not only are the Java APIs open but also the JDK, which is really really amazing.

Any small guy can make himself heard if his idea is really sensible. There are so many open source projects which are causing changes in the Java API because the ideas are really good. But no one is really interested in GNU Classpath.

Re: Quantum Computing??

Whoa!! Wait a minute! I'm not calling "Quantum Mechanics" irrational; there are obviously phenomena related to the atomic and sub-atomic level. What I think is irrational are the explanations and the conclusions formed on the basis of not really knowing what's happening there! Prime example again is Schrodinger's Cat Experiment!

I do believe quantum mechanics still requires a rational explanation, perhaps Lewis Little's "Theory of Elementary Waves" will provide that, I surely hope so!!

And again, the qubit is a contradiction (at least to me); unless it can be proved that it isn't, I won't accept it. It would be great to have a computer as fast as what a 'quantum' computer could do, but I'm not yet sure it's possible with the qubit.

I may not have a degree in physics, but I'm not going to sit quivering over whatever the great minds tell me. I think for myself and do believe that physics concepts should be and could be explained to the average joe.

Dinesh.

Wednesday, November 24, 2004

Re: Quantum Computing??

I concur with Mohnish on this one...

Please provide some context to that. I have no idea why the theory of gravitation even came up!!

I just gave that as an example, because that's the most well known theory in all of physics.

By the way, we got "used" to it because it's a law of nature, not something followed as a matter of convenience.

Quantum mechanics is very accurate when it comes to predicting phenomena at the quantum level and it's not followed as a matter of convenience. Experiments agree with it and it is a well established theory.

And for the record, as far as gravity is concerned, it's not an exact 'law' of nature. It works for all day to day situations but when you scale things up, when it comes to predicting the motion of heavenly bodies accurately, gravity breaks down. That's where you need general relativity. Similarly, at the other end of the spectrum, at the quantum level, gravity fails; even general relativity does. That's where you need quantum mechanics.

I don't know about 'string theories'. They could be mathematically beautiful but inaccurate... I have no clue. But general relativity and quantum mechanics are the two massive pillars of physics and they're very strong.

We are not scientists so we will never fully comprehend this stuff, but I have no reason to doubt the work.
But these guys are pretty bright and I trust they know what they're doing.

Exactly. And bright is an understatement. Supremely intelligent human beings like Richard Feynman have contributed to quantum mechanics. We're not even qualified to comment on the fundamentals of the subject, let alone judge it. And here we have someone with not even an undergraduate degree in physics calling quantum mechanics 'irrational'. That's more than just ridiculous...

GNU Classpath

Did you know about this project? Apparently it's associated with Red Hat. These guys are developing the Java API from scratch. Since Sun isn't making it open source, they are doing the next best thing.

Dyou think this is good or bad for Java? I doubt it will have much impact. Java is already available on a plethora of platforms and all the source for the APIs is available if you're curious. I don't see much benefit in this project cause any "forking" they do won't run on other Java VMs. And that will prevent many people from using it cause the cross platform bit is a big point for Java.

I guess the rationale behind this project is that the JCP doesn't really listen to the "small guy". Only big companies have a voice. So with this anyone can include their 2 cents.

Re: Quantum Computing??

Have you ever read anything on "String Theory"?? Scientists pine about how they wished reality would conform to the "String Theory", it is just mathematically beautiful and symmetric. But reality really refuses to conform to it, lol!

Can't say I have read anything substantial on String Theory. And it's just as well cause I doubt I'd understand it. But I have seen many documentaries on the subject. This one in particular I thought was quite good.

I don't get why you have such skepticism regarding quantum mechanics and the irregularities associated with it. We are not scientists so we will never fully comprehend this stuff, but I have no reason to doubt the work. I guess you could call it blind faith (something like religion) at this point since it's all just theory right now. But these guys are pretty bright and I trust they know what they're doing.

I have read that String Theory is just mathematically beautiful and a great effort, but it holds little reference to physical reality.

So why dyou readily believe the stuff you've read discrediting string theory as correct?

I hope something as fast as the quantum computer is possible though. That would really be neat. But so far, unless quantum mechanics can be rationally explained, I see no hope for it.

I think they already have working models of very very simple quantum computers at research labs. I read that IBM has something going. Try googling it.

Re: General Stuff

Great to have everyone back on codeWord. Lets write some good blogs!!

the Proxy class in java.lang.reflect in the Java Documentation.

As Mohn mentioned, the Proxy class feature is similar to AOP in a way. Both essentially inject code. I have not been able to come up with many uses of AOP. I always get the same examples like Logging, Authentication and a few more. You guys see anything more? What do you think would be the uses of the Proxy class?


Also I have changed my display name. Simply to get codeword as a link when I search for myself on google. ;). I do get a few hits at ncb.ernet.in and a few linux posts I made, but codeWord would be much better to be linked to.

lol. You realize this is a private blog right? It's not listed in the blogger directory anywhere, so no links pointing to it. Google ranks based on popularity of site. Based on that, I wouldn't get my hopes up. Anyway, I helped you out... gave the search bots something to chase on your behalf.


Try this link. codeWord is there as of now. So I suppose even though ours is a private blog, it may not really matter.

And by the way why aren't we public??

Tuesday, November 23, 2004

Re: Quantum Computing??

Coming to the question about 'rationality'. You wouldn't call the theory of gravitation irrational, would you? After all, why the hell should two bodies attract each other just because they have mass and that too with the inverse square law? But it's something we've gotten used to.

Please provide some context to that. I have no idea why the theory of gravitation even came up!!

By the way, we got "used" to it because it's a law of nature, not something followed as a matter of convenience.

Dinesh.

Monday, November 22, 2004

Re: Quantum Computing??

I hope something as fast as the quantum computer is possible though. That would really be neat. But so far, unless quantum mechanics can be rationally explained, I see no hope for it.

I don't know much about quantum computers but from whatever I know about quantum mechanics, I wouldn't call it irrational... no way. It may not follow directly from common sense but it is mathematically sound and explains things at the microscopic level. The quest for the 'grand unified theory' won't lead us anywhere for quite some time... that's what I feel... it will be way too complicated. And we have two theories to explain phenomena at two totally different levels - general relativity and quantum mechanics. So we aren't lost...

Coming to the question about 'rationality'. You wouldn't call the theory of gravitation irrational, would you? After all, why the hell should two bodies attract each other just because they have mass and that too with the inverse square law? But it's something we've gotten used to. Unfortunately, quantum mechanics is not so simple and it would take a lot to get 'used to' it. But the theory's amazing. Whether quantum computing can become a reality is a totally different question...

Sunday, November 21, 2004

Re: Quantum Computing??

Have you ever read anything on "String Theory"?? Scientists pine about how they wished reality would conform to the "String Theory", it is just mathematically beautiful and symmetric. But reality really refuses to conform to it, lol!

I have read that String Theory is just mathematically beautiful and a great effort, but it holds little reference to physical reality.

I hope something as fast as the quantum computer is possible though. That would really be neat. But so far, unless quantum mechanics can be rationally explained, I see no hope for it.

Saturday, November 20, 2004

Re: General Stuff

Mohn.. could you post more on the Java Proxy classes you mentioned in your recent AOP-like post. I am aware of basic Reflection and stuff but what you mentioned was something new. Does .NET have something similar?

Look for the Proxy class in java.lang.reflect in the Java Documentation. It will probably explain things much better.

But for just a basic overview. Proxy has something like a factory method called newProxyInstance() which will give you back an object instance. You give newProxyInstance() a class loader, a list of interfaces to implement and an InvocationHandler. So the object instance that you get back will in effect "implement" the list of interfaces you gave it. You can cast the object to any one of those interfaces and invoke methods on it.

This is where the InvocationHandler comes in. InvocationHandler is an interface with one method... "invoke( Object proxy, Method method, Object[] args )". Every time you invoke a method on the proxy object, the invoke() method in InvocationHandler is called and the system passes it the object instance on which the method was called (proxy), the method that was called (method) and the arguments passed to the method (args). This is the point where you do something useful.

Here's a simple example - might help to understand.

Here are two basic interfaces.

public interface Interface1
{
    public void method1();
}

public interface Interface2
{
    public void method2();
}


You want your proxy object to implement these interfaces. You pass it to newProxyInstance() along with an InvocationHandler.

Class[] interfaces = new Class[] { Interface1.class, Interface2.class };

InvocationHandler myInvocationHandler = new MyInvocationHandler();

Object proxy = Proxy.newProxyInstance( Interface1.class.getClassLoader(), interfaces, myInvocationHandler );


At this point you have an object (proxy) that implements the interfaces, but doesn't do anything. So we have to write an InvocationHandler that will intercept the method calls and do something useful.

public class MyInvocationHandler implements InvocationHandler
{
    public Object invoke( Object proxy, Method method, Object[] args )
    {
        if ( method.getName().equals( "method1" ) )
        {
            System.out.println( "method1" );
        }

        if ( method.getName().equals( "method2" ) )
        {
            System.out.println( "method2" );
        }

        return null;
    }
}


So here, MyInvocationHandler intercepts the method calls and does something. So if you invoke the methods on the proxy object...

Interface1 if1 = (Interface1) proxy;
if1.method1();

Interface2 if2 = (Interface2) proxy;
if2.method2();


... it will output...

method1
method2


The important thing to realize is that everything is dynamic. There is no concrete class anywhere that implements Interface1 and Interface2. It's all done at runtime. And what I said last post about AOP was that in MyInvocationHandler's invoke method, I could add anything I wanted before and after the method call...

public class MyInvocationHandler implements InvocationHandler
{
    public Object invoke( Object proxy, Method method, Object[] args )
    {
        System.out.println( "Rahul Revo codeword" );
        System.out.println( "Rahul Revo codeword" );

        if ( method.getName().equals( "method1" ) )
        {
            System.out.println( "method1" );
        }

        if ( method.getName().equals( "method2" ) )
        {
            System.out.println( "method2" );
        }

        System.out.println( "Rahul Revo codeword" );
        System.out.println( "Rahul Revo codeword" );

        return null;
    }
}


The output now will be...

Rahul Revo codeword
Rahul Revo codeword
method1
Rahul Revo codeword
Rahul Revo codeword
Rahul Revo codeword
Rahul Revo codeword
method2
Rahul Revo codeword
Rahul Revo codeword


I couldn't find anything similar to the Proxy class in .NET. I would be surprised if you couldn't do it though... just have to find out how! I'll look some more and get back to you.

Also I have changed my display name. Simply to get codeword as a link when I search for myself on google. ;). I do get a few hits at ncb.ernet.in and a few linux posts I made, but codeWord would be much better to be linked to.

lol. You realize this is a private blog right? It's not listed in the blogger directory anywhere, so no links pointing to it. Google ranks based on popularity of site. Based on that, I wouldn't get my hopes up. Anyway, I helped you out... gave the search bots something to chase on your behalf.

Re: Quantum Computing??

Welcome back. Good to have both Hrishi and you blogging again.

Is this really possible, I really doubt it. For long scientists have been saying that at the quantum level, causality is broken, that is just too dumb! For example, the whole experiment regarding Schroedinger's Cat is ridiculous! The claim of an event happening only when our eyes fall on it is just too goddamned subjective!!

From what I've read, it looks like the laws of physics literally break down at the quantum level. I don't see any reason to dismiss it. There is some sort of incompatibility between quantum mechanics and general relativity. And physicists are trying to make sense of it all with "string theory". They are trying to unify everything from the very big to the very small. That's why sometimes it is called "the theory of everything".

True we don't know yet exactly what happens at the quantum level, but to infer from that a truly chaotic view of it is absurd. How can something be two different things at the same instant of time?? I'm perplexed, someone explain!

As you say, unlike digital computers today where a bit is either on or off, qubits can be 0 and 1 at the same time. It's based on the fundamental ambiguity inherent at the quantum level. The key to quantum computers is that one would present it with a problem and a way to test the answer. Through some disambiguating process (don't ask me to explain how!) the failing answers cancel each other out and only the one that passes the test remains. This is why it's so effective in cryptography.
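To make the "0 and 1 at the same time" bit a little more concrete, here's a toy numeric sketch (not a real quantum simulation): a qubit's state can be written as two amplitudes, and measurement gives 0 or 1 with probability equal to the squared amplitude.

```java
// Toy sketch of a qubit as a pair of (real) amplitudes. Measurement yields
// 0 with probability a0^2 and 1 with probability a1^2, and a valid state
// has a0^2 + a1^2 = 1. Real qubits use complex amplitudes; this simplifies.
public class QubitSketch {
    public static double prob0(double a0, double a1) { return a0 * a0; }
    public static double prob1(double a0, double a1) { return a1 * a1; }

    public static void main(String[] args) {
        // An equal superposition: amplitude 1/sqrt(2) on both 0 and 1,
        // so measurement gives each outcome with probability ~0.5.
        double a = 1.0 / Math.sqrt(2.0);
        System.out.println(prob0(a, a));               // ~0.5
        System.out.println(prob0(a, a) + prob1(a, a)); // ~1.0
    }
}
```

So "both at once" just means both amplitudes are non-zero until a measurement forces one outcome; it's not a logical contradiction, just unfamiliar bookkeeping.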

I've been wondering for some time if something like quantum computing is possible; its very basis seems shaky! True, I'm no physics expert, but still I can't accept a contradiction as the basis of a scientific theory.

And if you think about it, quantum computers will definitely be an eventuality. The transistors are getting pretty damn small already. By 2020 they will reach atom size. And at that point we will have no other choice but to deal with quantum mechanics.

I dunno if that appeased you in any way (I'm guessing probably not!) but that's how I understand it.

Quantum Computing??

Reading the post on Crypto and the effects quantum computing has on it, I thought I'd write a few things.

The whole concept of quantum computing is based on the rationale of what they call a qubit. A qubit is a bit of memory that is capable of holding both boolean values 0 and 1 simultaneously!!

Is this really possible? I really doubt it. For long, scientists have been saying that at the quantum level causality is broken; that is just too dumb! For example, the whole experiment regarding Schroedinger's Cat is ridiculous! The claim of an event happening only when our eyes fall on it is just too goddamned subjective!!

True, we don't yet know exactly what happens at the quantum level, but to infer from that a truly chaotic view of it is absurd. How can something be two different things at the same instant of time?? I'm perplexed; someone explain!

I've been wondering for some time if something like quantum computing is possible; its very basis seems shaky! True, I'm no physics expert, but still I can't accept a contradiction as the basis of a scientific theory.

There is one theory though, "The Theory of Elementary Waves" by Lewis Little, that seeks to explain quantum mechanics rationally!

Dinesh.

Long-time No Post

Hey guys!!

It's been a long time since I posted. Was not following the blog either, but it's good to see that Revo and Mohn kept it alive and kicking.

Will write more later. Started on a File System Simulation project, it's pretty interesting. I'll talk about it on another post.

Dinesh.

Friday, November 19, 2004

General Stuff

Firstly great to have Hrishi back on codeWord. Try and post whenever.

Mohn... could you post more on the Java Proxy classes you mentioned in your recent AOP-like post? I am aware of basic Reflection and stuff, but what you mentioned was something new. Does .Net have something similar?

Also, I have changed my display name - simply to get codeword as a link when I search for myself on google ;). I do get a few hits at ncb.ernet.in and a few linux posts I made, but it would be much better to have codeWord linked.

See this link from James Gosling's blog. Follow links to the Solaris guys blog and then the HP's original post. Surely will lift your spirits. Hey Hrishi, what do you think of Solaris getting open sourced?

Tuesday, November 16, 2004

Re: Secret Key Cryptography

I didn't understand this. What dyou mean 25 times in a row?

I'll post the entire unix password encryption process a little later. However, in brief - while storing passwds, unix encrypts a block of 64 zero bits using a key derived from the passwd entered by the user. The cipher text is again encrypted using the key. This process is done 25 times. So when someone is attempting to log on, the system compares the final cipher text derived from the passwd just entered to the cipher text derived from the passwd entered when the account was created (or the passwd was last changed). If the cipher texts match, he's in.

This method of encrypting 25 times was done to slow down the passwd cracking process (basically the key search) 25 times. However, with the computing power available today I don't think it makes a whole lot of difference. If you go through the link I'd posted on DES, you'll see that it uses certain tables to encrypt stuff. The unix passwd encryption system uses tables different from those specified in the standard DES. This is done so that hardware encryption chips for DES can't be used to crack unix passwds.
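For the shape of the thing, here's a rough Java sketch of "encrypt a zero block 25 times with a passwd-derived key". The class name and the use of plain JCE DES are my own illustration; the real crypt(3) also mixes in a 12-bit salt and its modified tables, which this sketch doesn't do.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class UnixCrypt25 {
    // Illustrative only: plain DES, no salt, no table modification.
    // eightByteKey plays the role of the 56-bit key derived from the passwd.
    public static byte[] hash(byte[] eightByteKey) throws Exception {
        SecretKeySpec key = new SecretKeySpec(eightByteKey, "DES");
        Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
        des.init(Cipher.ENCRYPT_MODE, key);

        byte[] block = new byte[8];          // 64 zero bits
        for (int i = 0; i < 25; i++) {
            block = des.doFinal(block);      // feed the cipher text back in
        }
        return block;                        // this is what gets stored
    }

    public static void main(String[] args) throws Exception {
        byte[] stored = hash("password".getBytes("US-ASCII"));
        System.out.println(stored.length + " bytes stored for comparison at login");
    }
}
```

At login you'd run the same 25 iterations on the passwd the user typed and compare the results, exactly as described above.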

Yeah, Crypto is an interesting topic. I guess lots of maths is involved. I've read that since a lot of it comes down to factoring large numbers, quantum computers would basically shatter all encryption. So they need to come up with new methods to secure data.

The security of these cryptosystems basically depends on the computational infeasibility of factoring large numbers that are the product of two big primes. Their feasibility comes from the fact that it's easy to generate large primes. As computing power increases, larger key sizes are being used. I don't know much about the exact speed of quantum computers, but to keep the current systems secure they'll have to use obscenely large key sizes or, like you said, come up with new encryption algorithms.
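To illustrate that asymmetry with java.math.BigInteger (the 512-bit size and the class name are just my example): generating the primes takes milliseconds, while factoring their product back out is the hard part.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class FactoringDemo {
    // Generate two random primes and their product, RSA-style.
    // Returns { p, q, n } where n = p * q.
    public static BigInteger[] keyPair(int bits) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(bits, rnd);  // fast
        BigInteger q = BigInteger.probablePrime(bits, rnd);  // fast
        BigInteger n = p.multiply(q);  // easy to compute, infeasible to factor
        return new BigInteger[] { p, q, n };
    }

    public static void main(String[] args) {
        BigInteger[] k = keyPair(512);
        System.out.println("modulus has " + k[2].bitLength() + " bits");
    }
}
```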


Re: Secret Key Cryptography

BTW, the unix password encryption uses DES... 25 times in a row.

I didn't understand this. What dyou mean 25 times in a row?

Yeah, Crypto is an interesting topic. I guess lots of maths is involved. I've read that since a lot of it comes down to factoring large numbers, quantum computers would basically shatter all encryption. So they need to come up with new methods to secure data.

Secret Key Cryptography

This is typically a 'symmetric' type of cryptosystem. You use the same key for encryption and decryption; which is why the key needs to be kept 'secret'. The most commonly used encryption algorithm is the DES (Data Encryption Standard).

DES is a block cipher. It operates on 64-bit blocks. It has a 56-bit key. The actual encryption algorithm is very complex. I won't go into the details (that's because I don't remember most of it! ;-)). However it uses several permutations; generates 16 keys from the first main key and then does a whole lot of stuff before coming up with the cipher text.

The way to crack DES is to try out all 2^56 keys! That entails *massive* computing power; however it can be done. It may take days or months to crack the key for one message, but it can be done. So it's not entirely secure.

So what they do is use Triple-DES. It uses 3 keys - that gives a nominal key size of 168 bits. Now that's almost impossible to crack with the computing power we have today. Security guru Bruce Schneier said - there isn't enough silicon in the galaxy or enough time before the sun burns out to brute-force Triple-DES.
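If you want to play with it, Triple-DES is exposed in the standard Java crypto API under the algorithm name "DESede" (encrypt-decrypt-encrypt). A minimal round-trip sketch; the method and class names are mine:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class TripleDesDemo {
    // Encrypt then decrypt a message with a fresh Triple-DES key.
    public static String roundTrip(String msg) throws Exception {
        // "DESede" = DES encrypt-decrypt-encrypt with a 3-part key
        SecretKey key = KeyGenerator.getInstance("DESede").generateKey();
        Cipher c = Cipher.getInstance("DESede/ECB/PKCS5Padding");

        c.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = c.doFinal(msg.getBytes("US-ASCII"));

        c.init(Cipher.DECRYPT_MODE, key);
        return new String(c.doFinal(cipherText), "US-ASCII");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("attack at dawn"));
    }
}
```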

Here's a lot of info about DES.

BTW, the unix password encryption uses DES... 25 times in a row.

Virtual functions called from ctors and dtors

Here's an interesting difference between C++ and C#.

Consider these two classes...

// C++
#include <iostream>
using namespace std;

class Base
{
    public:
        Base()
        {
            cout << "Base::ctor()\n";
            method();
        }

        virtual ~Base()   // virtual, so delete through a Base* runs Derived's dtor
        {
            cout << "Base::dtor()\n";
            method();
        }

        virtual void method()
        {
            cout << "Base::method()\n";
        }
};

class Derived : public Base
{
    public:
        Derived()
        {
            cout << "Derived::ctor()\n";
        }

        ~Derived()
        {
            cout << "Derived::dtor()\n";
        }

        virtual void method()
        {
            cout << "Derived::method()\n";
        }
};


What dyou think this prints?

{
    Base* b = new Derived;
    b->method();
    delete b;
}


The output is:

Base::ctor()
Base::method()
Derived::ctor()
Derived::method()
Derived::dtor()
Base::dtor()
Base::method()

But if you have the same two classes in C#, the output is:

Base::ctor()
Derived::method()
Derived::ctor()
Derived::method()
Derived::dtor()
Base::dtor()
Derived::method()

The derived method is ALWAYS called.

The reason is that in C++, the type of an object changes during construction. When you create a new Derived object, its constructor calls Base's constructor automatically. In Base's constructor, the type of the object is "Base". So when you call method() in its constructor, all it knows about is itself. So it calls its own method(). Only after you get to Derived's constructor will the object's type be "Derived". Same thing with destruction. Derived's destructor runs first, destroying itself. It then goes up to Base's destructor. Again, at this point Base only knows about itself and calls its own method().

It's different in C#. An object can only ever have one (fixed) type. It doesn't morph like in C++. So when you call Derived's constructor, it calls Base's constructor automatically. But at this point the type of the object has already been set to "Derived" and the system knows that it has overridden method(). So in Base's constructor, method() drops down to Derived's implementation. Same with destruction.

Apparently, the difference in C# has to do with the fact that it has a GC. The GC always needs to know the exact size of the object... it can't change. I dunno if this is the case in Java as well. I would assume so.
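Since the question of Java came up: Java behaves like C# here - calls made from a constructor dispatch to the most-derived override. A tiny sketch (the class names are mine) that you can run to check:

```java
public class VirtualInCtor {
    static String called;

    static class Base {
        Base() { method(); }            // dispatches virtually, as in C#
        void method() { called = "Base"; }
    }

    static class Derived extends Base {
        @Override void method() { called = "Derived"; }
    }

    public static void main(String[] args) {
        new Derived();
        System.out.println(called);     // prints "Derived"
    }
}
```

One extra gotcha in Java: when Base's constructor calls the override, Derived's fields haven't been initialized yet, so the override sees their default values.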

Personally, I feel the C++ way makes more sense. It's more logical. What dyou think?

Re: Happy Birthday CODEWORD

Good catch Rahul. I didn't remember myself. And I saw the post on the 14th but took my lazy ass till today to reply.

Overall I've been very happy. Pretty much exactly what I had hoped it would be about. Got a lot out of it and some discussions have been very insightful. Dyou ever go back and read previous posts? Sometimes I'm quite surprised and sometimes laugh at dumb stuff (mostly Rahul's posts ;-).

Here's to another good year... Cheers!

hey!

Hey guys,

I'm back. Not much techie stuff to write right now but I'm back on the blog... good to be back!

Recently I've gotten interested in stuff like cryptography, security etc. It uses a lot of number theory and it's fascinating. I'll try and post some related stuff soon.

Sunday, November 14, 2004

Happy Birthday CODEWORD

Well, it's been one year since Codeword began. Hopefully we'll go on much longer and discuss more tech stuff. And lots of years from now, look back and laugh at some really lame posts, like those from me!!

Friday, November 12, 2004

AOP

So there's all this talk of Aspect Oriented Programming going on right now. I still don't have a crystal clear picture of what it involves, but I'm beginning to understand what it's about and its benefits.

We had a project in my CS class which helped me understand the potential of AOP. My prof had no clue about AOP so that was not his intent - just something I realized on my own after our discussions on the subject.

We had to create a proxy object. The client would provide a list of interfaces and implementations for the proxy object. Essentially, instead of writing a concrete class that implements certain interfaces, you have only a proxy object: you tell it to dynamically stand in for the interfaces, and it has access to the classes that provide their implementations.

    Proxy
|-----------|
|If1 | Imp1 |
|If2 | Imp2 |
|If3 | Imp3 |
|-----------|


So if you have a handle to the proxy object, you can call any of the methods on the interfaces it implements, but first you would need to cast it...

Class[] interfaces = new Class[] { Class.forName( "If1" ), Class.forName( "If2" ), Class.forName( "If3" ) };

Class[] implementations = new Class[] { Class.forName( "Imp1" ), Class.forName( "Imp2" ), Class.forName( "Imp3" ) };

Object proxy = ProxyFactory.getNewInstance( interfaces, implementations );

If1 interface1 = (If1) proxy;
interface1.someMethod1(); // 1

If2 interface2 = (If2) proxy;
interface2.someMethod2(); // 2


So the way this works is that the proxy object internally has an InvocationHandler. The InvocationHandler intercepts any calls on the proxy object. So when 1 is executed, you get to the InvocationHandler and the system tells it which method is being called (someMethod1) and on what interface (If1). So at this point, InvocationHandler will create an object of type Imp1 and actually call the method on that object. Same thing when 2 is executed. In this case, an instance of type Imp2 is created and the someMethod2 is called on it.

interface1.someMethod1() --> Proxy --> InvocationHandler --> ( new Imp1() ).someMethod1() --> Return

interface2.someMethod2() --> Proxy --> InvocationHandler --> ( new Imp2() ).someMethod2() --> Return

Basically what this provides is a level of indirection. This is possible in large part because of reflection. The most important part is that your InvocationHandler is the one calling the actual method implementation. Normally we would manually create an object of Imp1 or Imp2 and call the respective methods, but here we don't interact with them directly. So you could execute some additional code before and after the method is called. The most common example given for AOP is logging. So we could log information before or after the method call is made.

interface1.someMethod1() --> Proxy --> InvocationHandler --> Log --> ( new Imp1() ).someMethod1() --> Log --> Return

interface2.someMethod2() --> Proxy --> InvocationHandler --> Log --> ( new Imp2() ).someMethod2() --> Log --> Return

This ability to inject your own functionality between someone else's method calls is quite interesting.
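For reference, the machinery described above is java.lang.reflect.Proxy with an InvocationHandler. Here's a small self-contained sketch with a logging "aspect" wrapped around the real call; the Greeter interface and its impl are made up for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    interface Greeter { String greet(String name); }

    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    // Wrap any Greeter in a dynamically generated proxy that logs around calls.
    static Greeter loggingProxy(final Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class[] { Greeter.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args)
                        throws Throwable {
                    System.out.println("before " + m.getName()); // the "aspect"
                    Object result = m.invoke(target, args);      // the real call
                    System.out.println("after " + m.getName());
                    return result;
                }
            });
    }

    public static void main(String[] args) {
        Greeter g = loggingProxy(new GreeterImpl());
        System.out.println(g.greet("world"));
    }
}
```

The proxy object genuinely implements Greeter, so the cast works, and every call funnels through invoke() - exactly the interception point described above.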

Saturday, October 30, 2004

BigInt Division

Mohn suggested sharing some of the BigInt code. The most interesting method I wrote was for division. It is highly inefficient, but interesting, I feel. The general long division learnt in Algebra class would be more efficient if implemented.

Assume that the other BigInt operations like <, *, - etc. have already been implemented. I'll try to add descriptive comments within the code (something I should make a practice!!).

We were asked to implement a BigInt containing up to 100 digits. So I used a vector for the digits and a bool for the sign of the number.

Here goes...

BigInt BigInt::operator/(const BigInt& rhs) {
    // special case: if the dividend is zero, the quotient is zero
    BigInt* zeroBigInt = new BigInt(0);
    if ( (*this) == (*zeroBigInt) ) {
        BigInt result(*zeroBigInt);
        delete zeroBigInt;          // don't leak on the early return
        return result;
    }
    // create copies of both lhs and rhs without sign
    // set the sign of the answer later
    // from now on, lhs and rhs are the positive values of the numbers
    BigInt* tempThis = new BigInt(*this);
    tempThis->sign = true;
    BigInt* tempRhs = new BigInt(rhs);
    tempRhs->sign = true;
    // if lhs < rhs, the quotient is 0
    if ( (*tempThis) < (*tempRhs) ) {
        BigInt result(*zeroBigInt);
        delete zeroBigInt;          // don't leak on the early return
        delete tempThis;
        delete tempRhs;
        return result;
    }
    BigInt* quotient = new BigInt(0);

    // get the number of digits of lhs and rhs
    int tempThisSize = tempThis->bigInt->size();
    int tempRhsSize = tempRhs->bigInt->size();

    // the difference between the two sizes should be at least 2 digits
    // tempThisSize will be decremented in the loop
    while ( tempThisSize > (tempRhsSize + 1) ) {
        // get the greatest power of ten which is 2 digits smaller than lhs
        BigInt* factor = new BigInt(1);
        BigInt* tenBigInt = new BigInt(10);
        for (int i = 0; i < (tempThisSize - tempRhsSize - 1); i++) {
            (*factor) = (*factor) * (*tenBigInt);
        }
        (*quotient) = (*quotient) + (*factor);

        // factorTemp is a copy of factor
        BigInt* factorTemp = new BigInt(*factor);

        // now increment the quotient by factor
        // assume that we obtained a factor of 100 earlier
        // in this step the quotient is incremented in steps of 100
        // as 200, 300, ... till the multiple just exceeds tempThis
        while ( (*tempThis) >= (*zeroBigInt) ) {
            (*tempThis) = (*tempThis) - ( (*factorTemp) * (*tempRhs) );
            (*quotient) = (*quotient) + (*factor);
        }
        // as a factor just higher than required has been taken,
        // perform a rollback
        (*tempThis) = (*tempThis) + ( (*factorTemp) * (*tempRhs) );
        (*quotient) = (*quotient) - (*factor) - (*factor);
        delete tenBigInt;
        delete factor;
        delete factorTemp;
        // keep reducing tempThisSize
        // if the earlier factor was 1000, now 100 can be generated,
        // so we approach the correct value
        tempThisSize--;
    }

    // now do the remaining division by simple subtraction
    // in the worst case we'll have to perform the subtraction 99 times
    BigInt* oneBigInt = new BigInt(1);
    while ( (*tempThis) >= (*zeroBigInt) ) {
        (*tempThis) = (*tempThis) - (*tempRhs);
        (*quotient) = (*quotient) + (*oneBigInt);
    }
    (*quotient) = (*quotient) - (*oneBigInt);
    delete oneBigInt;

    // set the sign of the output number
    quotient->sign = ( this->sign == rhs.sign );

    // some housekeeping and return
    BigInt result(*quotient);
    delete zeroBigInt;
    delete tempThis;
    delete tempRhs;
    delete quotient;                // the quotient itself used to be leaked
    return result;
}

I could have obtained better perf by creating all the temporary BigInts on the stack. Any more pointers? Any different algos?
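On different algos: the long division learnt in class works nicely in base 2 as shift-and-subtract - one quotient bit per step. Sketched here on plain longs rather than the BigInt class (divide is a hypothetical helper, for non-negative a and positive b):

```java
public class BinaryDivide {
    // Shift-and-subtract long division: bring down one bit of the
    // dividend at a time, subtract the divisor whenever possible.
    static long divide(long a, long b) {
        long quotient = 0, remainder = 0;
        for (int i = 62; i >= 0; i--) {
            remainder = (remainder << 1) | ((a >>> i) & 1); // bring down next bit
            if (remainder >= b) {
                remainder -= b;
                quotient |= (1L << i);                      // this quotient bit is 1
            }
        }
        return quotient;
    }

    public static void main(String[] args) {
        System.out.println(divide(123456789L, 1001L)); // same as 123456789 / 1001
    }
}
```

The same idea works digit by digit in base 10 (or base 2 on the digit vector), and it takes a number of steps proportional to the number of digits instead of to the size of the quotient.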

Monday, October 25, 2004

Re: C#

In the current module we are being taught programming in C/C++. C was just 2 sessions. C++ has more sessions but is still really fast. And we are covering the new std C++. One of the C++ assignments is to create a Big Integer (BigInt) data type. The BigInt can have up to 100 digits and we are supposed to overload operators for <, >, ==, +, -, /, % and a few others. Just completed this one, and division (/) has been amazing to code!!

I had to create a BigInteger in my "Generic Programming and the STL" class too. I'd be quite interested in seeing your code. I'll send you mine if you want.

I've always wanted to learn more programming languages. I think we agree that to be a complete dev you have to know more than one particular lang. Good to appreciate the design of each lang. It's just that I've been too lazy. Once you've entered the comfort zone with a lang, it's hard to get out.

C++ has been a good experience. Lots to learn still but seen some really cool stuff. I dunno if I'll ever do stuff like python etc..


Yup, I think it's important not only to learn different languages, but also different paradigms like procedural, object oriented, generic etc... Also different types like functional languages (Haskell) and dynamically typed languages (Python). Maybe you won't use it day to day, but it's always good to have that perspective.

I guess it's a question of time and priority. Learning different languages is fine, but you need to specialize in one or two which will be your bread and butter. And with something like C++, that's no easy task. The rest are the "good to know" type.

There's a limit to how much you can understand by reading.

Couldn't have said it any better myself. In reading vs actual coding, there is a world of difference.

Google's browser plans

http://mozillanews.org/?article_date=2004-10-19+01-52-31

All speculation of course. But if they do eventually release something like that, it would be the ultimate Windows/Web hybrid application. Seems like they will just be tying all their web properties into a client app.

Sunday, October 10, 2004

Re: C# ?

what's Mgpt? So are they going to teach you C/C++ in this next module or just use it for your programming?

Mgpt is one of the assessment methods here at Ncb. You'll get lots of info here. To put it simply, it's a sort of topcoder puzzle-solving contest. Complexity of problems varies from "very difficult" to "writing a compiler is easier" levels. We have to clear Mgpt's for two modules, Java programming and Data Structures using Java. Generally only 3-4 students clear both in the entire year.

In the current module we are being taught programming in C/C++. C was just 2 sessions. C++ has more sessions but is still really fast. And we are covering the new std C++. One of the C++ assignments is to create a Big Integer (BigInt) data type. The BigInt can have up to 100 digits and we are supposed to overload operators for <, >, ==, +, -, /, % and a few others. Just completed this one, and division (/) has been amazing to code!!

In the next modules we have to create projects and no assignment problems as such. In those we can choose any language of implementation, Java or C++. No prizes for guessing what I'll prefer.


Just for an overview - C# programming

I have a super huge grin on me face. You don't know how happy this made me. Change is good :-)


I've always wanted to learn more programming languages. I think we agree that to be a complete dev you have to know more than one particular lang. Good to appreciate the design of each lang. It's just that I've been too lazy. Once you've entered the comfort zone with a lang, it's hard to get out.

C++ has been a good experience. Lots to learn still but seen some really cool stuff. I dunno if I'll ever do stuff like python etc..


So hope that cleared some stuff up. What are your impressions of .NET from the book? I've been blogging a lot about this stuff so you should be familiar with some of it already - at least I hope.

I had to switch to C++ so I didn't do much. Most of the explanations you gave caused me to wonder... what happens in Java? So I'll probably research this for some time.

All your blogs on C# have been super helpful. I still have a few doubts, but those will get fixed only when I start coding. There's a limit to how much you can understand by reading.

Wednesday, October 06, 2004

Google's domain registrations

There was an article on slashdot regarding the registration of gbrowser.com. Some dude who was as early investor in google said that they weren't going to be entering the newly emerged browser wars. Check out these comments...

Now it makes you wonder why Google registered gbrowser.com?

  >> No it doesn't. They also registered googlesucks.com, but I don't
  >> think they feel that way about themselves.

    >> They registered Googlesucks.com? This is clear evidence
    >> they're entering the vacuum cleaner market!

      >> Crap! I was hoping it was an adult content
      >> search engine :(

Saturday, October 02, 2004

Tuesday, September 28, 2004

Re: C# ?

The DS module in Cdac is over and I just flunked in the Mgpt. The next module is C and C++ programming and so I should have lots of questions coming up soon.

Sorry if you've told me this before already but what's Mgpt? So are they going to teach you C/C++ in this next module or just use it for your programming?

Just for a change I got a book from the library which I plan to go through very fast just for an overview - C# Programming (with the public beta) by Wrox. So I've got another bunch of questions...

I have a super huge grin on me face. You don't know how happy this made me. Change is good :-)

Enums in c# are value types. So firstly are data types stored on the stack? And even class instances can be stored on the stack.. so what's the syntax for that?

In .NET there are two sets of types... Reference and Value types. Reference types are classes, interfaces, delegates and arrays. Value types are structs and enums. Basically, all types in the system have System.Object as the root. But value types also derive from System.ValueType. So you CANNOT create class instances on the stack. You can only create struct instances on the stack, which implicitly derive from System.ValueType. And btw, all primitive types in C# like int, double etc... are structs in the BCL (Base Class Library).

On enums I wanted to compare them with Java enums. In Java 5.0, enums were added which are actually classes. So when you say enum {something} you are actually extending a class. So enums in Java are pretty advanced. I'll have to read more to give more info. What about C#? You can get more info on Enums from this O'Reilly sample chapter.

I guess again the motivation for making enums classes was backward/forward compatibility. I scanned the article link you sent briefly and you're right... Java enums have much more ability since they are full-blown classes. Personally, I don't see the point of this. Enums play a very specific role in C++ and C#. They just define type-safe constants as a group. Why complicate matters by allowing an enum to do everything a class can do? Why would you want to define methods on an enum? Dyou see a benefit?
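For what it's worth, the benefit usually cited is that an enum constant can carry data and behavior, so per-constant logic lives with the constants instead of in switch statements scattered around the code. The classic planet example (the numbers here are rough approximations):

```java
public class EnumDemo {
    enum Planet {
        MERCURY(3.30e23, 2.44e6),
        EARTH  (5.97e24, 6.37e6);

        private final double mass;    // kg
        private final double radius;  // m

        Planet(double mass, double radius) {
            this.mass = mass;
            this.radius = radius;
        }

        double surfaceGravity() {     // behavior attached to the constants
            return 6.67e-11 * mass / (radius * radius);
        }
    }

    public static void main(String[] args) {
        // roughly 9.8 m/s^2 for EARTH
        System.out.println(Planet.EARTH.surfaceGravity());
    }
}
```

With plain integer constants you'd need a parallel lookup table and a switch; here adding a planet can't forget to update the gravity logic.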

Another thing was the huge number of keywords. I read on const and readonly. Are they really required over final?

I dunno the exact number, but I guess C# must have maybe 20+ keywords over Java.

Regarding const and readonly - they serve two different purposes. const is evaluated at compile time and is implicitly static, so there is only one copy for the whole class. readonly is evaluated at runtime and is not static, so every instance gets its own copy.

If you make any fields const in your app, when you compile it, the compiler will replace all the fields with the actual value. For ex.

class Math
{
    public const double PI = 3.14;
}

class Circle
{
    private double circumference( double radius )
    {
        return 2 * Math.PI * radius;
    }
}

Here, it will turn into 2 * 3.14 * radius. The compiler can make this optimization. You can't do it for readonly fields since they are evaluated at runtime.

But readonly fields have a different benefit. What if you shipped this code and then later on realized that PI is wrong, or you want to make it more accurate? If you had made it a readonly field, you could just make the change and ship the updated Math class. Your Circle class's circumference method would automatically use the new PI value without re-compiling, since PI is evaluated at runtime. But since it's const here, you will have to recompile.

Does final do both?
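As far as I know, Java's final covers both roles, with a wrinkle: a static final field initialized with a compile-time constant expression is a "constant variable", and javac inlines its value at the use site (like const), while a final field initialized at runtime is read at runtime (like readonly). A sketch (class name is mine):

```java
public class Constants {
    // compile-time constant expression: javac inlines 3.14 into call sites,
    // so clients compiled against it must be recompiled if it changes
    static final double PI_CONST = 3.14;

    // runtime-initialized final: read as a field at runtime, so shipping an
    // updated Constants class is enough, like C#'s readonly
    static final double PI_RUNTIME = Double.parseDouble("3.14159");

    public static void main(String[] args) {
        System.out.println(2 * PI_CONST * 10);    // literal baked in by javac
        System.out.println(2 * PI_RUNTIME * 10);  // field read at runtime
    }
}
```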

The main question that prompted this blog was inheritance. virtual, override, new, sealed and abstract... wow, so many keywords and complex relationships! Do you make use of all these keywords?? Isn't the Java style much more simple? Trying to appease C++ programmers, but they added a whole extra bunch of words!!

Again, all those keywords serve a purpose just as const vs readonly. They didn't just get keyword happy. I had written a blog regarding polymorphism in C++, Java and C# long back - http://codeword.blogspot.com/2003/12/polymorphism-c-vs-java-vs-c.html. It talks about virtual, override and new. So check it out and see if you think they serve some meaningful purpose.

Java uses one keyword which can be used on classes, methods and fields. It has the same essential meaning for all - preventing things from being changed. C# has taken a different route. It makes all methods non-overridable by default and then makes you explicitly say if you want to change something. This requires more keywords.

On boxing and unboxing, why the need for a different object class from the default Object class? In Java the wrapper classes for primitives like Integer for int can be cast to Object. Wrt Java 5.0 I think the compiler will be generating casting code which was generally written by hand.

One thing you should understand is that all the types are defined in the .NET BCL. The language itself does NOT define any types. The languages only provide an alias for primitive, string and object types. So in C#, int == System.Int32, long == System.Int64, double == System.Double, string == System.String, object == System.Object etc... Same way VB and Managed C++ provide something similar. So there is no different object class from the default Object class.
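On the Java 5.0 point in the question above: yes, with autoboxing the compiler generates the wrapping and unwrapping code that used to be written by hand. Roughly (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    static int firstPlusOne(List<Integer> list) {
        int x = list.get(0);        // javac inserts the .intValue() unboxing call
        return x + 1;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        list.add(42);               // javac inserts Integer.valueOf(42) boxing
        System.out.println(firstPlusOne(list)); // prints 43
    }
}
```

Before 5.0 you'd have written `list.add(new Integer(42))` and `((Integer) list.get(0)).intValue()` yourself; the generated bytecode is essentially the same.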

So hope that cleared some stuff up. What are your impressions of .NET from the book? I've been blogging a lot about this stuff so you should be familiar with some of it already - at least I hope.

Btw, check this link out - C# from a Java Developer's Perspective. It's a bit old but quite comprehensive and an easy read. He'll have to update it for Tiger. And before you expect some .NET bashing... he's a Microsoft employee.