I cannot stop marveling at the 'gem' of a piece of software I installed today: Ruby Enterprise Edition (www.rubyenterpriseedition.com). Because of the 'enterprise' in its name, my earlier impression was that it was proprietary software (I had heard that websites like Twitter use it), but to my pleasant surprise it turned out to be open source, distributed by Phusion (the same people behind the Passenger deployment tool). It is provided for all *NIX systems, with a ready-to-run .deb installer for Ubuntu.
Well, to the point itself: Ruby Enterprise Edition features:
1. An enhanced garbage collector. This allows one to reduce the memory usage of Ruby on Rails applications (or any Ruby application that takes advantage of the feature) by 33% on average.
2. An improved memory allocator. This increases Ruby's performance drastically.
3. Various developer tools for debugging memory usage and garbage collector behaviour (a small sketch of these follows the list).
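To give a flavour of the third point, REE exposes extra GC statistics from Ruby code. Here is a minimal sketch, assuming REE's documented stats methods (GC.enable_stats, GC.collections, GC.time, ObjectSpace.live_objects); the exact names may vary between versions, so check the REE documentation:

```ruby
# Minimal sketch of REE's extra GC instrumentation. These methods
# (GC.enable_stats, GC.collections, GC.time, ObjectSpace.live_objects)
# are REE-specific additions and are not present in stock Ruby 1.8.
GC.enable_stats

# Churn out some short-lived objects to give the GC work to do.
10_000.times { "allocate a short-lived string" * 10 }

puts "GC runs so far:   #{GC.collections}"
puts "Time spent in GC: #{GC.time} microseconds"
puts "Live objects:     #{ObjectSpace.live_objects}"

GC.disable_stats
```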
A lot of this optimization is POSIX-dependent, which is why there is no Windows version yet. The FAQ page (http://www.rubyenterpriseedition.com/faq.html) explains this, and a lot more about the improved platform, which is why many production servers are adopting it.
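To make the POSIX point concrete: much of the memory saving comes from REE's copy-on-write friendly garbage collector, and copy-on-write only pays off in combination with fork(), which Windows does not provide. A rough sketch of the idea; the forking-server setup mirrors what Passenger does and is my own illustration:

```ruby
# Rough illustration of why REE's memory savings depend on POSIX fork():
# a parent process loads the (large) application data once, then forks
# workers that share those memory pages copy-on-write. REE's GC avoids
# writing into object headers while marking, so the shared pages are
# not needlessly copied into every worker, as they would be on stock
# Ruby 1.8 after its first GC run.
app_data = Array.new(500_000) { |i| "record #{i}" }  # loaded once

pids = 3.times.map do
  fork do
    # Each worker reads app_data without duplicating it in RAM.
    puts "worker #{Process.pid} sees #{app_data.size} records"
  end
end
pids.each { |pid| Process.wait(pid) }
```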
However, the nicest thing about the whole installation was that it came with a lot of commonly used gems preinstalled, which is really appreciated, because otherwise we have to install them manually or repeatedly retry against the gem repository (a thing that, sadly, fails most of the time for me).
Hopefully you find this information helpful. If you feel anything above is wrong or contradictory, please do comment, as I might be mistaken in some areas that I haven't checked out yet.
Thursday, June 17, 2010
Towards RESTful Web Services
When we talk about distributed applications, the internet is the enabling technology that comes to mind. Sure, one can think of many other ways and protocols of performing a software feat on more than one computer at the same time, but the fact remains that the internet, or the World Wide Web, is the biggest network out there, and its ubiquity makes it appealing for implementing any such distributed application.
You can even think of a website as a distributed application, but here the client (you) is a passive person/computer that can only consume what is fed to it. Even dynamic websites ultimately generate static content (as does Web 2.0, which is built on user experience more than anything else), so even if a website appears to be a distributed application, there is no real distribution of computation going on. In a truly distributed application, however, the portions of the application reside on different computers and communicate with each other over the internet. This interchange of data and processing requests is what these kinds of applications need.
Before I further befuddle you with the idiosyncrasies of widely used but complex distributed application technologies such as the WS-* stack, we need to understand a simple phenomenon: on the web, the content of web pages is generated as HTML, which is read by the web browser on your computer and displayed to you as a page. If we change this format to a different one (still textual, not binary) such as XML or JSON, the end users of our application change too: now a program, rather than a person, can consume it.
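To make that concrete, here is a small Ruby sketch that asks a URL for a machine-friendly representation instead of HTML. The example.com address is made up for illustration:

```ruby
require 'net/http'
require 'uri'

# Hypothetical resource URL, for illustration only.
uri = URI.parse('http://example.com/books/42')

# A browser would ask this URL for HTML; a program asks the
# same URL for JSON via the Accept header.
http = Net::HTTP.new(uri.host, uri.port)
response = http.get(uri.path, 'Accept' => 'application/json')

puts response.code   # e.g. "200"
puts response.body   # the same book, as JSON instead of HTML
```

Nothing about the resource itself changed; only the representation did, and suddenly the "end user" is a program.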
So ultimately there is not much difference between a web site and a web service: what makes a web site easy for a human being to use can, in the same manner and over the same internet, be applied to a web service consumed by a program. The common web service protocol has been SOAP, which is essentially an application layer on top of whatever transport it operates over. It gave us a lighter way of creating distributed, interoperable applications than earlier technologies, but at an ever-increasing cost of complexity. Instead of reinventing the wheel, wouldn't it be great if we simply took the best parts of the web as it is and obtained something refreshingly easy, elegant and, dare I say, useful for our purposes?
This is what REST, or REpresentational State Transfer, is all about. It is not just an alternative method of creating web services; it is arguably the easier way of creating them, as it addresses the addressing and discovery concerns using the established design patterns of the web itself. The simplicity of this approach is not its weakness, as you might be thinking, but its strength. Other web service technologies can claim maturity and tooling that hides their complexity, but the changing face of REST is erasing that advantage. In my future posts, I'll explain how this change is going to make REST a force to reckon with.
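As a small preview of how naturally the idea maps onto code, here is a sketch using Sinatra (my choice of library, and a hypothetical /books resource; neither is prescribed by REST itself) that exposes one resource through the standard HTTP verbs:

```ruby
require 'sinatra'
require 'json'

# A minimal sketch of REST's addressing idea: one URL per resource,
# with the HTTP verb saying what to do with it. The in-memory BOOKS
# hash stands in for a real data store.
BOOKS = { 42 => { 'title' => 'RESTful Web Services' } }

get '/books/:id' do                 # read a representation
  book = BOOKS[params[:id].to_i]
  halt 404 unless book
  content_type :json
  book.to_json
end

put '/books/:id' do                 # create or replace
  BOOKS[params[:id].to_i] = JSON.parse(request.body.read)
  status 204
end

delete '/books/:id' do              # remove
  BOOKS.delete(params[:id].to_i) ? status(204) : halt(404)
end
```

Notice there is no service contract or discovery layer to set up: the URL is the address, and the HTTP verb is the operation.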