Tuesday, December 16, 2008
Wednesday, December 3, 2008
So, cloud computing is a deployment option for your application which has the following four characteristics:
You run your stuff on other people's machines and infrastructure (IaaS). These resources can be consumed either from an intranet (for large enterprises and government use) or over the Internet (for small and midsized companies). But the main point here is that all you pay for the infrastructure is money.
Using this deployment option should save you money, possibly a lot of money.
- Reduced CAPEX: the money you would otherwise need to invest upfront in order to support expected and unexpected changes in capacity.
- Reduced OPEX: let cloud providers do what they do best, maintain large machine infrastructures. Hopefully (with the right amount of competition) this will reduce your IT OPEX.
- New financial model: the whole ?aaS concept presumes a different (and fairer) revenue model than what we are used to in the traditional software industry: you pay only for the resources you use, with no upfront payment and no long-term commitment.
You are able to seamlessly adjust the amount of infrastructure you consume. This helps you handle peaks more efficiently. You no longer need to choose between compromising your SLA at peak times or letting your servers rot in the cellar when you don't really need them.
You, as a developer (and I am intentionally focusing on developers), are able to allocate and deallocate resources by yourself through an API. No need to sign forms, no paperwork. You can write or use software which will do the job for you.
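To make the self-service idea concrete, here is a minimal sketch of what provisioning through an API could look like. The `CloudClient` interface and its method names are invented for illustration; they are not any real provider's API.

```java
// Hypothetical sketch of self-service provisioning through an API.
// CloudClient and its methods are invented for illustration only;
// real providers expose similar start/stop calls.
public class Provisioning {
    interface CloudClient {
        String startServer(String imageId); // returns a server id
        void stopServer(String serverId);
    }

    // Scale up for a traffic peak, then release the machines:
    // no forms, no paperwork, just code.
    static int handlePeak(CloudClient cloud, int extraServers) {
        String[] ids = new String[extraServers];
        for (int i = 0; i < extraServers; i++)
            ids[i] = cloud.startServer("my-app-image");
        // ... serve the peak ...
        for (String id : ids)
            cloud.stopServer(id);
        return extraServers;
    }
}
```

The point is not the specific calls but that capacity becomes something your own software can manage.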
And now to my thoughts:
So I have a crazy vision regarding these new concepts. What I see in my mind is a billion-USD internet company with a size of 10 people, who altogether bring unique (probably patented) value, and whose entire job is to orchestrate outsourced resources to the benefit of the company:
A CEO - orchestrating vision and execution with a set of VAs
VP sales - orchestrates sales operations through an effective affiliate network
VP infrastructure - manages the application in the cloud
VP support - orchestrates an outsourced call center
VP R&D -
4 developers - developing only core business logic, consuming open source projects for everything else.
I know it is a bit extreme, but I don't care. I am sure it will happen one day.
Monday, November 24, 2008
I have big expectations for this summit, as I believe the future is in the cloud.
This is not an affiliation link :-) you are welcome to register I'd love to see you there!
Sunday, November 23, 2008
Tuesday, November 18, 2008
After investigating it a little, I want to share the following information. HTTPS (SSL) requests to the server involve two extra round trips, for certificate authentication and key exchange. As a result, a short HTTPS request requires 3 times the network round trip time (6 times the one-way latency). For example, if latency is 90 ms and server time is 20 ms, an HTTPS request will take 90*6 + 20 = 560 ms, while an HTTP request will take 90*2 + 20 = 200 ms. Quite an impressive difference.
That said, it is extremely important to reduce the number of HTTPS requests to a server: combining HTTPS requests is 3 times more valuable than combining regular HTTP requests.
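The arithmetic above is easy to capture in a few lines. This is just a back-of-the-envelope model using the post's assumptions (one network round trip for HTTP, three for a fresh HTTPS connection), not a general performance model:

```java
// Back-of-the-envelope estimate of HTTP vs HTTPS request time.
// Assumes the post's figures: HTTP needs 1 round trip (2x one-way
// latency) and a fresh HTTPS connection needs 3 round trips
// (6x latency) for the handshake, plus server processing time.
public class RequestCost {
    // one-way latency and server time in milliseconds
    static long httpMillis(long latencyMs, long serverMs) {
        return 2 * latencyMs + serverMs;  // 1 round trip
    }
    static long httpsMillis(long latencyMs, long serverMs) {
        return 6 * latencyMs + serverMs;  // 3 round trips
    }
    public static void main(String[] args) {
        System.out.println("HTTP:  " + httpMillis(90, 20) + " ms");  // 200 ms
        System.out.println("HTTPS: " + httpsMillis(90, 20) + " ms"); // 560 ms
    }
}
```

Note this models the first request on a connection; keep-alive and SSL session reuse soften the penalty for subsequent requests.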
Tuesday, June 24, 2008
The busy Java developer’s Guide to Scala
Performance Tuning a web shop with open source tools
Language-Oriented Programming: Shifting Paradigms
Computer language evolution enables us to pave over disturbing problems (e.g. GC as a mechanism to pave over error-prone memory allocation). Neal Ford introduced a very important observation: one of the most powerful aspects of the Java platform is the community and the amount of open source frameworks. This creates a new problem to pave over. Each framework has its own jargon, and adapting it to the Java language leads to complicated syntax which is usually very wet (not DRY – Don't Repeat Yourself). Neal offers the use of DSLs to approach this problem; in other words, frameworks will be transformed into carefully designed DSLs. ANTLR as a lexical analyzer and environments like JetBrains MPS can help reach this with less effort.
Extreme Transaction Processing, Low latency and performance
John Davis, a banking expert, gave a startling session on design criteria in the online trading arena. In this world a 1 ms delay in processing a message can lead to losses of $100M. As a result, banks try, for example, to locate their trading infrastructure as close as possible to the trading backbone (usually in London) in order to reduce latency. In addition, GC can be a real problem: we are used to thinking of a 200 ms GC pause as something reasonable, but that is 200x the 1 ms threshold, and you do not want to lose money on GC. This leads to weird solutions, such as restarting the VM before the first GC and redirecting to a different cluster member in the meanwhile. Another point mentioned is that traditional RDBMSs are not capable of handling tens of thousands of transactions at a reasonable price. The solution is to use in-memory databases or caching mechanisms (e.g. GigaSpaces, Oracle Coherence, Terracotta, etc.).
Concurrency and High Performance
Kirk Pepperdine's session had an important punch line. Processor clock speed is stuck at the 3 GHz boundary, and this situation is not likely to change in the near future. CPU vendors are going to keep up with Moore's law by doubling the number of cores every 18 months. This is a fact developers can't afford to ignore, leading to the inevitable conclusion (punch line ahead): is your application ready to double its concurrency in the next 18 months?
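One small, concrete preparation for that question is to stop hard-coding thread counts and instead size worker pools from the machine's core count at startup. A minimal sketch (the `6 * 7` task is just a placeholder for real CPU-bound work):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Size the worker pool from the core count at startup instead of
// hard-coding it, so the same code uses twice the cores when they
// arrive 18 months from now.
public class CoreScaling {
    static ExecutorService newWorkerPool() {
        int cores = Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(cores);
    }

    // Submit one placeholder task and return its result.
    static int runSample() {
        ExecutorService pool = newWorkerPool();
        try {
            Future<Integer> f = pool.submit(new Callable<Integer>() {
                public Integer call() { return 6 * 7; } // stand-in for CPU-bound work
            });
            return f.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("cores: " + Runtime.getRuntime().availableProcessors());
        System.out.println("sample result: " + runSample());
    }
}
```

Of course, a right-sized pool only helps if the work itself can be split into independent tasks, which is the harder half of the question.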
Sunday, June 22, 2008
Thursday, June 19, 2008
A: The City.
B: Which city?
B: So why don’t you say
A: Well, since I have got here I have an unstoppable urge to stroll in the beautiful streets.
B: So why don’t you?
I landed in Prague yesterday at 08:45 and went directly to the conference. (Comment: try to avoid these scenarios as much as you can; I was dead tired all day long.) I arrived at the conference two hours late and missed the first two sessions :-(.
While I really want to share as much knowledge as I can, I am reluctant to write long descriptions and summaries of sessions. A lecture heard can enlighten one's mind, but it is hard to transfer the essence of that enlightenment to a blog. Therefore, I am going to share with you the single most important piece of information I took from each session.
The two most effective sessions I attended on day 1 are (drums!!!!):
Monitoring, Management and Troubleshooting in the Java SE 6 Platform
Jean-Francois Denis from Sun gave a very interesting session on the new JMX abilities and tools in JDK 6. The lecture started with the very basics and moved on to more advanced issues. The most useful piece of information, from my point of view, is VisualVM. A real open source lightweight Java profiler!!
Java Performance Tooling
Dr Holly Cummins from IBM is a very colorful person and a funny lecturer. She gave a nice introductory session on performance troubleshooting. Holly exposed me to the term 'lock-bound', which is a brilliant terminology for saying 'well, we use locking mechanisms extensively and there is a lot of contention on these locks... this is why our application sucks'. From this lecture I learned that IBM has a set of nice free tools which you can use even if you are not using the IBM JVM. Follow this link for more information.
Wednesday, June 18, 2008
I am on my way to TSSJS. This is the first time I am attending this conference and I am really thrilled, as I have heard a great deal about it.
I will update my blog during the conference whenever I have something interesting to say :-)
If you attend the conference come and say hello :-)
I have a flight to catch....
Monday, June 9, 2008
So I will summarize my knowledge regarding this issue. While I agree it is a bit shallow, it is much better than nothing at all.
Tools like IESieve and IEdrip have proven inefficient when it comes to GWT leaks: the code is too big for them, and the transformation from Java to JS complicates things.
I will be more than delighted to be proven an idiot; if someone has a better (proven) approach, please let me know.
When it comes to solving/preventing memory leaks, the best approach will always be code review. Hence, here is a list of guidelines for reviewing GWT code:
A first list of guidelines, stating the obvious:
- Avoid writing JSNI code. Google did quite a good job writing "almost" leak-free code, and it is easy to ruin it if you do not know what you are doing. Remember: every piece of JSNI code you write will lower your productivity.
- Do not use the DOM.* methods (except the setStyle... ones, which are safe). Manipulating the DOM yourself will lead you directly toward a memory leak.
- Static variables containing (even indirectly) references to widgets and DOM objects may cause a leak.
- According to Google it should not happen :-), but in some cases event listeners may leak. Unregister them when the window unloads.
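To illustrate that last guideline, here is a plain-Java sketch of the register-and-detach-on-unload pattern. The `Widget` and `Listener` types are stand-ins I made up for the example, not real GWT classes; in GWT the `unregisterAll()` call would go in the window close/unload hook.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the pattern behind the last guideline: track every
// listener you register, and detach them all in one place when the
// "window" goes away, so no listener keeps a widget alive.
public class ListenerTracker {
    interface Listener { void onEvent(); }

    // Stand-in for a widget that holds listener references.
    static class Widget {
        final List<Listener> listeners = new ArrayList<Listener>();
        void addListener(Listener l) { listeners.add(l); }
        void removeListener(Listener l) { listeners.remove(l); }
    }

    private final Map<Widget, List<Listener>> registered =
            new HashMap<Widget, List<Listener>>();

    // Always register through the tracker, never directly.
    void register(Widget w, Listener l) {
        w.addListener(l);
        if (!registered.containsKey(w))
            registered.put(w, new ArrayList<Listener>());
        registered.get(w).add(l);
    }

    // Call this from the window-unload hook.
    void unregisterAll() {
        for (Map.Entry<Widget, List<Listener>> e : registered.entrySet())
            for (Listener l : e.getValue())
                e.getKey().removeListener(l);
        registered.clear();
    }
}
```

The design choice is simply to make registration go through one chokepoint, so cleanup cannot be forgotten in a dozen scattered places.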
Lion in the desert approach
That said, all you need to do is comment out pieces of code until the memory leak is eliminated.
This process is neither easy nor fun; some tips to improve its effectiveness:
- Memory leaks are elusive: sometimes removing an irrelevant piece of code will stop the memory leak. As a result, you should try to narrow down to the smallest piece of code which still leaks before eliminating it. This will ensure you do not shoot at the wrong target.
That's all, folks :-)
Friday, May 30, 2008
And one piece of good news to close this post: I opened a bug with Google regarding a memory leak when closing a window 5 months ago. I forgot all about it, but suddenly, out of the blue, I got an email: the bug was verified and fixed. The fix will be available in GWT 1.5.
Tuesday, May 27, 2008
When are we going to see dual core mosquitoes?
So, now I will check again that the post title is not empty and....