27.8.14

'detached HEAD'


$ git checkout camel-2.13.1
Checking out files: 100% (2493/2493), done.
Note: checking out 'camel-2.13.1'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at b64d181... [maven-release-plugin] prepare release camel-2.13.1

[dz]: Java 9 Features Announced — What Do You Think?

http://java.dzone.com/articles/java-9-features-announced

[dz]: Traditional Log Management Is Dead. Long Live Autonomic Analytics!

http://java.dzone.com/articles/traditional-log-management
http://java.dzone.com/articles/another-spring-boot

19.8.14


  • http://www.telecomabc.com
  • http://www.cenx.com/technology/standards.html
  • http://4g-lte-world.blogspot.com/2012/05/default-bearer-dedicated-bearer-what.html
    • Default Bearer, Dedicated Bearer... What exactly is bearer ?
  • http://en.wikipedia.org/wiki/Picocell

5.8.14

JDK 1.7.0_40 or newer - Oh yeah!

[INFO] --- maven-enforcer-plugin:1.1:enforce (enforce-java) @ jetty-project ---
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.RequireJavaVersion failed with message:
[ERROR] OLD JDK [1.7.0_09] in use. Jetty 9.2.2.v20140723 requires JDK 1.7.0_40 or newer

https://webtide.com/jetty-9-features/
Java 7 – We have removed some areas of abstraction within jetty in order to take advantage of improved APIs in the JVM regarding concurrency and nio, this leads to a leaner implementation and improved performance.

Test or Verification?

I have been in some discussions about system testing. System testing is important, no doubt. But system testing is not a guarantee of software quality, not even close - that has been known for a long time, since before the Agile movement, mostly because system testing consumes so many resources while its path coverage is so limited. We can use system testing to make sure all components or units of the system are in place and wired together correctly. But if you rely on system testing - longevity runs, minimal acceptance tests, and regression tests - to guarantee quality, you are in trouble. From an extreme point of view, if system tests help find a lot of defects in your product, that means you have a serious software quality issue.

A system test is sometimes called an end-to-end test. In most cases, we put the system into a certain test environment with everything set up properly, run a subset of system features in a certain order, or have people access the system in a certain way. For a certain period, we watch the system and see whether it is running all right.

  • We need to set up the system. That can be tiresome and time-consuming.
  • We need to set up the system properly - that means we can only test the happy path.
  • We need to drive the test in a certain way, either manually or automatically.
Here come the problems:
  • It will take time. Maybe minutes, hours, or even days. If there are intermittent defects, it may take weeks or months or even longer for the problem to show up. It doesn't look too bad until we put it into the release cycle: how many minutes, hours, or days do we have in a release cycle?
  • We could have some edge cases tested in our system, but most of the time we are testing the happy paths. How often do we have problems in the happy path? That means we are wasting our time. More importantly, it also means the coverage of the test is so poor that we can't build up our confidence; many defects will remain unknown and slip through to the customers.
  • Manual tests consume a lot of resources.
  • Some teams build up automation for their system tests. That's a good thing. But automated end-to-end tests are not easy to build; they take time as well. Some teams simply fail the mission by spending too much time in this area. I was there before.
A better solution is integration tests. Along with some level of system, end-to-end testing, we also test smaller modules of the system. This is a good approach and I like it.
  • Having smaller modules that are testable means the system is decoupled properly. It will foster the idea of decoupling in the long run.
  • We can mock some input, output, or database connection to make sure we have good coverage of the code paths. That is hardly possible in end-to-end testing (see the sketch below).
  • With smaller pieces of the software, we can usually build up the test environment more quickly, and the test cycle will be shorter.
I personally like integration tests.
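As a rough illustration of the mocking point above, here is a minimal sketch of an integration-style test that swaps the database out for a hand-rolled in-memory fake. The OrderService, OrderRepository, and the JUnit 4 setup are all hypothetical names for this example; the point is only that a small, decoupled module lets us replace the database connection and still exercise the real code path in milliseconds.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class OrderServiceIntegrationTest {

    // Hypothetical port that would normally be backed by a database.
    interface OrderRepository {
        void save(String order);
        List<String> findAll();
    }

    // Hand-rolled in-memory fake; no database or network needed.
    static class InMemoryOrderRepository implements OrderRepository {
        private final List<String> orders = new ArrayList<>();
        public void save(String order) { orders.add(order); }
        public List<String> findAll() { return orders; }
    }

    // Hypothetical module under test: small enough to wire up instantly.
    static class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
        void placeOrder(String order) {
            if (order == null || order.isEmpty()) {
                throw new IllegalArgumentException("empty order");
            }
            repository.save(order);
        }
    }

    @Test
    public void placedOrdersArePersisted() {
        OrderRepository repository = new InMemoryOrderRepository();
        OrderService service = new OrderService(repository);

        service.placeOrder("book-42");

        assertEquals(1, repository.findAll().size());
    }
}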

The most important, and most widely recognized, test type is the unit test. As much as I like integration tests, I rely on unit tests for system quality for the following reasons:
  • It is extremely fast. A typical unit test takes milliseconds, compared to seconds or more for an integration test. It is usually a bad sign if we have a unit test that runs for more than a second.
  • It can, and should, be done along with the coding cycle by the same engineer, and it will be reviewed later and evolve along with the code. In this way, the code paths can be covered well.
  • The AAA pattern for unit tests makes sure we don't over-complicate the test. If we don't follow, or find it hard to follow, the AAA pattern, it is a good time to review your code and do some refactoring (see the sketch after this list).
    • A - Arrange
    • A - Act
    • A - Assert
  • And yes, unit tests help us recognize opportunities for refactoring and help verify the refactoring. Some awfully built unit tests add a lot of burden when refactoring the original code; that is obviously a sign of bad unit tests, and of bad software quality. If you see this, you'd better rebuild the tests. Simply getting rid of the tests and rebuilding them can be a solution if you don't see a better one.
  • TDD - I use TDD, but not always. I can see the value of it, especially with teams that have been using the practice successfully. But my personal style would be last-mile-driven development. That means I build up my solution very quickly in whatever way I feel comfortable with, sometimes TDD, sometimes not. For the last mile, I begin building some tests, not necessarily unit tests, to drive my implementation. After that, I build more unit tests to cover my code, along with any necessary refactoring.
    • Software engineering is a balance of ideas and engineering. TDD is good for ensuring the engineering part, but constantly switching between business code and test code creates unnecessary interruptions, and that is not good for the idea part. So I would rather let the idea flow out smoothly until I hit the engineering necessity.
    • Sometimes I simply begin with a Hello World program and build up all the happy paths, along with some edge cases if I can come up with them. Then I review my implementation and make sure it works well. Like I said, I don't want to interrupt myself while focusing on the "idea" part. Reviewing the implementation will guarantee the final quality, and refactoring and unit tests will help assure that.
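To make the AAA point above concrete, here is a tiny, hypothetical JUnit 4 unit test with the three phases labeled. The ShoppingCart class is invented purely for the illustration.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ShoppingCartTest {

    // Hypothetical class under test.
    static class ShoppingCart {
        private int total;
        void add(int price) { total += price; }
        int total() { return total; }
    }

    @Test
    public void addingTwoItemsSumsTheirPrices() {
        // Arrange: build the object under test and its inputs.
        ShoppingCart cart = new ShoppingCart();

        // Act: perform the single behavior being tested.
        cart.add(10);
        cart.add(5);

        // Assert: verify the observable outcome.
        assertEquals(15, cart.total());
    }
}

If a test cannot be laid out in these three phases without a pile of setup or multiple unrelated assertions, that is usually the code, not the test, asking for refactoring.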
Other than end-to-end/system tests, integration tests, and unit tests, I also like a quick test for some unknown issue.
It is almost common sense that we should test a given solution thoroughly but with minimal effort. However, sometimes people don't do that. For example, how can we make sure a new JVM parameter works for our system? Some teams might put the new JVM parameter into the system and have it run for a while; if the system is happy, then use it. Oh, that doesn't sound right, does it? We should build a small simulation environment and verify the parameter to understand how it behaves before we even put it in place. Otherwise there is a good chance that your happy-path test doesn't reveal anything and the problem slips through to your customer.
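As one way such a quick, isolated check could look, here is a sketch of a small Java program that could be run under the candidate JVM flag outside the production system. It simply reports which arguments the JVM actually received and how the garbage collectors behaved under a crude allocation load; the allocation loop and the GC-flavored flag are assumptions for the illustration.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmFlagProbe {

    public static void main(String[] args) {
        // Confirm the flag under test really reached this JVM.
        System.out.println("JVM arguments: "
                + ManagementFactory.getRuntimeMXBean().getInputArguments());

        // Crude allocation load to exercise the garbage collectors.
        for (int i = 0; i < 1_000_000; i++) {
            byte[] chunk = new byte[4096];
            if (chunk.length == 0) {
                System.out.println("unreachable"); // trivially reference chunk so it is not obviously dead
            }
        }

        // Report collector activity observed during the run.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
        }
    }
}

Running it twice, say with and without the candidate flag (java -XX:+UseG1GC JvmFlagProbe versus plain java JvmFlagProbe), gives a first data point long before the flag goes anywhere near production.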

When my team had a big problem with the JMS bus, I built a SWAT team of three to find a valid solution. That was a great journey; we successfully delivered our solution and helped the product reach its first GA. The secret of my SWAT team was that we simulated our communication model and built a very simple ActiveMQ program to verify the configuration options, based on messaging theory. To be honest, none of us were JMS experts at that time. But after a week of thorough tests, we found the solution the team needed and delivered it with high confidence.
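For context, a stripped-down version of that kind of simulation could look like the following: a single producer and consumer talking through an in-process ActiveMQ broker, which makes it cheap to flip configuration options and observe the effect. The queue name, broker URL, and message are placeholders, and the ActiveMQ client library is assumed to be on the classpath; this is not the actual program we used.

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsSimulation {

    public static void main(String[] args) throws Exception {
        // vm:// starts an in-process broker, so the simulation needs no external setup.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");

        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("simulation.queue");

            // Producer side of the communication model.
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("probe-message"));

            // Consumer side; the timeout keeps the run bounded if a configuration is wrong.
            MessageConsumer consumer = session.createConsumer(queue);
            TextMessage received = (TextMessage) consumer.receive(2000);
            System.out.println("received: " + (received == null ? "nothing" : received.getText()));
        } finally {
            connection.close();
        }
    }
}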

But why do some teams not do that? Here are the reasons I can think of:

  • Not able to abstract the test goal into a simulation.
  • Lack of confidence, usually caused by the low quality of the product.
  • Politics of the development team - being afraid of making decisions based on an abstraction or a simulation environment.
That's the way I value different kinds of tests.