30.11.11

29.11.11

Stupid performance trick in Java | Javalobby


This class defines six categories of operations upon byte buffers:
  • Absolute and relative get and put methods that read and write single bytes;
  • Relative bulk get methods that transfer contiguous sequences of bytes from this buffer into an array;
  • Relative bulk put methods that transfer contiguous sequences of bytes from a byte array or some other byte buffer into this buffer;
  • Absolute and relative get and put methods that read and write values of other primitive types, translating them to and from sequences of bytes in a particular byte order;
  • Methods for creating view buffers, which allow a byte buffer to be viewed as a buffer containing values of some other primitive type; and
  • Methods for compacting, duplicating, and slicing a byte buffer.
Byte buffers can be created either by allocation, which allocates space for the buffer's content, or by wrapping an existing byte array into a buffer.
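
A quick sketch of the two creation routes (plus the direct variant discussed further below); the class name, sizes, and values here are illustrative only:

 import java.nio.ByteBuffer;

 public class BufferCreation {
     public static void main(String[] args) {
         ByteBuffer allocated = ByteBuffer.allocate(1024);      // allocation: backed by a fresh byte[]
         ByteBuffer direct = ByteBuffer.allocateDirect(1024);   // direct allocation: off-heap, for efficient I/O
         byte[] existing = {1, 2, 3, 4};
         ByteBuffer wrapped = ByteBuffer.wrap(existing);        // wrapping: shares the existing array
         wrapped.putInt(0, 0x01020304);                         // absolute put of a multi-byte value
     }
 }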

Other Useful Information

  • Direct vs. non-direct buffers
  • Access to binary data
  • Invocation chaining

ByteBuffer.compact()


public abstract ByteBuffer compact()
Compacts this buffer (optional operation). The bytes between the buffer's current position and its limit, if any, are copied to the beginning of the buffer. That is, the byte at index p = position() is copied to index zero, the byte at index p + 1 is copied to index one, and so forth until the byte at index limit() - 1 is copied to index n = limit() - 1 - p. The buffer's position is then set to n+1 and its limit is set to its capacity. The mark, if defined, is discarded.
The buffer's position is set to the number of bytes copied, rather than to zero, so that an invocation of this method can be followed immediately by an invocation of another relative put method.
Invoke this method after writing data from a buffer in case the write was incomplete. The following loop, for example, copies bytes from one channel to another via the buffer buf:
 buf.clear();          // Prepare buffer for use
 for (;;) {
     if (in.read(buf) < 0 && buf.position() == 0)
         break;        // No more bytes to transfer
     buf.flip();
     out.write(buf);
     buf.compact();    // In case of partial write
 }
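
Filled out into a self-contained program (the file names and buffer size are placeholders), the same copy loop looks like this:

 import java.io.FileInputStream;
 import java.io.FileOutputStream;
 import java.nio.ByteBuffer;
 import java.nio.channels.FileChannel;

 public class ChannelCopy {
     public static void main(String[] args) throws Exception {
         try (FileChannel in = new FileInputStream("in.dat").getChannel();
              FileChannel out = new FileOutputStream("out.dat").getChannel()) {
             ByteBuffer buf = ByteBuffer.allocateDirect(8192);
             buf.clear();              // Prepare buffer for use
             for (;;) {
                 if (in.read(buf) < 0 && buf.position() == 0)
                     break;            // EOF and buffer fully drained
                 buf.flip();
                 out.write(buf);
                 buf.compact();        // In case of partial write
             }
         }
     }
 }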

Buffer.clear()

public final Buffer clear()
Clears this buffer. The position is set to zero, the limit is set to the capacity, and the mark is discarded. Invoke this method before using a sequence of channel-read or put operations to fill this buffer. For example:
 buf.clear();     // Prepare buffer for reading
 in.read(buf);    // Read data
This method does not actually erase the data in the buffer, but it is named as if it did because it will most often be used in situations in which that might as well be the case.
Returns:
This buffer

Comparing byte[], direct ByteBuffer, and heap ByteBuffer

Direct ByteBuffers provide very efficient I/O, but getting data into and out of them is more expensive than with byte[] arrays. Thus, the fastest choice is going to be application-dependent. Amazingly, in my tests, if the buffer size is at least 2048 bytes, it is actually faster to fill a byte[] array, copy it into a direct ByteBuffer, and then write that, than to write the byte[] array directly. However, for small writes (512 bytes or less), writing the byte[] array using OutputStream is slightly faster. Generally, using NIO can be a performance win, particularly for large writes. You want to allocate a single direct ByteBuffer, and reuse it for all I/O to and from a particular channel. However, you should serialize and deserialize your data using byte[] arrays, since accessing individual elements from a ByteBuffer is slow.
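
A minimal sketch of that recommendation, assuming a FileChannel and pre-serialized byte[] data; the class name and buffer size are my own:

 import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.nio.channels.FileChannel;

 public class ReusableDirectBufferWriter {
     // One direct buffer, allocated once and reused for every write
     private final ByteBuffer direct = ByteBuffer.allocateDirect(8192);

     void write(FileChannel channel, byte[] serialized) throws IOException {
         int offset = 0;
         while (offset < serialized.length) {
             direct.clear();
             int chunk = Math.min(direct.remaining(), serialized.length - offset);
             direct.put(serialized, offset, chunk);   // bulk copy from the array
             direct.flip();
             while (direct.hasRemaining()) {
                 channel.write(direct);               // drain the direct buffer to the channel
             }
             offset += chunk;
         }
     }
 }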

Strangely, these results also seem to suggest that it could be faster to provide an implementation of FileOutputStream that is implemented on top of FileChannel, rather than using the native code that it currently uses. It also seems like it may be possible to provide a JNI library for non-blocking I/O that uses byte[] arrays instead of ByteBuffers, which could be faster. While GetByteArrayElements always makes a copy (see DEFINE_GETSCALARARRAYELEMENTS in JDK7 jni.cpp), GetPrimitiveArrayCritical obtains a pointer, which could then be used for non-blocking I/O. This would trade the overhead of copying for the overhead of pinning/unpinning the array in the garbage collector, so it is unclear whether this would be faster, particularly for small writes. It would also introduce the pain of dealing with your own JNI code, and all the portability issues that come with it. However, if you have a very I/O-intensive Java application, this could be worth investigating.

Where to Get Sample Java Webapps | Javalobby

Where to Get Sample Java Webapps « The Holy Java

I like this list.


Where to Get Sample Java Webapps

Posted by Jakub Holý on November 23, 2011
I was unsuccessfully looking for a decent Java web application, neither too simple nor too complex, for the Iterate hackathon “War of Web Frameworks”. I want to record the demo apps and options I’ve found here, in case I ever need them again. Tips are welcome.
  • Hibernate CaveatEmptor – 2006, no UI, richer domain model (+- 15), auction site, created for the book Hibernate in Action, uses EJB3
  • EclipseLink Distributed Collatz Solver – 2011, 6 entities, JPA 2.0, EJB 3.1, JSF 2.0, JAX-RS, intended to test distribution and JPA behavior under load, primitive UI, computation-intensive app (multiple SE clients + 1 EE server)
  • Alfresco CMS – stable, complex application with REST API (CMIS compatible) based on its “web scripts”
  • NetBeans Samples including 1-entity Pet Catalog – usually too primitive (1-few classes, ..)
  • Spring samples (svn) – including jpetstore (5 DAOs, Struts & S. MVC UI), petcare, petclinic[-groovy], spring-mvc-showcase; another example is Spring Greenhouse (a Spring reference app., “App Catalog that allows Developers to develop new client apps for which users may establish Account->App connections”), the functionality of the UI seems to be quite primitive
  • Demos of JSF component libraries (RichFaces, PrimeFaces, …) – likely tightly coupled to JSF
  • A. Bien’s x-ray project – minimal UI, developed for the book “Real World Java EE Night Hacks”
  • Pebble Blog – lot of files and servlets even though the use cases are quite simple (post blog, view posts, ..)
  • Jersey Samples: Bookstore (download links) – a simple webapp built on JAX-RS (REST) and Jersey’s implicit views in the form of simple JSPs. 4 resource classes. Highly reusable with other frameworks that support REST service layers.



InfoQ: Scala+GWT Brings Scala to the Browser, New Documentation Site and Scala Days 2012 Announced

The Scala+GWT project aims to compile Scala code for the browser via the GWT toolchain.

Just marking it here for future reference.

InfoQ: JDK Enhancement Process


JDK Enhancement Proposal


  • to allow OpenJDK committers to submit ideas and extensions to improve the OpenJDK ecosystem
JEPs transition through various states:
  • Draft: Work in progress with open discussions
  • Posted: Entered into the JEP archive
  • Submitted: Declared to be ready for evaluation
  • Active: Approved for publication
  • Candidate: Accepted for inclusion in the OpenJDK roadmap
  • Funded: Judged by a group/area lead to be fully funded
  • Completed: Finished and delivered
  • Withdrawn: Taken out of circulation (perhaps for future re-inclusion)
  • Rejected: Not worth pursuing now or in the future

InfoQ: BndTools provides OSGi Development in Eclipse

http://bndtools.org/doc/tutorials/

BND 

Bnd is an extremely powerful but low-level tool for building and analysing OSGi bundles. It was developed by Peter Kriens (the OSGi Alliance's Technical Director) and is used by the OSGi Alliance to build their own suite of API, compatibility test and reference implementation bundles. As a low-level tool it is easily embeddable and can be called directly from the command line, used as an Ant task, or embedded in Maven and IDEs.

Bndtools

Bndtools uses bnd as its "engine". All of the smarts are in bnd essentially, and Bndtools just figures out when it should call bnd and presents the results nicely. Because many other tools embed bnd, the descriptor files used by bnd have almost become a de-facto standard, meaning that it is easy for a Bndtools developer to collaborate with developers using other tools, or to migrate permanently to another tool if they choose.

Eclipse PDE

Eclipse PDE is another OSGi development environment based on Eclipse.

PDE follows a different philosophy to bnd and Bndtools, called "manifest first". In PDE you directly edit the MANIFEST.MF file that goes straight into the bundle without any post-processing. Our philosophy is that the MANIFEST.MF should be treated almost like a compiler output: i.e., it should be generated from a simpler source artefact. This is important because a full MANIFEST.MF contains quite a lot of duplicate information, or information that should be derived directly from the Java code, for example the list of package-level dependencies. Manually editing such information is laborious and error-prone.

JRebel

  • JRebel is a powerful tool for speeding up redeployment of code during development, but it does not provide any kind of module system, either at runtime or at build time. 
  • JRebel is about getting your code out of the IDE and into the Java EE application server as quickly as possible.
  • JRebel is not particularly useful when developing for OSGi.

Maven

For Maven users the most popular approach to OSGi development is to use the Maven Bundle Plugin, which is another tool that embeds bnd. Bndtools integrates with Maven via this plugin and via M2Eclipse. In this scenario, M2Eclipse subsumes responsibility for managing the build dependencies (in the POM, obviously) and for actually building bundles, but Bndtools continues to add value by providing a way to edit and analyse bundle descriptors and dependencies, and a way to set up and execute run configurations.

InfoQ: Scrum Extensions Update - 4th Quarter 2011

Scrum is Open for Modification and Extension - News - Scrum.org
scrum extension proposal process

Extension Name                                          Proposed By                     Date Proposed
Scrum Basic                                             David Starr                     10/05/11
ATDD (Acceptance Test Driven Development) Sprint Plan   Ralph Jocham                    10/30/11
Integration Scrum                                       Caesar Ramos & Kate Terlecka    11/25/11

I think the Scrum guys have just gone too far away from the original place where Scrum and Agile were born.

28.11.11

Fit: Framework for Integrated Test

Wiki: Welcome Visitors

I got this web site from JUnit Recipes.

Manning: JUnit Recipes


It is a spreadsheet- or table-based way to write Customer Tests, and may be the next big wave in testing.

Customer Tests?

Using Customer Tests to Drive Development
What is Extreme Programming? | xProgramming.com

To be honest, this concept is new to me so far.


As part of presenting each desired feature, the XP Customer defines one or more automated acceptance tests to show that the feature is working. The team builds these tests and uses them to prove to themselves, and to the customer, that the feature is implemented correctly. Automation is important because in the press of time, manual tests are skipped. That’s like turning off your lights when the night gets darkest.
The best XP teams treat their customer tests the same way they do programmer tests: once the test runs, the team keeps it running correctly thereafter. This means that the system only improves, always notching forward, never backsliding.
--
Customer test-driven development (CTDD), also known as story test-driven development (SDD), consists of driving projects with tests and examples that illustrate the requirements and business rules. What do I mean when I say ‘customer tests’? Basically, anything beyond unit and integration (or contract) testing, which are done by and for the programmers, testing small units of code and their interaction. I use the term ‘customer’ in the XP sense, meaning product owners and people on the business side who specify features to be delivered. Customer tests may include functional, system, end-to-end, performance, load, stress, security, and usability testing, among others. These tests show the customers whether the delivered code meets their expectations.

Some Useful Links for JUnit 4: Parameterized Test

JUnit 4 Tutorial 6 – Parameterized Test
Abhi On Java: Unit Testing with JUnit 4.0
Writing a parameterized JUnit test « Our Craft

1. @RunWith(Parameterized.class)
2. Constructor with parameter(s)
3. @Parameters for a static method which returns Collection<Object[]>

Parameterized Unit Test is the same as the Data-Driven Test Suite idea in JUnit Recipes (4.8)
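
A minimal sketch of those three steps; the test subject (plain addition) is arbitrary:

 import java.util.Arrays;
 import java.util.Collection;

 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;

 import static org.junit.Assert.assertEquals;

 @RunWith(Parameterized.class)                    // 1. the Parameterized runner
 public class AdditionTest {
     private final int a;
     private final int b;
     private final int expectedSum;

     public AdditionTest(int a, int b, int expectedSum) {   // 2. constructor with parameters
         this.a = a;
         this.b = b;
         this.expectedSum = expectedSum;
     }

     @Parameters                                  // 3. static method returning Collection<Object[]>
     public static Collection<Object[]> data() {
         return Arrays.asList(new Object[][] {
             { 1, 1, 2 },
             { 2, 3, 5 },
             { -1, 1, 0 },
         });
     }

     @Test
     public void addsBothOperands() {
         assertEquals(expectedSum, a + b);
     }
 }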

Biased Locking, OSR, and Benchmarking Fun | Javalobby


Java Lock Implementations
Dr. Cliff Click's Blog | Azul Systems
What the heck is OSR and why is it Bad (or Good)? by Dr. Cliff Click | Azul Systems: Blogs
A short conversation on Biased Locking by Dr. Cliff Click | Azul Systems: Blogs

  • In the last post I concluded, based on my experiments, that biased locking was no longer necessary on modern CPUs. 
  • That conclusion was not valid, because the experiment did not take account of some JVM warm-up behaviour that I was unaware of.
  • Cliff Click pointed out that it is much harder for a runtime to optimise a loop part way through, and especially difficult if nested.  For example, bounds checking within the loop may not be possible to eliminate.
  • Dave Dice pointed out that Hotspot does not enable objects for biased locking in the first few seconds (4s at present) of JVM startup.
  • -XX:BiasedLockingStartupDelay=0
My (Martin's) tests in the last post are invalid for the testing of an un-contended biased lock, because the lock was not actually biased.  If you are designing code following the single writer principle, and therefore having un-contended locks when using 3rd party libraries, then having biased locking enabled is a significant performance boost.
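
A minimal sketch of the kind of micro-benchmark in question: an un-contended synchronized block, repeated so warm-up effects become visible. Run it with -XX:BiasedLockingStartupDelay=0 so the lock can actually be biased from the start; iteration counts are arbitrary.

 public class UncontendedLockBench {
     private static long counter = 0;
     private static final Object lock = new Object();

     public static void main(String[] args) {
         for (int run = 0; run < 5; run++) {          // repeat runs to expose warm-up behaviour
             long start = System.nanoTime();
             for (int i = 0; i < 100_000_000; i++) {
                 synchronized (lock) {                // un-contended: only one thread ever locks
                     counter++;
                 }
             }
             long elapsedMs = (System.nanoTime() - start) / 1_000_000;
             System.out.println("run " + run + ": " + elapsedMs + " ms (counter=" + counter + ")");
         }
     }
 }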

Upgraded to Xtext 2.1: first impressions | Javalobby


Xtext
Xtext success story at Google | EclipseCon North America 2012

With Xtext 2.1, crafting a feature-rich domain-specific language for the JVM is a matter of implementing only two small scripts.


Xtext 2.x, part of the Eclipse Indigo release, provides a solid framework for creating Domain-Specific Languages. With only a few clicks, Xtext is capable of generating language interpreters and full-blown editors, all from a single grammar definition.

27.11.11

InfoQ: Paul Clements appointed Vice President of BigLever

  • One of the few companies focusing on product line engineering

Dr. Paul Clements


Paul Clements

Senior Member of Technical Staff

Key Responsibilities

Areas of interest include (1) software architecture, and the selection, evaluation, representation, and documentation of software architectures, and (2) software product lines, and their creation, sustainment and evolution, and the strategic capabilities they bring to an enterprise.

Professional Background

Before coming to the SEI in 1994, Dr. Clements worked for the U.S. Naval Research Laboratory in Washington. There, he participated in (and eventually led) the Software Cost Reduction or "A-7" project. SCR produced and validated a methodology for hard-real-time embedded software development for systems with long life-cycles by re-designing and re-implementing the avionics software for the Navy's A-7E aircraft. SCR pioneered techniques in modular software design, requirements engineering and specification, software architecture and architectural structures, interface specification and documentation, and real-time performance engineering.

Software Architecture in Practice, 2nd Edition (2003)
Constructing Superior Software (Software Quality Institute Series) (1999)
Software Product Lines: Practices and Patterns (2001)
Evaluating Software Architectures: Methods and Case Studies (2001)
Documenting Software Architectures: Views and Beyond (2002)

ATAM (Architecture Tradeoff Analysis Method)

The Architecture Tradeoff Analysis Method (ATAM) is a method for evaluating software architectures relative to quality attribute goals. ATAM evaluations expose architectural risks that potentially inhibit the achievement of an organization's business goals. The ATAM gets its name because it not only reveals how well an architecture satisfies particular quality goals, but it also provides insight into how those quality goals interact with each other—how they trade off against each other.

Software Product Line Engineering

SPLC.net l Software Product Line Conferences
Software product line - Wikipedia, the free encyclopedia

Software product lines, or software product line development, refers to software engineering methods, tools and techniques for creating a collection of similar software systems from a shared set of software assets using a common means of production.


Why is that of interest for the software architecture community? Software Product Lines are increasingly gaining momentum within the industry, as they foster systematic re-use in program families. So-called product families comprise products or solutions that share a common domain, address common markets, and reveal a lot of commonalities. PLE might not be interesting for one-off application development, but it definitely is for many industrial systems or Commercial-off-the-Shelf products.

Java techniques – getting the String class from a List via reflection

Same topic – Java techniques – getting the String class from List<String> via reflection
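
The trick behind that link, sketched from memory: generic type arguments are erased at runtime, but a field's declared generic signature survives in the class file, so reflection can recover String from a List<String> field. The Holder class here is hypothetical.

 import java.lang.reflect.Field;
 import java.lang.reflect.ParameterizedType;
 import java.util.List;

 public class GenericTypeDemo {
     static class Holder {
         List<String> names;   // the declared generic type survives erasure in the class metadata
     }

     public static void main(String[] args) throws Exception {
         Field field = Holder.class.getDeclaredField("names");
         ParameterizedType listType = (ParameterizedType) field.getGenericType();
         Class<?> elementClass = (Class<?>) listType.getActualTypeArguments()[0];
         System.out.println(elementClass);   // prints: class java.lang.String
     }
 }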



Refactoring Spikes as a Learning Tool and How a Scheduled Git Reset Can Help | Javalobby


Home of the Mikado Method
Large and Small Scale Refactoring « Paul Dyson’s Blog

It is very good to know about the "just a little further" problem and the way to deal with this type of problem. I think many people experience these problems.

Mikado Method

When large portions of the code are changed, the whole team has to be engaged, and everyone needs to know the goal to be able to help out and understand. The team also has to communicate outside of its immediate vicinity, especially if the change is based on an external goal.

The History of Mikado Method


I like this story very much, at least the descriptive statements within it.
  • The Technical Debt was everywhere.
  • All of a sudden we were supposed to deliver to a new client and the interest on our loan went through the roof.
  • This would double our debt, but decrease the interest temporarily, until the next client would drop in. 
  • So we went on, and every day on the daily stand-up we said that “We just have a couple more errors to fix, but we will probably be done today or tomorrow”. Yeah, right…
  • We had realized that we had been walking with the refactoring dependencies, i.e. we tried to refactor a part of the code that depended on another refactoring already having been done, which it hadn't.
  • To do that, you naively start with the thing you want to achieve and draw that on your Mikado Graph, typically a business goal.

Refactoring or Rewriting?

Many people like to use "refactoring" even though they are doing rewriting, for one of two reasons, or maybe both:
  • Refactoring sounds cool as a term, while rewriting sounds scary to your boss or your colleagues.
  • They do things based on the code, without any high level design or discussion.
I have a rule of thumb:
  • If something you change on the code base would cause compilation errors and you can't fix them in a minute, you are doing rewriting instead of refactoring.
According to my rule, the Mikado guys are doing rewriting.

Why are the terms important?

Using the correct words to name the work you are doing, or are going to do, is very important, because the correct words will lead you to correct information, better planning, and more useful resources. However, if you name it incorrectly, you will probably get lost.

Naming the work as refactoring, you will turn to Kent Beck's book for some help, but you won't get much useful information from it. You might think it won't take long, just like the Mikado guys said, "will probably be done today or tomorrow". You don't think you need a plan, which leads to the "Software Hydra" problem.

How should we handle rewriting?

Unfortunately, there is no book named "Rewriting" yet. But at least we know it is not refactoring. So we need these things:
  • Preparation for long-term work. For example, two weeks to one month. You know, refactoring should only take you a couple of minutes.
  • Preparation for risks. There are no risks in refactoring: the system will work exactly the same as before if you only apply refactorings. When doing rewriting, you are going to face risks: breaking the compilation, breaking some features, or even introducing unknown issues. You can't hide the risks; before you can handle them, you need to know them.
  • You need a plan.
    • Mikado Graph could be a good tool for planning. I just don't really understand what "Business Goal" is. We all know we want a better code base, less duplicated code, better structure, good code patterns, clean calling paths. But we just can't define them.
    • You need to take the "old" system architecture and "new" system architecture into account. With refactoring, system architecture won't be your business. You should never touch it at all. But if the code is a ball of mud, dude, you need to operate on the system architecture sometimes.
    • You need to identify the problems of the original code base, and prioritize them.
    • You need to write down which parts you are going to change. Here, Mikado Graph could be a tool, because you need to find out the leaves.
  • Naive start: I think the Mikado guys' idea of a "Naive Start" is good. Even though you need to plan, don't plan too much. Or, in other words, don't think too much. Do it, touch it, change it, and see what happens. There are some risks you must take. However, software development is something "soft"; you can revert back to the starting point if you like. The biggest risk is timing. Don't spend too much time before you realize something is a mistake. And planning too much is a big mistake, which you should know beforehand.
  • Add more tests, and try your best to shorten the test cycle. Unit testing is not always something you can choose, but apply it whenever you can. If you can't, make the tests automatic. If you can't do that either, just test by hand.
  • Apply the refactoring techniques whenever possible. I believe refactoring techniques can handle 60% of the work.

InfoQ: Debate: The Annoying Detail

Depending on your viewpoint, Simon is either a software architect who codes or a software developer who understands architecture. When he's not developing software with .NET or Java, Simon can usually be found consulting, coaching or training. Simon has also written books about Java, presented at industry events and has put together a training course called Software architecture for developers, which is based upon his software architecture writing at Coding the Architecture.

Twitter: @simonbrown 
E-mail: simon.brown at codingthearchitecture.com 

Screaming Architecture

Architectures are not (or should not) be about frameworks. Architectures should not be supplied by frameworks. Frameworks are tools to be used, not architectures to be conformed to. If your architecture is based on frameworks, then it cannot be based on your use cases.

I don't really like Uncle Bob's idea. His thought about architecture above is not wrong, but it is not entirely right either; at best it is only partially right.

When thinking about how to build the software, I would usually think from two different angles:
  • Use cases based, or feature driven
  • Libraries based, or framework driven
Both of these angles make sense to me, and I don't think I should abandon either of them. In fact, I usually begin with the use-cases angle, and end with the framework thought.

Software development has a lot of characteristics as an industry. I would name two of them here:
  • Software, including its source code and libraries, could be replicated in some way.
  • The industry is far from mature.
These two industry characteristics make software development very different from civil engineering. In civil engineering, only top architects think about their frameworks, while in software development, most software projects need to choose their frameworks. Even worse, these frameworks are dynamic; they change all the time. The same goes for libraries, and even development platforms.

An Annoying Detail

I talked about how there are a number of "classic" software design techniques from the pre-agile era that are being used less and less. For example, things like UML, class-responsibility-collaboration cards and component-based design. This is a shame because some of these techniques can complement an agile way of working and would perhaps prevent some wheels from being reinvented. If people don't know about these techniques though, how will they adopt them? 

It's quite interesting. UML and CRC cards are my favorites. And I actually share the same idea with Mr. Brown. Hey, I am going to be biased from here on.

You can plug in the delivery mechanism to the application

Architecture is about the big picture

That's right, the annoying detail is actually a large chunk of the system and, for me, architecture is about more than just what's contained within "the application". Structure is very important, but what about that tricky stuff like non-functional requirements, the actual delivery mechanism (technologies, frameworks, tools, APIs, etc), infrastructure services (e.g. logging, exception handling, configuration, etc), integration services (internal and external), satisfying any environmental constraints (e.g. operations and support), etc. For me, this is what "architecture" is all about and *that's* "the whole enchilada".

How to Fail With Drools or Any Other Tool/Framework/Library | Javalobby

InfoQ: Simple Made Easy
Programming Isn't Fun Any More
Do I still hate SOA? on Vimeo

About Drools


Drools - The Business Logic integration Platform
Drools 5 introduces the Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing. It's been designed from the ground up so that each aspect is a first class citizen, with no compromises.

Purpose of C. Dannevig's Team (Known IT)
They decided to switch to the Drools rule management system (a.k.a. JBoss Rules) v.4 from their homegrown rules implementation to centralize all the rules code in one place, to get something simpler and easier to understand, and to improve the time to market by not requiring a redeploy when a rule is added.

In a word: the team trusted Drools much more than their homegrown solution, with barely any knowledge of the tool beyond what they had learned from JBoss's ads.

Problems and Reasons

However, Drools turned out to be more of a burden than a help, for the following reasons:

  • Too little time and resources were provided for learning Drools, which has a rather steep learning curve due to being based on declarative programming and rules matching (some background), which is quite alien to normal imperative/OO programmers.
  • Drools’ poor support for development and operations  – IDE only for Eclipse, difficult debugging, no stacktrace upon failure
  • Their domain model was not well aligned with Drools and required a lot of effort to make it usable by the rules
  • The users were used to and satisfied with the current system and wanted to keep the parts facing them, such as the rules management UI, instead of Drools' own UI, thus decreasing the value of the software (while increasing the overall complexity, we could add)

The Result

In the end they removed Drools and refactored their code to get all rules in one place, using only plain old Java – which works pretty well for them.

Lessons

  • Their experience has a lot in common with many other cases where a tool, a framework, or a library is introduced to solve some tasks and problems but turns out to be more of a problem itself.
    • Think twice - or three or four times - before introducing a heavyweight tool or framework. Especially if it requires a new and radically different way of thinking or working.
    • Using an out of the box solution sounds very *easy* – especially at sales meetings – but it is in fact usually pretty *complex*.
    • (Rich Hickey: InfoQ: Simple Made Easy) We should strive to minimize complexity instead of prioritizing the relative and misleading easiness (in the sense of “easy to approach, to understand, to use”).
    • The “I’ll do it all for you, be happy and relax” tool turns into a major obstacle and source of pain

Cost of introducing a new library, framework, or tool

  • Complexity
  • Competence
  • Development
  • Operations
  • Defects
  • Longevity
  • Dependencies

My Thought

Basically, this is a good post, and I like its analysis. However, I don't like the conclusion. In particular, I don't like the idea of "thinking" and then returning to the original tools.

The problems the Known IT team met could be generalized into these two:
  • Unexpected results
  • Changing costs
These problems could be resolved by introducing a prototype development team with two to five senior developers, focusing on the new tool for about two months. Ideally, they could develop a real project using the new tool. After that, the team would gain a lot of knowledge about the new tool without wasting too much money and time, and would make a better decision based on their real experience within the team/company.

My suggestion is: don't think, do it, with minimal cost.

Java’s missing unsigned integer types | Javalobby

Java’s missing unsigned integer types | Javalobby
Is there a Java library for unsigned number type wrappers? - Stack Overflow
Primitives - Apache Commons Primitives

All this is unnecessary if you do the calculations using the next-bigger signed type and cut off the upper part


long x = 42, m = 5, t = 18;
long y = (x * m + t) & 0xFFFFFFFFL;   // note the L suffix: an int-typed 0xFFFFFFFF would sign-extend to -1 and mask nothing

.. emulating them in Java with wrapper types would have been more problematic than dealing with the problem directly on each single occasion.
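
A small sketch of this "next-bigger signed type" trick for the two most common cases; the method names are my own:

 public class Unsigned {
     // Interpret a signed byte as unsigned (0..255)
     static int unsignedByte(byte b) {
         return b & 0xFF;
     }

     // Interpret a signed int as unsigned (0..2^32-1) using long
     static long unsignedInt(int i) {
         return i & 0xFFFFFFFFL;
     }

     public static void main(String[] args) {
         byte b = (byte) 0xF0;                   // -16 as a signed byte
         System.out.println(unsignedByte(b));    // 240
         int i = 0xFFFFFFFE;                     // -2 as a signed int
         System.out.println(unsignedInt(i));     // 4294967294
     }
 }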


Two Generally Useful Guava Annotations | Javalobby

The Guava project contains several of Google's core libraries that we rely on in our Java-based projects: collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and so forth.
The latest release is 10.0.1, released October 10, 2011.

Guava Common Annotations


Annotation Types Summary
  • Beta – Signifies that a public API (public class, method or field) is subject to incompatible changes, or even removal, in a future release.
  • GwtCompatible – The presence of this annotation on a type indicates that the type may be used with the Google Web Toolkit (GWT).
  • GwtIncompatible – The presence of this annotation on a method indicates that the method may not be used with the Google Web Toolkit (GWT), even though its type is annotated as GwtCompatible and accessible in GWT.
  • VisibleForTesting – An annotation that indicates that the visibility of a type or member has been relaxed to make the code testable.

From Two Generally Useful Guava Annotations | Javalobby

  • Two of them are specific to use with GWT (GwtCompatible and GwtIncompatible)
  • But the other two can be useful in a more general context (Beta and VisibleForTesting)

Conclusion by Dustin Marx

Guava provides two annotations that are not part of the standard Java distribution, but cover situations that we often run into during Java development. The @Beta annotation indicates a construct in a public API that may be changed or removed. The @VisibleForTesting annotation advertises to other developers (or reminds the code's author) when a decision was made for relaxed visibility to make testing possible or easier.
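
A minimal sketch of how the two general-purpose annotations might be applied; the class and methods are hypothetical:

 import com.google.common.annotations.Beta;
 import com.google.common.annotations.VisibleForTesting;

 public class OrderService {
     @Beta   // public API, but may change incompatibly or disappear in a future release
     public void submitDraftOrder(String orderId) {
         validate(orderId);
     }

     @VisibleForTesting   // would otherwise be private; relaxed so tests can call it directly
     static void validate(String orderId) {
         if (orderId == null || orderId.isEmpty()) {
             throw new IllegalArgumentException("orderId must be non-empty");
         }
     }
 }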

Clojure: expectations - scenarios | Javalobby


Expectations


expectations is a minimalist's testing framework
  • simply require expectations and your tests will be run on JVM shutdown.
  • what you are testing is inferred from the expected and actual types
  • stacktraces are trimmed of clojure library lines and java.lang lines
  • focused error & failure messages

Clojure

Clojure - home

Clojure is a dynamic programming language that targets the Java Virtual Machine (and the CLR, and JavaScript). It is designed to be a general-purpose language, combining the approachability and interactive development of a scripting language with an efficient and robust infrastructure for multithreaded programming. Clojure is a compiled language - it compiles directly to JVM bytecode, yet remains completely dynamic. Every feature supported by Clojure is supported at runtime. Clojure provides easy access to the Java frameworks, with optional type hints and type inference, to ensure that calls to Java can avoid reflection.

Clojure is a dialect of Lisp, and shares with Lisp the code-as-data philosophy and a powerful macro system. Clojure is predominantly a functional programming language, and features a rich set of immutable, persistent data structures. When mutable state is needed, Clojure offers a software transactional memory system and reactive Agent system that ensure clean, correct, multithreaded designs.