24.12.13

http://java.dzone.com/articles/infrastructure-scale-apache
http://java.dzone.com/articles/handling-big-data-hbase-part-5

21.12.13

http://ac31004.blogspot.com/2013/10/installing-hadoop-2-on-mac_29.html
http://apmblog.compuware.com/2013/02/19/speeding-up-a-pighbase-mapreduce-job-by-a-factor-of-15/
http://software.intel.com/en-us/articles/hadoop-and-hbase-optimization-for-read-intensive-search-applications
https://labs.ericsson.com/blog/hbase-performance-tuners

20.12.13


  • http://gbif.blogspot.com/2012/07/optimizing-writes-in-hbase.html
  • http://ronxin999.blog.163.com/blog/static/422179202013328105833745/

17.12.13

Install HBase/Cloudera CDH4



  • yum --nogpgcheck localinstall cloudera-cdh-4-0.x86_64.rpm
  • yum install zookeeper
===============================================================================================================================================
 Package                       Arch                    Version                                            Repository                      Size
===============================================================================================================================================
Installing:
 zookeeper                     noarch                  3.4.5+24-1.cdh4.5.0.p0.23.el6                      cloudera-cdh4                  3.7 M
Installing for dependencies:
 bigtop-utils                  noarch                  0.6.0+186-1.cdh4.5.0.p0.23.el6                     cloudera-cdh4                  8.2 k

  • yum install zookeeper-server
Dependencies Resolved

===============================================================================================================================================
 Package                               Arch                  Version                                        Repository                    Size
===============================================================================================================================================
Installing:
 zookeeper-server                      noarch                3.4.5+24-1.cdh4.5.0.p0.23.el6                  cloudera-cdh4                4.9 k
Installing for dependencies:
 foomatic                              x86_64                4.0.4-1.el6_1.1                                rhel-cd                      251 k
 foomatic-db                           noarch                4.0-7.20091126.el6                             rhel-cd                      980 k
 foomatic-db-filesystem                noarch                4.0-7.20091126.el6                             rhel-cd                      4.3 k
 foomatic-db-ppds                      noarch                4.0-7.20091126.el6                             rhel-cd                       19 M
 pax                                   x86_64                3.4-10.1.el6                                   rhel-cd                       69 k
 perl-CGI                              x86_64                3.51-127.el6                                   rhel-cd                      207 k
 perl-Test-Simple                      x86_64                0.92-127.el6                                   rhel-cd                      110 k
 redhat-lsb                            x86_64                4.0-3.el6                                      rhel-cd                       24 k
 redhat-lsb-graphics                   x86_64                4.0-3.el6                                      rhel-cd                       12 k
 redhat-lsb-printing                   x86_64                4.0-3.el6                                      rhel-cd                       11 k

  • service zookeeper-server init
No myid provided, be sure to specify it in /var/lib/zookeeper/myid if using non-standalone
  • service zookeeper-server start
JMX enabled by default
Using config: /etc/zookeeper/conf/zoo.cfg
Starting zookeeper ... STARTED
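
A quick sanity check from Java (a minimal sketch; it assumes the default client port 2181 and the ZooKeeper client jar on the classpath, and the znode listing is only an example):

import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkSanityCheck {
    public static void main(String[] args) throws Exception {
        final CountDownLatch connected = new CountDownLatch(1);
        // Connect to the freshly started standalone server.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 5000, new Watcher() {
            public void process(WatchedEvent event) {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            }
        });
        connected.await();

        // A clean install should show at least the "zookeeper" znode under /.
        List<String> children = zk.getChildren("/", false);
        System.out.println("znodes under /: " + children);

        zk.close();
    }
}

Alternatively, echo ruok | nc localhost 2181 should answer imok.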

  •  yum install hadoop-conf-pseudo
===============================================================================================================================================
 Package                                     Arch                Version                                      Repository                  Size
===============================================================================================================================================
Installing:
 hadoop-conf-pseudo                          x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              8.0 k
Installing for dependencies:
 bigtop-jsvc                                 x86_64              1.0.10-1.cdh4.5.0.p0.23.el6                  cloudera-cdh4               27 k
 hadoop                                      x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4               17 M
 hadoop-hdfs                                 x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4               12 M
 hadoop-hdfs-datanode                        x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              4.8 k
 hadoop-hdfs-namenode                        x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              4.9 k
 hadoop-hdfs-secondarynamenode               x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              4.9 k
 hadoop-mapreduce                            x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              9.9 M
 hadoop-mapreduce-historyserver              x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              4.9 k
 hadoop-yarn                                 x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              8.5 M
 hadoop-yarn-nodemanager                     x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              4.8 k
 hadoop-yarn-resourcemanager                 x86_64              2.0.0+1518-1.cdh4.5.0.p0.24.el6              cloudera-cdh4              4.8 k
 nc                                          x86_64              1.84-22.el6                                  rhel-cd                     57 k
 parquet                                     noarch              1.2.5-1.cdh4.5.0.p0.17.el6                   cloudera-cdh4               13 M
 parquet-format                              noarch              1.0.0-1.cdh4.5.0.p0.20.el6                   cloudera-cdh4              489 k

  • Skip the Thrift server for now.
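
Once the usual namenode format and service starts are done (not captured here), a small Java check can confirm the client picks up the pseudo-distributed config. A sketch only; the fs.defaultFS value assumes the stock hadoop-conf-pseudo setting of hdfs://localhost:8020 and the probe path is arbitrary:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSanityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Picked up automatically if /etc/hadoop/conf is on the classpath;
        // set explicitly here so the sketch is self-contained.
        conf.set("fs.defaultFS", "hdfs://localhost:8020");

        FileSystem fs = FileSystem.get(conf);

        // Create an empty probe file, list /, then clean up.
        Path probe = new Path("/tmp/hdfs-sanity-check.txt");
        fs.create(probe, true).close();

        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }

        fs.delete(probe, false);
        fs.close();
    }
}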





15.12.13

12.12.13

HBase


  • http://research.google.com/archive/bigtable.html
  • http://blog.cloudera.com/blog/2012/06/hbase-write-path/
  • http://blog.sematext.com/2012/07/16/hbase-memstore-what-you-should-know/
http://java.dzone.com/articles/how-google-does-code-review

9.12.13

http://www.iteye.com/news/28540-5-linux-shell-commandline-website

4.12.13

What is a good software product?

We all know that we want to build good software products. But what is a good software product? Traditionally, people believe a good software product is one that matches the customer's requirements, so we spend a lot of time collecting requirements from the customer and writing contracts based on them.

But is that really true? How many customers end up unhappy with a software product that meets every requirement on paper? When that happens, some would say we simply didn't understand the customers' requirements well enough.

Think again. Do customers really know what they want beforehand? They don't. That is why XP advocates developing software together with a representative from the customer side, so that the customer can give feedback to the development team and help the team eventually build the software the way they really want it. That is a good thing, because the customer gradually realizes what they want as the product grows.

However, there are two issues with this kind of development model:

  • It is not the customers' natural duty to help the development team. They know they will use the system after it is finished and handed over, but it is still not their job by nature.
  • The customer can pull the development in their own direction, making the system harder to build, while missing many good features that could have been built easily.
With those concerns in mind, we have Scrum and a PO who works with the development team. This solves the first issue, because the PO is responsible for building the system, but it does not necessarily solve the second one: the PO may still drive the team in the direction they want, not the direction the development team is good at.

You may say that we surely want to build the system the PO or the customers want, not a system the engineers like to build. Really? Have you ever heard that a good engineer is 1000 times more productive than a bad one? If we drag the team in a direction the engineers are not comfortable with, we risk the productivity of the whole team. Another issue, and the more practical one, is that one thing can be very difficult to build while another is very easy, and only the people who know the technologies in detail can tell which is which. Having the PO drive the whole development can ignore that difference.

In my mind, software development is very detail-oriented. For example, choosing JMS or Kafka can make a huge difference to the system, both in architecture and in user experience; a system built on HDFS and MapReduce can differ enormously from one built on Vertica. That knowledge is far beyond what the customer or the PO can understand, even if we explain a little. So purely top-down business requirements can be very dangerous for a development team.

You may ask, how then can we derive the real requirements from the market? I would ask back: what are the real requirements from the market? Before Big Data solutions emerged, did we have those data mining and data analytics requirements? Yes, we did, but only after Big Data arrived did those requirements become overwhelmingly important. Why? Only a requirement that can be done is a real requirement; otherwise we are just chasing millions of things in the world. For example, would it be a good idea to search the Internet for pictures containing exactly the same person as in a given picture? I am sure millions of users are eager for that feature, but it is not a real requirement for a search engine, because it is not practical for a search engine.

Back to the topic: what is a good software product? I would say a good software product is one that the customer is willing to pay for. It may or may not meet the requirements the customer asked for, but it is definitely useful to the customer and better than what the customer could get from another vendor, so the customer wants to pay for yours.

Then how can we build such a good product? There are two parts to this definition:
  1. How do we know the product is useful to the customer?
  2. How do we build a solution that is (generally) better than those from others?
For the first question, unfortunately, we can get hints but never know exactly. Those hints come from the customers or from the POs, but we don't know exactly which features would be most useful to the customers, because they don't know either. And as I said above, not everything the customers want can be implemented, while there can be things that are easy to build and useful to the customers, which they don't realize until they see them.

The best way to find out what is useful to the customer is to have them try something that is possibly useful. If they like it and pay for it, you got it right. If they don't like it, you change it. It looks simple, but it is actually not easy to make it work.

To be continued....


2.12.13

http://java.dzone.com/articles/scaling-redis-and-rabbitmq

29.11.13

22.11.13

Task Management or Risk Management

Generally speaking, when we work on something and want to manage the work we are doing, we create tasks and manage them. This is the natural way to manage almost anything, and it is how software project management began.

However, it doesn't work well that way. Recently, one of my teams admitted that they couldn't continue a task the way they had originally chosen and had to take another approach. Some engineers felt bad about it, but I would call it a good thing, except that we waited far too long before giving up. Had we admitted the failure earlier and changed course earlier, it would have been good practice in my view.

But we didn't. The solution actually emerged a year ago and finally entered the PO's scope about 5 months ago. In between, some of our engineers did some research, so when we began the task we were quite confident. We planned a lot of tasks around it and tried to make the story cover everything we wanted, including all the functional and non-functional requirements. We even spent a lot of time planning a monitoring solution for the target system.

After all that effort (and, admittedly, some distraction from the original track), 5 months later we came to the conclusion that this is not the way we want to go. Five months! What took so long to discover that one approach had failed? Even with the distraction, we should have reached this conclusion within 5 person-days, because this was a high-risk item and we needed to fail fast and move on before investing too much time in it. Instead, we had the whole business unit waiting for the target system for far too long, and now we have to change our plan.

Software development is a task, but a very special one. It is by nature unpredictable and unrepeatable. In general, every task carries risk; some carry high risk, some lower. We simply can't manage these tasks the way we manage others, like construction. For example, if we plan 3 tasks in a row and the second one fails, what do we do with the third? We can only re-plan everything and choose another approach, just as we did here. Sometimes we can only let the whole project or target fail, admit the failure, and move on.

It is very dangerous if our management team does not see this nature of software development and tries to run it the traditional way. We need to apply a risk management strategy instead, which means we always expect tasks to fail and tackle the riskier work first to improve the odds of success. We can't really estimate a task, even though we have to. When a task is at risk, we need to prepare an alternative approach early, or even prepare for the failure itself. We need to try everything in the simplest way possible so that we don't pile more risk onto already risky tasks. When the solution is barely ready to ship, ship it; yes, you have to, because other risks are coming in. After you ship software the customer can actually use, then you can take more risks, and in this way you can try something risky and still get paid.

I often hear our PO or engineers say that we need to have everything covered in the task list. Yes, we would like that, but we simply can't. We need to admit the unpredictable nature of software development and build the task list along with the development, as we build up our knowledge. We don't want to wait, because waiting is a cost, and before we can deliver useful software we don't want to add unnecessary cost. We need to discuss and we need to plan, but only to a barely-good-enough level. For example, we can't call everybody in every time we need to discuss something; we should try more individual interactions instead of all-hands meetings. Some would ask, how can we keep everyone in sync? I would ask, why should we keep everybody in sync? We need to decouple the system to the point where most engineers only need to focus on the part they are working on. We also want to simplify the architecture so that it is easy to understand and to trace through the different parts of the system. Then we don't need to spend time on wide-spread meetings. If we need to transfer knowledge or share information, a few simple words can do the job. We also want to apply mature, popular solutions and use architecture and design patterns, so that we can describe the solution at a higher level while everybody knows exactly what happens underneath.

But first of all, we need to be fast: fail fast and get results fast.

In conclusion, software development cannot be managed as a list of ordinary tasks, only as a list of risks. We always need to be prepared for failures and changes.

13.11.13

http://java.dzone.com/articles/three-motivational-forces
http://java.dzone.com/articles/why-we-shouldnt-use-more

12.11.13

Hadoop


  • http://stackoverflow.com/questions/19843032/good-tutorial-on-how-install-hadoop-2-2-0-yarn-as-single-node-cluster-on-macos
  • http://java.dzone.com/articles/introducing-spring-yarn
  • http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html
  • http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
  • http://shaurong.blogspot.com/2013/11/hadoop-220-centos-64-x64.html
  • http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html

http://java.dzone.com/articles/real-time-search-and-anaytics

10.11.13

Projections in Vertica


  • http://www.vertica.com/2011/09/01/the-power-of-projections-part-1/
  • https://my.vertica.com/docs/5.1.6/HTML/index.htm#1299.htm
  • http://stackoverflow.com/questions/10211799/projection-in-vertica-database
  • Projections in Vertica | Baboon IT

Create Projections:
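
A minimal sketch of creating and refreshing a projection from Java over JDBC. The table and column names (sales, sale_id, customer_id, amount), the connection URL and the credentials are hypothetical, and the Vertica JDBC driver jar is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateProjectionSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:vertica://localhost:5433/testdb", "dbadmin", "");
             Statement stmt = conn.createStatement()) {

            // A simple projection over a hypothetical sales table, sorted for
            // customer lookups and segmented across all nodes.
            stmt.execute(
                "CREATE PROJECTION sales_by_customer (sale_id, customer_id, amount) "
              + "AS SELECT sale_id, customer_id, amount FROM sales "
              + "ORDER BY customer_id "
              + "SEGMENTED BY HASH(sale_id) ALL NODES");

            // Populate the new projection in the background.
            stmt.execute("SELECT START_REFRESH()");
        }
    }
}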

9.11.13

Done-done or always beta?

Some development teams advocate a done-done-with-high-quality delivery strategy, which means the engineers won't deliver their code to the main branch until the related feature has been tested thoroughly, including unit tests, integration tests, end-to-end tests and scale tests.

I know a team that has used this strategy for three years and released several versions of the software. The team is, unsurprisingly, not famous for good velocity, but they were quite confident in their quality until a customer crisis suddenly broke out. Many customers ran into serious issues, all kinds of issues, so the team had to stop developing new features and focus on fixing problems on the customer side.

Why does a team that has been advocating quality end up facing quality issues? There are many reasons, technical and non-technical. Setting the technical reasons aside for a while, I would blame the done-done strategy the most.

The done-done strategy is based on the belief that quality can be acquired through enough testing: if we sacrifice something like development time and spend more on tests, we can achieve a certain level of quality.

This is not true at all. Software development is very dynamic and affected by many factors. "Building software is by its very nature unpredictable and unrepetitive." It is not possible to set up criteria for a feature, run it through all of them, and then call it done; we simply don't know the real criteria yet.

I am not saying we shouldn't test the system; I am saying we shouldn't test it obsessively. Unfortunately, there is no clear boundary for non-obsessive testing. One thing we do know is that we should not ask the engineers to raise the bar to the so-called done-done-with-high-quality level. That amounts to asking for a promise, and to keep the promise we have to test obsessively until the managers give up and ask us to stop, so that nobody can be blamed for quality. Then the morale of the team keeps going down, the velocity goes down, and because we don't have enough iterations or enough time for final verification before delivering to the customers, the quality goes down too.

There is a better way, called always-beta. We emphasize the importance of velocity and call for dynamic development. We drive development based on risks instead of a static notion of quality. We review the software's functionality without requiring a perfect implementation, but we pay more attention to architecture and code quality, to make sure that changing or improving something stays easy. Simplicity is one of the most important features of the system: since we don't expect a perfect system, we are more likely to implement it in the simplest way and avoid most of the complex logic.

We also have the team deliver code more frequently, so that everyone on the team sees the system, feels the system, and runs the system. With more eyes, we can find more issues before they reach the customers. We always expect integration problems and fix them quickly. We also take seemingly simple errors seriously if our engineers think they are important and quality-related - yes, let the engineers tell us whether something is important or not. We keep refactoring, continuously improve the design and code quality, and keep the system as simple and readable as possible.

With the always-beta mindset, we have more time to focus on the things we don't know upfront, and that is usually where the real quality issues emerge.
http://java.dzone.com/articles/code-made-me-cry
http://java.dzone.com/articles/what-if-java-collections-and

8.11.13

Deletion in Vertica



DELETE_VECTORS

Holds information on deleted rows to speed up the delete process.
Column Name          Data Type   Description
NODE_NAME            VARCHAR     The name of the node storing the deleted rows.
SCHEMA_NAME          VARCHAR     The name of the schema where the deleted rows are located.
PROJECTION_NAME      VARCHAR     The name of the projection where the deleted rows are located.
STORAGE_TYPE         VARCHAR     The type of storage containing the delete vector (WOS or ROS).
DV_OID               INTEGER     The unique numeric ID (OID) that identifies this delete vector.
STORAGE_OID          INTEGER     The unique numeric ID (OID) that identifies the storage container that holds the delete vector.
DELETED_ROW_COUNT    INTEGER     The number of rows deleted.
USED_BYTES           INTEGER     The number of bytes used to store the deletion.
START_EPOCH          INTEGER     The start epoch of the data in the delete vector.
END_EPOCH            INTEGER     The end epoch of the data in the delete vector.

PURGE
Purges all projections in the physical schema. Permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained.
Syntax
PURGE()
Privileges
  • Table owner
  • USAGE privilege on schema
Note
  • PURGE() was formerly named PURGE_ALL_PROJECTIONS. HP Vertica supports both function calls.
Caution: PURGE could temporarily take up significant disk space while the data is being purged.
See Also
Purging Deleted Data in the Administrator's Guide
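
A small sketch (same hypothetical JDBC connection as the projection example above) that inspects DELETE_VECTORS and then purges:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PurgeSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:vertica://localhost:5433/testdb", "dbadmin", "");
             Statement stmt = conn.createStatement()) {

            // How much deleted data is still sitting in each projection?
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT projection_name, storage_type, deleted_row_count, used_bytes "
                  + "FROM delete_vectors")) {
                while (rs.next()) {
                    System.out.printf("%s (%s): %d rows, %d bytes%n",
                            rs.getString(1), rs.getString(2), rs.getLong(3), rs.getLong(4));
                }
            }

            // Reclaim the space, up to the Ancient History Mark.
            stmt.execute("SELECT PURGE()");
        }
    }
}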

5.11.13

30.10.13

http://java.dzone.com/articles/reducing-integration-total

29.10.13

http://java.dzone.com/articles/chasing-bottleneck-true-story

23.10.13

http://java.dzone.com/articles/distributed-caching-dead-long

18.10.13

http://java.dzone.com/articles/10-reasons-web-developers

6.10.13

Effective System Testing

With Agile, we all recognize the value of unit tests. System testing has stepped back quite a bit, but it is still important to us. We want to find issues and fix them as quickly and as accurately as possible.

For tests, we all know this theory:

  • If you change one thing and a problem appears, you are quite sure of the suspect.
  • If you change two or more things at the same time and a problem appears, you are just confused.
So it is a good thing to change one thing at a time. Well-funded teams usually have one or more test systems that just run forever, and they call this kind of test system a longevity system.

However, let's do some simple math.

Assume there are 7 defects in your system, but you have no idea whether they exist. As a team, you need to find and fix them within a given period, say a week, and make sure you have a high-quality product before you release to the customers. How can you achieve that goal?

If you can only handle one thing at a time, and you are lucky enough to encounter all 7 defects one by one and fix them one by one, then you can do one per day on average.

However, we are seldom so lucky. Especially if you are "careful", you won't find the defects until they emerge in your customers' environments, and that will hit you back hard.

And remember, the one-thing-at-a-time approach may work well for a sustaining project, where you are facing a system that has been in the market for 20 years. But for new products? It will never work. Slowing down the verification and fixing process only pushes unsolved defects into the market, far more of them than you could imagine.

So it is not that we don't know about the one-at-a-time principle; we simply can't afford it. To make a good product and test your system effectively, you have to do the following:
  • Make parallel testing possible. Decoupling is the key. Always think about how you are going to test your system and how you will know whether a given part of it is working.
  • Expose problems, including the unknowns, as much as possible. Failing fast is the way to make things work. When in doubt, surface the problem. This helps you discover the unknowns as soon as possible; once the unknown becomes known, you can provide a solution.
  • Be sensitive. Whenever you work on something, you need a theory of what should happen. When something behaves differently, either your theory is wrong or the system is not implemented the way you think. Be sensitive to both your calculations and the system you build; you can improve either, and both are very important. Many people think I have very good troubleshooting skills, but actually I have better calculation skills and a very good sense of what is right and wrong.
  • Be confident and determined. Since the system is well decoupled, you can be sure one thing will not interfere with another; since the unknowns are exposed as much as possible, your system is built on things you know well, or you catch surprises right away; and since you have your theory and have calculated it well, the system should work the way you expect. Now you can be confident and determined. This is even more important than you might expect, because nowadays we build systems on top of many open source components, or commercial third-party ones. We expect them to provide good quality and to work exactly as their documentation says. Usually they do, but sometimes they don't. If you are not confident enough in your own system, you end up spending a lot of time tracing problems that are really not your fault.
System testing is not easy, but with the points above you can make it much easier and more effective.

My colleague John M once told me: never slow down the development; speed up the verification instead. His words make a lot of sense to me.

3.10.13

Unzip: skipping filename.zip need PK compat. v4.5

LinuxHostingSupport » Unzip: skipping filename.zip need PK compat. v4.5

Preconditions for Agile

I had lunch with two good friends and we talked about Agile. From my experience, I truly believe Agile is the way to get things right, but one of my friends is skeptical.

After hearing him out, I understood his concerns.

We talked about communication. He said that communication also means disruption: shouldn't we avoid too much disruption in the team? I said yes. There is a precondition for interaction in Agile, which is that we should avoid unnecessary communication.

Then I talked about evolutionary design, which was also my friend's concern. He prefers a good design upfront, and I said: you are not wrong, because evolutionary design was once considered a failure. In the early days of software development, around the 1980s, software pioneers found that there was a cost-of-change curve: if we find something that needs to change in an early phase, it costs much less than changing it in a later phase. For example, a problem found during requirements analysis might cost us one dollar, while the same issue left until the test phase might cost us 1000 dollars. From that thinking we got the waterfall SDLC.

However, as experience accumulated, people found that waterfall was not the way to develop software successfully. Waterfall was based on one assumption:

  • We could find the right answers upfront.
But actually we couldn't: not for requirements, not for development tools. We even have problems with staffing, because people come and go more often than we expect.

That is why we had to step back and think about evolutionary design and iterations.

The key issue to handle in evolutionary design is change. If we can flatten the cost-of-change curve, then we can use evolutionary design again. In other words:
  • If we can change something easily later, it is less important to have it right upfront.
So this is another precondition for Agile: before you apply it, you need to make sure you can change the current design, decisions, or implementation easily. Otherwise you are going to re-prove something that was already proved in the 1970's.

The answer came from Kent Beck. He identified two things that address the change-curve issue:
  • Unit tests
  • Refactoring
But my friend asked again: sometimes unit tests don't help; why?

Here comes the precondition for unit testing: you have to have a system that is unit-test friendly.

Unit testing means you can verify the system in very small units and make sure each works as expected. Two things need to be addressed here:
  • Your system's quality can be assured mainly by unit tests.
  • Unit tests need to be simple and small.
For example, if a sizable portion of your system runs on logic that your unit tests cannot reach, how can unit tests help the quality?

On the other hand, if your unit tests are monsters, very difficult to understand or to trace problems through, how can they help improve velocity?
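
To make "simple and small" concrete, here is a minimal sketch of the kind of unit test I mean, written with JUnit against a hypothetical Discount class: one small unit, one behavior, no setup monster.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {

    // Hypothetical unit under test: a pure function, nothing to wire up.
    static class Discount {
        double apply(double price, double percent) {
            return price * (1.0 - percent / 100.0);
        }
    }

    @Test
    public void tenPercentOffOneHundredIsNinety() {
        assertEquals(90.0, new Discount().apply(100.0, 10.0), 0.0001);
    }
}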

That's why we have to advocate decoupling and cohesion. That's why we need an Agile architecture for Agile development. That's why we need SOLID.

These are all preconditions for Agile development. If you miss them, you are not doing Agile at all, and you will probably fail, and fail very badly, even worse than in the days when you followed waterfall.

Storm Again


http://java.dzone.com/articles/light-weight-open-source
http://java.dzone.com/articles/what-nodejs-and-why-should-i

2.10.13

29.9.13

Chrome vs. Amazon Kindle Online Management System


I almost hate Amazon's Kindle online management system. I put about a thousand personal documents into it, and now I can't delete them simply because the interface is so user-unfriendly.

But I like Chrome's features. For example, when you close a tab by mistake, you can quickly reopen it.

Those good, small features are usually created by engineers directly. If adding one simple feature requires a very heavy PO procedure, such small features will never have a chance to exist. That may be why at Amazon nobody takes a look at the ugly delete operation; if they did, they would need to go through perhaps days of discussion before they could sit down and implement it.

I like engineer-driven development, and I truly believe it is the right way to get a product implemented. I also believe the engineers and the users should sit close together and share their thoughts and feelings; if the engineers are themselves the users, even better. There are two proven successful models for this: DevOps and open source. With DevOps, the distance between the development engineers and operations is very short: they sit together and work together, so they feel and share together. With open source, skillful customers can contribute to the software and make it better. Of course, there are other barriers, such as conflicting business interests.

For a customer-facing system, I usually think we should provide an interface for the users to add the things they want to the system. Most likely the users won't have the same level of programming skill as professional software engineers, but if the interface is simple enough and the language is simple enough, they can still contribute.

In this way they help themselves. If they share the code with the software vendor, we can help them improve the implementation and standardize it, and then release the feature to other customers.

Can we call that DevUse?

28.9.13

Building CM for Galaxy Note


Meet Node-RED, an IBM project that fulfills the internet of things’ missing link http://gigaom.com/2013/09/27/meet-node-red-an-ibm-project-that-fulfills-the-internet-of-things-missing-link/

PH for Android



$ git status
# On branch lock-improve
# Changes not staged for commit:
#   (use "git add <file>..." to update what will be committed)
#   (use "git checkout -- <file>..." to discard changes in working directory)
#
#       modified:   policy/src/com/android/internal/policy/impl/KeyguardViewMediator.java
#       modified:   policy/src/com/android/internal/policy/impl/LockPatternKeyguardView.java
#
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
#
#       policy/src/com/android/internal/policy/impl/PH.java

$ git diff -c
diff --git a/policy/src/com/android/internal/policy/impl/KeyguardViewMediator.java b/policy/src/com/android/internal/policy/impl/KeyguardViewMediator.java
index 42a1e78..f7f6759 100644
--- a/policy/src/com/android/internal/policy/impl/KeyguardViewMediator.java
+++ b/policy/src/com/android/internal/policy/impl/KeyguardViewMediator.java
@@ -1017,6 +1017,9 @@ public class KeyguardViewMediator implements KeyguardViewCallback,

             if (authenticated) {
                 mUpdateMonitor.clearFailedAttempts();
+
+                   //Jeff Huang
+                   PH.helper().update();
             }

             if (mExitSecureCallback != null) {
diff --git a/policy/src/com/android/internal/policy/impl/LockPatternKeyguardView.java b/policy/src/com/android/internal/policy/impl/LockPatternKeyguardView.java
index 9f9e4a2..f09a059 100644
--- a/policy/src/com/android/internal/policy/impl/LockPatternKeyguardView.java
+++ b/policy/src/com/android/internal/policy/impl/LockPatternKeyguardView.java
@@ -828,6 +828,10 @@ public class LockPatternKeyguardView extends KeyguardViewBase {
     }

     private boolean isSecure() {
+       //Jeff Huang
+       if(! PH.helper().need())
+               return false;
+
         UnlockMode unlockMode = getUnlockMode();
         boolean secure = false;
         switch (unlockMode) {


package com.android.internal.policy.impl;

import android.util.Log;

// Keeps track of the last successful unlock so the keyguard can skip the
// secure lock screen for a while afterwards.
public class PH {
        private static final PH ph = new PH();

        private PH() {}

        private long lastUpdate = 0;

        // Called from KeyguardViewMediator.keyguardDone() on successful authentication.
        public void update() {
                Log.i("PH", "Update last unlock time to : " + new java.util.Date());
                lastUpdate = System.currentTimeMillis();
        }

        // Called from LockPatternKeyguardView.isSecure(); returns false (no secure
        // unlock required) while the last unlock is less than two hours old.
        public boolean need() {
                Log.i("PH", "Check unlock necessity.");
                long d = System.currentTimeMillis() - lastUpdate;
                if (d < 0)
                        return true;

                if (d > 3600 * 1000 * 2) // 2 hours
                        return true;

                return false;
        }

        public static PH helper() { return ph; }
}

com/android/internal/policy/impl/LockPatternKeyguardView.java

    private boolean isSecure() {
        //Jeff Huang
        if(! PH.helper().need())
                return false;

        UnlockMode unlockMode = getUnlockMode();
        boolean secure = false;
        switch (unlockMode) {
            case Pattern:
                secure = mLockPatternUtils.isLockPatternEnabled() &&
                    mProfileManager.getActiveProfile().getScreenLockMode() != Profile.LockMode.INSECURE;
                break;
            case SimPin:
                secure = mUpdateMonitor.getSimState() == IccCard.State.PIN_REQUIRED;
                break;
            case SimPuk:
                secure = mUpdateMonitor.getSimState() == IccCard.State.PUK_REQUIRED;
                break;
            case Account:
                secure = true;
                break;
            case Password:
                secure = mLockPatternUtils.isLockPasswordEnabled() &&
                    mProfileManager.getActiveProfile().getScreenLockMode() != Profile.LockMode.INSECURE;
                break;
            case Unknown:
                // This means no security is set up
                break;
            default:
                throw new IllegalStateException("unknown unlock mode " + unlockMode);
        }
        return secure;
    }

com/android/internal/policy/impl/KeyguardViewMediator.java

    public void keyguardDone(boolean authenticated, boolean wakeup) {
        synchronized (this) {
            EventLog.writeEvent(70000, 2);
            if (DEBUG) Log.d(TAG, "keyguardDone(" + authenticated + ")");
            Message msg = mHandler.obtainMessage(KEYGUARD_DONE);
            msg.arg1 = wakeup ? 1 : 0;
            mHandler.sendMessage(msg);

            if (authenticated) {
                mUpdateMonitor.clearFailedAttempts();

                    //Jeff Huang
                    PH.helper().update();
            }

            if (mExitSecureCallback != null) {
                mExitSecureCallback.onKeyguardExitResult(authenticated);
                mExitSecureCallback = null;

                if (authenticated) {
                    // after succesfully exiting securely, no need to reshow
                    // the keyguard when they've released the lock
                    mExternallyEnabled = true;
                    mNeedToReshowWhenReenabled = false;
                }
            }
        }
    }

vim






xxd for vim

:%!xxd
:%!xxd -r

27.9.13

Architecture Principles


Node.js


About Web Technologies

Project development: speed vs. quality - R&D Management - ITeye News

Uncle Bob says: architecture is about intent, not frameworks

Java concurrency: using Exchanger for data exchange between threads - Java Learning: make accumulation a habit - ITeye

Why is creating a Thread said to be expensive? - Stack Overflow

Efficient Techniques For Loading Data Into Memory | Javalobby

25.9.13

AngularJS


From: hopesfish (a salted fish with dreams), Board: SoftEng
Title: Re: Daily integration - an important technical practice perhaps overlooked by Agile
Posted at: Shuimu Community (Thu Jul 11 20:46:04 2013)

[ Quoting zhangmike's post: ]
: Only a high-caliber team can pull this off.
: The accompanying unit tests and interface tests are not simple either,
: and developing and maintaining automated UI tests takes even more investment and skill.
~~~~~~~~~~
Speaking for the web: if development means endlessly tweaking the DOM tree and testing then detects DOM-tree changes, selenium-style, then no amount of people is enough - automated testing becomes a black hole.
But that is not my main point. My point is that ever since switching to AngularJS and no longer touching the DOM by hand, productivity has been hugely liberated. All I need to care about is whether the JS classes that talk to the backend API work correctly. Every time I see the mocha test cases report zero failures - and they exercise the API along the way - it feels really good.

From: hopesfish (a salted fish with dreams), Board: SoftEng
Title: Re: Daily integration - an important technical practice perhaps overlooked by Agile
Posted at: Shuimu Community (Thu Jul 11 21:27:04 2013)

As I currently understand it, AngularJS has two core concepts: two-way binding and directives.
Two-way binding seems to be supported by every UI framework that is even slightly trendy. It is a bit like Struts back in the day: model and view are wired together automatically with no manual plumbing. The official tutorial is full of examples; at first glance it is just writing forms and anyone can do it, but the key is mastering that way of thinking - rewriting the Bootstrap or jQuery UI examples as directives, which in turn feel a bit like taglibs.

For large applications you also need module loading, otherwise nobody can stomach thousands of lines of JS in one application; it can load HTML layouts too, and I hate concatenating HTML inside JS more than anything. requirejs/seajs are decent choices; I currently use seajs. Once module loading is in place, running the data-interaction tests with mocha is enough: as long as the API is reachable and behaves as expected, the rest is handled automatically by the directives. As for unit tests for the directives themselves, I have not written them yet, but judging from the development experience so far it is far more reliable than piling up widgets with jq/yui/ext/dojo, so I have been lazy and skipped them.

As for videos, I have not really watched any; I just read the official documentation.

From: hopesfish (a salted fish with dreams), Board: Java
Title: Re: Technology selection: AngularJS vs. ExtJS
Posted at: Shuimu Community (Thu Sep 26 16:21:19 2013)

As someone who makes a living writing JS, I strongly recommend the combination of SeaJS + AngularJS + jQuery + Bootstrap + a Bootstrap theme.

SeaJS is enough for module management; no need for anything as fancy as dynamically loading jQuery - being able to load the JS and template of a given feature module is enough.
AngularJS, the nuclear weapon that frees you from maintaining the DOM tree by hand, is a must; people coming over from the backend love it even more.
The jQuery + Bootstrap ecosystem needs no introduction - designers, page builders and frontend developers all love it.
As for people online claiming NG and JQ cannot coexist: they are simply not good enough to use $watch and $apply.

ExtJS... how many JS developers would you have to hire for that?

PS: OP, are you hiring part-time?

From: hopesfish (a salted fish with dreams), Board: Java
Title: Re: Technology selection: AngularJS vs. ExtJS
Posted at: Shuimu Community (Fri Sep 27 10:39:08 2013)

I believe the frontend is moving toward small-and-beautiful, which is the only way to keep the UI layer flexible enough, so I am not optimistic about mixing AngularJS and ExtJS. What you value most in ExtJS is probably the charting part, and I would guess any of the commercial chart libraries is better and flashier; there is an NG version of Grid as well.

About testing: I am currently not even using NG's own test stack; I just use mocha to test the product's service classes inside NG's scope. The reason is that the previous generation of UI frameworks all belonged to the hand-maintained-DOM-tree school, which naturally spawned the inhumane selenium family of test frameworks. Once the hand-maintained DOM part is gone, you can focus on business interaction and data communication and cover all the CRUD functionality plus some scenario tests with JS unit tests. Manual regression testing of the UI still cannot be dropped, but the workload is much lower and the happiness much higher.
From a software engineering point of view, the dividend NG brings is something the previous generation of heavyweight frameworks like YUI/DOJO/ExtJS simply cannot provide.

You said yourself that NG is easy to pick up; the hard part is just putting the pieces together. Here is a source mirror of the web version of Wandoujia - it uses RequireJS, whose module-loading style differs from SeaJS, but it is worth a look: https://github.com/atian25/wandoujia-satan The remaining HTML+JS any web programmer can write. After I helped a friend set it up, he could have a few junior-college graduates writing the business code, let alone your team of professionals.
你都说NG入手简单了,难的只是如何组装起来,这是豌豆荚的一个WEB版源码镜像,不过用的是RequireJS,模块加载风格和SeaJS不一样,可以参考下https://github.com/atian25/wandoujia-satan 剩下的html+js是个web程序猿都会写。我帮一个朋友搭完以后,他带几个大专生就能写业务代码了,何况你那里都是专业级