26.8.13

Understanding the True REST Architectural Style

Does Vertica delete records from the tuple_mover_operations table?

I can see the operation's record while it is running:

=> select operation_start_timestamp, operation_name, table_name,ros_count, total_ros_used_bytes,plan_type,is_executing from tuple_mover_operations where operation_start_timestamp >'2013-08-26 11:00:00' ;
   operation_start_timestamp   |   operation_name   |       table_name        | ros_count | total_ros_used_bytes |     plan_type      | is_executing
-------------------------------+--------------------+-------------------------+-----------+----------------------+--------------------+--------------
 2013-08-26 11:45:17.221133-04 | Moveout            | test_load               |       805 |            790757376 | Moveout            | t

But after it finishes running, I can't see it anymore. Why?

"Castle in the Sky" Helps Twitter Break Its Record; the New Architecture Deserves Much of the Credit

25.8.13

Vertica System Tables and Parameters



select parameter_name, current_value, default_value, change_requires_restart from configuration_parameters where parameter_name in ('ActivePartitionCount', 'MergeOutInterval', 'MoveOutInterval', 'MoveOutMaxAgeTime', 'MoveOutSizePct');
    parameter_name    | current_value | default_value | change_requires_restart
----------------------+---------------+---------------+-------------------------
 ActivePartitionCount | 1             | 1             | f
 MergeOutInterval     | 600           | 600           | f
 MoveOutInterval      | 300           | 300           | f
 MoveOutMaxAgeTime    | 1800          | 1800          | f
 MoveOutSizePct       | 0             | 0             | f
(5 rows)


Vertica - ROS Container

Vertica Online Doc

24.8.13

Path.resolve(Path) has similar performance to new File(x, y)

So as far as performance is concerned, it doesn't matter which one you choose.
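If you want to reproduce the comparison, here is a minimal sketch (the path, child names, and loop count are illustrative, not my original benchmark):

import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ResolveVsFile {
    public static void main(String[] args) {
        Path base = Paths.get("/tmp/data");
        File baseDir = new File("/tmp/data");
        int n = 1000 * 1000;
        long sink = 0; // consume the results so the loops aren't optimized away

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink += base.resolve("child-" + (i % 100)).getNameCount();
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            sink += new File(baseDir, "child-" + (i % 100)).getPath().length();
        }
        long t2 = System.nanoTime();

        System.out.println("Path.resolve  : " + (t1 - t0) + " ns");
        System.out.println("new File(x, y): " + (t2 - t1) + " ns");
        System.out.println(sink);
    }
}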

Using File.renameTo() instead of Files.move(xxx)


Not sure why the JCP designed Files.move() this way. Throwing an IOException for a missing source file doesn't look like a good idea.

File.renameTo() looks much simpler.
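The contrast in a nutshell (a sketch; the paths are made up):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RenameVsMove {
    public static void main(String[] args) {
        // File.renameTo() just reports failure with a boolean; no exception
        // handling needed when the source file is missing.
        File src = new File("/tmp/missing.txt");
        boolean ok = src.renameTo(new File("/tmp/target.txt"));
        System.out.println("renameTo succeeded: " + ok);

        // Files.move() forces a try/catch for the same situation.
        try {
            Files.move(Paths.get("/tmp/missing.txt"), Paths.get("/tmp/target.txt"));
        } catch (IOException e) { // NoSuchFileException when the source is missing
            System.out.println("Files.move failed: " + e);
        }
    }
}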

Use Path.toFile().createNewFile() instead of Files.createFile(Path p) for performance-sensitive systems.

According to my own test, calling each method one million times on an already-existing file:

Files.createFile(Path p), throwing IOException each time: 24,632,879,688 ns
Path.toFile().createNewFile():                             7,836,679,463 ns

The former takes roughly three times as long.
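A minimal sketch of that test (the path and loop count are illustrative, not my original harness):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CreateFileBench {
    public static void main(String[] args) throws IOException {
        Path p = Paths.get("/tmp/existing.txt");
        if (!Files.exists(p)) {
            Files.createFile(p); // make sure the target already exists
        }
        int n = 1000 * 1000;

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            try {
                Files.createFile(p); // throws FileAlreadyExistsException every time
            } catch (IOException expected) {
            }
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            p.toFile().createNewFile(); // just returns false, no exception
        }
        long t2 = System.nanoTime();

        System.out.println("Files.createFile(p)    : " + (t1 - t0) + " ns");
        System.out.println("toFile().createNewFile(): " + (t2 - t1) + " ns");
    }
}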

What's New in JMS 2.0 - 知之 - ITeye

14.8.13

Java 7: Fork and Join
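For reference, a minimal sketch of the Fork/Join API itself (a recursive array sum; my own illustration, not from the linked article):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0; // small enough: sum sequentially
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute the right half here, then join
    }

    public static void main(String[] args) {
        long[] data = new long[1000 * 1000];
        java.util.Arrays.fill(data, 1L);
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 1000000
    }
}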


Optimization Concerns - Funny Comments


If there was one bit of wisdom on "performance" I learned in that time, it's that your bottleneck is likely in THE LAST PLACE, the LEAST obvious place you would expect. Everything is conjecture till you measure, measure, measure. You're better off with a clean design; then attack the slow parts.

5.8.13

Performance of Spring Integration

Very simple test: one Gateway, two service activators.

<int:channel id="channel-in"/>
<int:gateway id="helloGateway" service-interface="simple.demo.springintegration.demo.chapter5.SimpleGateway"/>
<int:service-activator input-channel="channel-in" output-channel="channel2" expression="'result = ' + payload"/>
<int:service-activator input-channel="channel2" expression="'r2 :=' + payload"/>

Time to process 1 million messages: 8,919 milliseconds, on my i7 PC, run from Eclipse. That is roughly 100K messages per second.

But if I send directly to channel-in and receive from channel2, bypassing the gateway, the processing time drops to 3,821 milliseconds.

It seems that the gateway is kind of heavy.

@Test
public void test2() throws Exception {
    // One count per message we expect to arrive on channel2.
    final CountDownLatch latch = new CountDownLatch(1000 * 1000);

    // channel2 and inputChannel are injected from the XML config above.
    channel2.subscribe(new MessageHandler() {
        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            latch.countDown();
        }
    });

    Message<String> msg = MessageBuilder.withPayload("hello world").build();
    long begin = System.currentTimeMillis();
    for (int k = 0; k < 1000 * 1000; ++k) {
        inputChannel.send(msg);
    }
    latch.await(); // block until every message has been handled
    long end = System.currentTimeMillis();

    System.out.println("end - begin " + (end - begin));
}

---------------------
Update on 8/5/2013

I replaced the default UUID generator with a custom one; times below are in milliseconds:

                      | Default UUID Generator | Custom UUID Generator
----------------------+------------------------+-----------------------
 Using Gateway        |                  8,919 |                 6,519
 Using Direct Channel |                  3,821 |                 2,545
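The custom generator itself isn't shown; the gist is that UUID.randomUUID() goes through SecureRandom on every call, while a counter-based generator avoids that cost. A rough sketch of the idea (my own illustration, not the generator used above):

import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class FastUuidGenerator {
    private final AtomicLong counter = new AtomicLong();
    private final long hi = UUID.randomUUID().getMostSignificantBits(); // random once, per instance

    // Cheap IDs, unique within this instance: no SecureRandom call per message.
    public UUID next() {
        return new UUID(hi, counter.incrementAndGet());
    }

    public static void main(String[] args) {
        FastUuidGenerator gen = new FastUuidGenerator();
        int n = 1000 * 1000;

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            UUID.randomUUID(); // SecureRandom-backed
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            gen.next(); // counter-backed
        }
        long t2 = System.nanoTime();

        System.out.println("UUID.randomUUID(): " + (t1 - t0) + " ns");
        System.out.println("counter-based:     " + (t2 - t1) + " ns");
    }
}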