Tuesday, November 6, 2007

Code coverage on the web site

I am using Cobertura:
http://diymiddleware.sourceforge.net/coverage/index.html

Further decoupling

After some intensive refactoring, I have finally arrived at an architecture where neither the server nor the client depends on the transport level. They communicate with the network via a Transport object, which maintains remote connections, reads data from the peer, packages it into frame objects, and sends frame objects to the remote peer. In fact, both server and client can use the same transport implementation. The Transport needs a source of connections, though. In the case of the server, connections come from the Acceptor, whose job is to accept incoming connections on the ServerSocket and feed them to the Transport. In the case of the client, the Connector object takes direct calls from the Client in order to initiate connections. The Transport is normally unaware of whether it works for the server or for the client.
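To illustrate the idea, here is a minimal sketch of a transport writer loop; the class, field, and queue names are my own stand-ins, not the actual project code:

import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;

// Sketch: a transport writer that drains an outbound frame queue and
// writes to the socket. It does not care whether the socket came from
// an Acceptor (server side) or a Connector (client side).
public class TransportWriter implements Runnable {
    private final Socket socket;
    private final BlockingQueue<byte[]> outboundFrames;

    public TransportWriter(Socket socket, BlockingQueue<byte[]> outboundFrames) {
        this.socket = socket;
        this.outboundFrames = outboundFrames;
    }

    public void run() {
        try {
            OutputStream out = socket.getOutputStream();
            while (!Thread.currentThread().isInterrupted()) {
                byte[] frame = outboundFrames.take(); // blocks until a frame is queued
                out.write(frame);
                out.flush();
            }
        } catch (Exception e) {
            // the real design would report this through a command queue
        }
    }
}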
Another interesting feature of this design is the communication medium between almost all components. They do not use callbacks, which prevents them from interfering with each other's threading models. Instead, they communicate through queues. These queues can be created and configured/customized in the Spring configuration file. At present, they are all unbounded LinkedBlockingQueues, but any other implementation/configuration is easily possible.
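As a sketch of what that configuration might look like (the bean name is illustrative, not the actual configuration), a queue can be declared directly in the Spring file and swapped or bounded without touching any code:

<!-- unbounded by default; add a constructor-arg with a capacity to bound it -->
<bean id="distributionQueue" class="java.util.concurrent.LinkedBlockingQueue"/>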

Wednesday, October 17, 2007

Unit test fixed

In order to fix my first unit test, I had to introduce acknowledgments for subscription requests. The server simply replies to the "REC" message with an identical one. Having received this message back, the client can be sure that the subscription is in place and messages will be delivered (this is exactly what the unit test is testing).
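On the server side, the acknowledgment is conceptually tiny. A hedged sketch (the message representation and queue name are my assumptions, not the project's actual classes):

import java.util.concurrent.BlockingQueue;

// Sketch: when a subscription request arrives, register it and then
// echo the identical message back as the acknowledgment.
public class SubscriptionHandler {
    public void handle(String message, BlockingQueue<String> sessionOutput)
            throws InterruptedException {
        if (message.startsWith("REC")) {
            // ... register the subscription filter for this session ...
            sessionOutput.put(message); // the identical reply serves as the ack
        }
    }
}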
However, the client code is getting messy because of its callback architecture. It will be re-engineered in the same way as the server, by decoupling the actual client from the connector. Theoretically, the connectors could even be the same for both server and client. Only the session feed would differ: on the server, sessions are created upon accepting client connections, whereas on the client they are user-initiated.

Monday, October 15, 2007

First unit test for the server

I have written a JUnit test for the server distribution functionality (DistributionTest). It was quite easy after the decoupling work done before.
The first test simply connects a session to the server.
The second test connects two sessions and subscribes both of them to all incoming messages (using the regex pattern ".*"). Then it sends one message from each session and expects it to be received by both. This test shows (it actually fails for now) that it is necessary to have a way to acknowledge the completion of a subscription (the "REC" command). The test needs it to know when to proceed with sending messages, as would many applications.
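In outline, the second test does something like the following (JUnit 4 style; the TestSession helper is a hypothetical stand-in for the test's real plumbing):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DistributionSketchTest {

    @Test
    public void twoSubscribersBothReceiveEachMessage() throws Exception {
        TestSession a = TestSession.connect("localhost", 9999); // hypothetical helper
        TestSession b = TestSession.connect("localhost", 9999);

        a.subscribe(".*"); // subscribe to every message
        b.subscribe(".*");
        // without a "REC" acknowledgment there is no safe point
        // at which sending may begin:
        a.awaitSubscriptionAck();
        b.awaitSubscriptionAck();

        a.send("hello from a");
        b.send("hello from b");

        // each subscriber should see the other's message
        // (the real test expects both sessions to receive both messages)
        assertEquals("hello from a", b.receive());
        assertEquals("hello from b", a.receive());
    }
}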

Thursday, October 11, 2007

Decouple connector from Server

Before we move on to upgrading the messaging server to NIO, it makes sense to decouple the part of the functionality that deals with I/O into a separate component that can be easily swapped. Therefore, I have introduced the Connector. Communication between the connector and the server is done via two queues: one queue is the input queue for the server and the output queue for the connector, and the other is the output queue for the server and the input queue for the connector. These queues hold SessionCommand objects. At the moment, two types of SessionCommand are supported, each flowing in both directions (see the sketch after this list):
1) QUEUE (connector->server). When a new client connection is made, the connector creates a queue that will pass data frames from the server via the connector to the client. This queue is specified in the QUEUE command that the connector sends to the server. The command also includes a unique session ID, generated by the connector.
2) QUEUE (server->connector). Once the server has received the QUEUE command from the connector, it creates another queue (or re-uses a previously created one) that will pass data frames from the client via the connector to the server. This queue is specified in the QUEUE command that the server sends back to the connector.
3) CLOSE (connector->server). If the client closes its connection, or an I/O exception happens while reading from or writing to the client connection, the connector sends a CLOSE command to the server, informing it that the session has been closed. If the cause was an exception, it is passed along in the command.
4) CLOSE (server->connector). The server may decide to close a session, in which case it asks the connector to close the connection by sending a CLOSE command. At the moment, this use case is not present in the code.
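A sketch of what such a command object might look like (the field names are my guesses; the real SessionCommand class may differ, and Object stands in here for the project's Frame class):

import java.util.concurrent.BlockingQueue;

// Sketch of the command object exchanged between server and connector.
public class SessionCommand {
    public enum Type { QUEUE, CLOSE }

    private final Type type;
    private final String sessionId;              // unique ID generated by the connector
    private final BlockingQueue<Object> frames;  // frame queue, present on QUEUE commands
    private final Exception cause;               // present on CLOSE caused by an I/O error

    public SessionCommand(Type type, String sessionId,
                          BlockingQueue<Object> frames, Exception cause) {
        this.type = type;
        this.sessionId = sessionId;
        this.frames = frames;
        this.cause = cause;
    }

    public Type getType() { return type; }
    public String getSessionId() { return sessionId; }
    public BlockingQueue<Object> getFrames() { return frames; }
    public Exception getCause() { return cause; }
}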

Decoupling means that the server now works only with SessionCommands and data frames (class Frame). There are no callbacks between the server and the connector either, so there are no thread dependencies.

The QueueSizeMonitor is now injected into both the server and the connector. They update counters for the distribution queue and the output queue whenever they put values into, or take values from, them.
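Such a monitor can be as simple as a map of atomic counters; the following is only a sketch, since the post does not show the real interface:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: callers increment on put and decrement on take, so the
// current size of each named queue can be read at any time.
public class QueueSizeMonitor {
    private final ConcurrentMap<String, AtomicInteger> sizes =
            new ConcurrentHashMap<String, AtomicInteger>();

    private AtomicInteger counter(String queueName) {
        sizes.putIfAbsent(queueName, new AtomicInteger());
        return sizes.get(queueName);
    }

    public void onPut(String queueName)  { counter(queueName).incrementAndGet(); }
    public void onTake(String queueName) { counter(queueName).decrementAndGet(); }

    public int size(String queueName) {
        AtomicInteger c = sizes.get(queueName);
        return c == null ? 0 : c.get();
    }
}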

Thursday, October 4, 2007

NIO based client

I have refactored the client program (for now still named Client1) used in the tests. It is now based on non-blocking I/O and uses its features such as buffers, socket channels, selectors, and charset encoders/decoders.
The client program has a single thread doing I/O and three groups of other threads (an encoder sketch follows the list):
1) Encoder threads. Since all messages are strings, they have to be converted to byte arrays before being sent to the sockets. Encoder threads take strings as input and produce byte buffers ready to inject into the sockets (this injection is done by the I/O thread). Encoder threads are implemented as a thread pool executor with a specified number of threads.
2) Decoder threads. Once data (in the form of byte buffers) has been read from the sockets by the I/O thread, it needs to be converted to strings so that the application can process it. This work is done by the decoder threads.
3) Callback threads. When an application subscribes to messages (using a filter), it passes a callback to the client program. This callback gets called by the callback threads (implemented as a thread pool). Callbacks are also used to propagate session exceptions within the client.
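A minimal version of the encoder stage could look like this; the class and queue names are my sketch, with the division of labour as described above:

import java.nio.ByteBuffer;
import java.nio.charset.Charset;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: a pool of encoder threads converts outgoing strings into
// ByteBuffers that the single I/O thread then writes to the sockets.
public class EncoderStage {
    private final Charset charset = Charset.forName("UTF-8");

    public void start(final BlockingQueue<String> outgoingStrings,
                      final BlockingQueue<ByteBuffer> readyBuffers,
                      int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            String message = outgoingStrings.take();
                            // Charset.encode creates a fresh encoder per call,
                            // which keeps this safe across the pool's threads
                            readyBuffers.put(charset.encode(message));
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }
}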

Sunday, September 30, 2007

Spring container

In order to make the server more modular, I have put it into a Spring container. It only contains one bean at the moment, but I am planning to make the queueSizeMonitor and the connector (based on blocking I/O) separate pluggable modules. This way, it would be possible (see the configuration sketch after this list):
1) To easily turn off queue size monitoring if necessary (by plugging in a dummy monitor, which does not do anything)
2) To create a non-blocking I/O based connector and compare the gain in performance and/or scalability
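The pluggability would then come down to swapping bean definitions in the Spring configuration file, roughly like this (the bean and class names are illustrative, not the actual ones):

<beans>
    <!-- swap in a NIO-based implementation here later -->
    <bean id="connector" class="example.BlockingIoConnector"/>

    <!-- swap in a do-nothing monitor to turn monitoring off -->
    <bean id="queueSizeMonitor" class="example.CountingQueueSizeMonitor"/>

    <bean id="server" class="example.Server">
        <property name="queueSizeMonitor" ref="queueSizeMonitor"/>
    </bean>
</beans>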