dd.db
Class HashStorageManager
java.lang.Object
|
+--bamboo.util.StandardStage
|
+--dd.db.HashStorageManager
- All Implemented Interfaces:
- EventHandlerIF, SingleThreadedEventHandlerIF
- public class HashStorageManager
- extends StandardStage
- implements SingleThreadedEventHandlerIF
An inline, in-memory database backed by Java hashtables.
Data is stored on disk in a table with the following fields:

  Bytes | Data
  ------+-----------------------------------------
   0-7  | put time since the epoch (microseconds)
   8-11 | ttl interval after put time (seconds)
  12-31 | guid
  32-51 | data hash
   52   | whether this is a put (1) or remove (0)
  53-   | data
The data hash is needed to guarantee a consistent scan order and for
removes. The primary key is bytes 0-52, and the secondary key is the guid
concatenated with the data hash.
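The record layout above can be sketched with java.nio.ByteBuffer. The class and method names here (RecordLayout, pack, and so on) are illustrative only, not part of HashStorageManager:

```java
import java.nio.ByteBuffer;

// Illustrative packing of the on-disk record described above.
// Offsets follow the table: put time (bytes 0-7), ttl (8-11),
// 20-byte guid (12-31), 20-byte data hash (32-51),
// put/remove flag (52), then the data itself (53-).
public class RecordLayout {
    public static byte[] pack(long putTimeUs, int ttlSec, byte[] guid,
                              byte[] dataHash, boolean isPut, byte[] data) {
        ByteBuffer buf = ByteBuffer.allocate(53 + data.length);
        buf.putLong(putTimeUs);          // bytes 0-7
        buf.putInt(ttlSec);              // bytes 8-11
        buf.put(guid, 0, 20);            // bytes 12-31
        buf.put(dataHash, 0, 20);        // bytes 32-51
        buf.put((byte) (isPut ? 1 : 0)); // byte 52
        buf.put(data);                   // bytes 53-
        return buf.array();
    }

    public static boolean isPut(byte[] record) {
        return record[52] == 1;
    }

    public static long putTimeUs(byte[] record) {
        return ByteBuffer.wrap(record, 0, 8).getLong();
    }
}
```

Note that the primary key (bytes 0-52) is simply the fixed-width prefix of this record, which is what makes a byte-ordered scan consistent.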
- Version:
- $Id: HashStorageManager.java,v 1.6 2004/05/28 17:45:39 hweather Exp $
- Author:
- Hakim Weatherspoon
Methods inherited from class bamboo.util.StandardStage |
BUG, BUG, BUG, config_get_boolean, config_get_double, config_get_int, config_get_string, destroy, dispatch, enqueue, handleEvents, lookup_stage, now_ms, timer_ms |
Methods inherited from class java.lang.Object |
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
zero_guid
protected static BigInteger zero_guid
MIN_HASH
public static final byte[] MIN_HASH
MAX_HASH
public static final byte[] MAX_HASH
MIN_KEY
public static final StorageManager.Key MIN_KEY
HashStorageManager
public HashStorageManager()
init
public void init(ConfigDataIF config)
throws Exception
- Specified by:
init
in interface EventHandlerIF
- Overrides:
init
in class StandardStage
- Throws:
- Exception
handleEvent
public void handleEvent(QueueElementIF item)
- Specified by:
handleEvent
in interface EventHandlerIF
handle_put_req
protected void handle_put_req(StorageManager.PutReq req)
handle_put_req does exactly that: there should be at most one put and one
remove stored in the database for this (guid, data_hash) pair. We check for
them first, then put the new datum in, and then return whichever existing
datum it overrides, if any.
- Parameters:
- req - a StorageManager.PutReq.
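A minimal sketch of that invariant, assuming the secondary key (guid concatenated with data hash) is represented as a string and each kind of record lives in its own HashMap. PutSemantics and store are hypothetical names, not the actual Bamboo API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative invariant: at most one put and one remove per
// (guid, data_hash) pair. Storing a new datum replaces any existing
// entry of the same kind and returns the datum it overrides, if any.
public class PutSemantics {
    private final Map<String, byte[]> puts = new HashMap<>();
    private final Map<String, byte[]> removes = new HashMap<>();

    public byte[] store(String guidPlusHash, boolean isPut, byte[] datum) {
        Map<String, byte[]> table = isPut ? puts : removes;
        // Map.put returns the previous mapping, i.e. the overridden datum.
        return table.put(guidPlusHash, datum);
    }
}
```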
key_expired
protected boolean key_expired(StorageManager.Key k)
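Given the record layout, key_expired presumably compares the put time plus the ttl against the current time. A hedged sketch under that assumption (ExpiryCheck and keyExpired are illustrative names, and the unit conversions follow the table above: put time in microseconds, ttl in seconds, now in milliseconds as returned by now_ms):

```java
// Illustrative expiry check: a key is expired once
// put_time (us) + ttl (s), converted to milliseconds, is in the past.
public class ExpiryCheck {
    public static boolean keyExpired(long putTimeUs, int ttlSec, long nowMs) {
        long expiryMs = putTimeUs / 1000 + ttlSec * 1000L;
        return expiryMs < nowMs;
    }
}
```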
handle_get_by_key_req
protected void handle_get_by_key_req(StorageManager.GetByKeyReq req)
handle_iterate_by_guid_req
protected void handle_iterate_by_guid_req(StorageManager.IterateByGuidReq req)
handle_iterate_by_guid_cont
protected void handle_iterate_by_guid_cont(StorageManager.IterateByGuidCont req)
do_iterate_by_guid
protected void do_iterate_by_guid(HashStorageManager.IBGCont cont,
SinkIF comp_q,
Object user_data)
handle_get_by_guid_req
protected void handle_get_by_guid_req(StorageManager.GetByGuidReq req)
handle_get_by_guid_cont
protected void handle_get_by_guid_cont(StorageManager.GetByGuidCont req)
do_get_by_guid
protected void do_get_by_guid(HashStorageManager.GBGCont cont,
SinkIF comp_q,
Object user_data)
handle_get_by_time_req
protected void handle_get_by_time_req(StorageManager.GetByTimeReq req)
handle_get_by_time_cont
protected void handle_get_by_time_cont(StorageManager.GetByTimeCont req)
do_get_by_time
protected void do_get_by_time(HashStorageManager.GBTCont cont,
SinkIF comp_q,
Object user_data)
handle_discard_req
protected void handle_discard_req(StorageManager.DiscardReq req)
application_enqueue
protected void application_enqueue(SinkIF sink,
QueueElementIF item)
application_enqueue calls
Classifier.dispatch_later(seda.sandStorm.api.QueueElementIF, long), even in
simulation, to prevent a StackOverflowError when the DataManager is creating
a MerkleTree.
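The reason deferring dispatch helps can be sketched independently of SandStorm: queueing follow-up events and draining them in a loop bounds stack depth, where direct recursive dispatch would grow the stack with each event. DeferredDispatch below is a hypothetical illustration, not the Classifier API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative trampoline: events are queued and drained iteratively,
// so a long chain of handler -> enqueue -> handler never deepens the stack.
public class DeferredDispatch {
    private final Deque<Runnable> queue = new ArrayDeque<>();
    private boolean draining = false;

    public void dispatchLater(Runnable event) {
        queue.add(event);
        if (draining) return;      // already inside the drain loop below
        draining = true;
        while (!queue.isEmpty()) {
            queue.poll().run();    // a handler may enqueue further events
        }
        draining = false;
    }
}
```

With direct recursive dispatch, a chain of 100,000 events would overflow the stack; with the queue it runs in constant stack depth.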
main
public static void main(String[] args)