
Deleting data from an index during DIH incremental (delta) indexing is treated only marginally on the Solr wiki, as something that works much like updating records. In a previous article I took the same shortcut, all the more so because the example there indexed Wikipedia data and did not need to delete anything.

Having sample data about albums and performers at hand, I decided to show my way of dealing with such cases. For simplicity and clarity, I assume that after the first import the data can only shrink.

Test data

My test data live in a PostgreSQL table defined as follows:

Table "public.albums"
Column |  Type   |                      Modifiers
--------+---------+-----------------------------------------------------
id     | integer | not null default nextval('albums_id_seq'::regclass)
name   | text    | not null
author | text    | not null
Indexes:
"albums_pk" PRIMARY KEY, btree (id)

The table has 825,661 records.

Test installation

For testing purposes I used a Solr instance with the following characteristics:

Definition in schema.xml:

<fields>
 <field name="id" type="string" indexed="true" stored="true" required="true" />
 <field name="album" type="text" indexed="true" stored="true" multiValued="true"/>
 <field name="author" type="text" indexed="true" stored="true" multiValued="true"/>
</fields>
<uniqueKey>id</uniqueKey>
<defaultSearchField>album</defaultSearchField>

Definition of the DIH request handler in solrconfig.xml:
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
 <lst name="defaults">
  <str name="config">db-data-config.xml</str>
 </lst>
</requestHandler>

And the DIH configuration file, db-data-config.xml:
<dataConfig>
 <dataSource driver="org.postgresql.Driver" url="jdbc:postgresql://localhost:5432/shardtest" user="solr" password="secret" />
 <document>
  <entity name="album" query="SELECT * from albums">
   <field column="id" name="id" />
   <field column="name" name="album" />
   <field column="author" name="author" />
  </entity>
 </document>
</dataConfig>
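With the request handler and the configuration file above in place, the initial full import mentioned at the beginning can be triggered the standard way, by calling (assuming Solr runs under the default /solr context):
/solr/dataimport?command=full-import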


Deleting Data

A look at the table shows that when we remove a record it is deleted without leaving a trace, so the only way to update our index would be to compare the document identifiers in the index with the identifiers in the database and delete those that no longer exist in the database. Slow and cumbersome. Another way is to add a deleted_at column: instead of physically deleting a record, we only fill in this column, and DIH can then retrieve all records whose deletion date is later than the last crawl. The disadvantage of this solution is that the application may need to be modified to take that information into account.
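Just for comparison, a minimal sketch of that deleted_at variant could look like the SQL below; the column and the query are purely illustrative and are not part of the setup used in the rest of this article:

-- Illustrative only: the deleted_at-column variant described above.
ALTER TABLE albums ADD COLUMN deleted_at timestamp without time zone;

-- The application marks a record instead of deleting it:
UPDATE albums SET deleted_at = now() WHERE id = 35;

-- and DIH would pick up the marked rows with something like:
-- SELECT id FROM albums WHERE deleted_at > '${dataimporter.last_index_time}'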

I use a different solution, one that is transparent to applications. Let's create a new table:

CREATE TABLE deletes
(
  id serial NOT NULL,
  deleted_id bigint,
  deleted_at timestamp without time zone NOT NULL,
  CONSTRAINT deletes_pk PRIMARY KEY (id)
);

This table will automagically collect the identifiers of records removed from the albums table, together with information about when they were removed.

Now we add the function:

CREATE OR REPLACE FUNCTION insert_after_delete()
  RETURNS trigger AS
$BODY$
BEGIN
  IF tg_op = 'DELETE' THEN
    INSERT INTO deletes(deleted_id, deleted_at)
      VALUES (old.id, now());
    RETURN old;
  END IF;
END
$BODY$
LANGUAGE plpgsql VOLATILE;

and a trigger:

CREATE TRIGGER deleted_trg
  BEFORE DELETE
  ON albums
  FOR EACH ROW
  EXECUTE PROCEDURE insert_after_delete();

How it works

Each entry deleted from the albums table should result in a new entry in the deletes table. Let's check it out. Remove a few records:

=> DELETE FROM albums where id < 37;
DELETE 2
=> SELECT * from deletes;
 id | deleted_id |         deleted_at
----+------------+----------------------------
 26 |         35 | 2010-12-23 13:53:18.034612
 27 |         36 | 2010-12-23 13:53:18.034612
(2 rows)

So the database part works.

We extend the DIH configuration file so that the entity is defined as follows:

<entity name="album" query="SELECT * from albums"
  deletedPkQuery="SELECT deleted_id as id FROM deletes WHERE deleted_at > '${dataimporter.last_index_time}'">

During an incremental (delta) import, DIH uses the deletedPkQuery attribute to fetch the identifiers of the documents that should be removed from the index.
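Just to illustrate the mechanism, after variable substitution the query actually sent to the database looks roughly like this (the timestamp comes from the last_index_time value DIH keeps in its dataimport.properties file; the one below is made up):

SELECT deleted_id as id FROM deletes WHERE deleted_at > '2010-12-23 13:50:00'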

An astute reader will probably start to wonder whether we really need the column with the deletion date: we could simply delete from the index all records found in the deletes table and then clear that table. In theory this is true, but keeping the dates means that if something goes wrong with the Solr indexing server we can easily replace it with another one; how far out of sync it is with the database does not matter much, because the next incremental imports will bring it back in sync. If we cleared the contents of the deletes table, that possibility would be gone.
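If the deletes table ever grows uncomfortably large, entries old enough that every working Solr instance has certainly processed them can be pruned; a hypothetical housekeeping statement (the 30-day window is an arbitrary choice, not part of the setup above) could look like this:

DELETE FROM deletes WHERE deleted_at < now() - interval '30 days';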

We can now run the incremental import by calling the following address: /solr/dataimport?command=delta-import
In the logs you should see a line similar to this:
INFO: {delete=[35, 36],optimize=} 0 2
which means that DIH properly removed from the index the documents that had previously been removed from the database.
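As an additional sanity check, we can ask Solr for one of the removed identifiers and expect zero results, for example:
/solr/select?q=id:35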










