On the Solr wiki, deleting data from an index during DIH incremental indexing is treated only in passing, as something that works similarly to updating records. In a previous article I took the same shortcut, all the more so because the Wikipedia indexing example I gave did not need to delete any data.
Having a sample data set of albums and performers at hand, I decided to show my way of dealing with such cases. For simplicity and clarity, I assume that after the first import the data can only decrease.
Test data
My test data are located in the PostgreSQL database table defined as follows:
Table "public.albums"
Column | Type | Modifiers
--------+---------+-----------------------------------------------------
id | integer | not null default nextval('albums_id_seq'::regclass)
name | text | not null
author | text | not null
Indexes:
"albums_pk" PRIMARY KEY, btree (id)
The table has 825,661 records.
Test installation
For testing purposes I used a Solr instance with the following characteristics:
Definition in schema.xml:
<fields>
<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="album" type="text" indexed="true" stored="true" multiValued="true"/>
<field name="author" type="text" indexed="true" stored="true" multiValued="true"/>
</fields>
<uniqueKey>id</uniqueKey>
<defaultSearchField>album</defaultSearchField>
Definition of DIH in solrconfig.xml:
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
<lst name="defaults">
<str name="config">db-data-config.xml</str>
</lst>
</requestHandler>
And the DIH configuration file, db-data-config.xml:
<dataConfig>
<dataSource driver="org.postgresql.Driver" url="jdbc:postgresql://localhost:5432/shardtest" user="solr" password="secret" />
<document>
<entity name="album" query="SELECT * from albums">
<field column="id" name="id" />
<field column="name" name="album" />
<field column="author" name="author" />
</entity>
</document>
</dataConfig>
Deleting Data
Looking at the table shows that when we remove a record, it is deleted without leaving a trace, so the only way to update the index would be to compare the document identifiers in the index with the identifiers in the database and delete those that no longer exist in the database. Slow and cumbersome. Another way is to add a deleted_at column: instead of physically deleting a record, the application only fills in this column. DIH can then retrieve all records whose deletion date is later than the last crawl. The disadvantage of this solution is that the application may have to be modified to take this information into account.
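For illustration, here is a minimal sketch of that deleted_at variant (the column and the queries below are hypothetical, not part of my setup):

-- Hypothetical soft-delete variant (not used in this article):
-- mark rows as deleted instead of removing them.
ALTER TABLE albums ADD COLUMN deleted_at timestamp without time zone;

-- The application would have to run this instead of DELETE:
UPDATE albums SET deleted_at = now() WHERE id = 36;

-- DIH could then find deleted documents with a deletedPkQuery like:
-- SELECT id FROM albums
--  WHERE deleted_at > '${dataimporter.last_index_time}'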
I use a different solution, one that is transparent to the application. Let's create a new table:
CREATE TABLE deletes
(
  id serial NOT NULL,
  deleted_id integer NOT NULL,
  deleted_at timestamp without time zone NOT NULL,
  CONSTRAINT deletes_pk PRIMARY KEY (id)
);
This table will automagically collect the identifiers of the items removed from the albums table, along with the time of their removal.
Now we add the function:
CREATE OR REPLACE FUNCTION insert_after_delete()
  RETURNS trigger AS $$
BEGIN
  IF tg_op = 'DELETE' THEN
    INSERT INTO deletes(deleted_id, deleted_at)
      VALUES (old.id, now());
    RETURN old;
  END IF;
END; $$
LANGUAGE plpgsql VOLATILE;
and a trigger:
CREATE TRIGGER deleted_trg
  AFTER DELETE
  ON albums
  FOR EACH ROW
  EXECUTE PROCEDURE insert_after_delete();
How it works
Every entry deleted from the albums table should now result in an entry being added to the deletes table. Let's check it out. Remove a few records:
=> DELETE FROM albums where id < 37;
DELETE 2
=> SELECT * from deletes;
 id | deleted_id |         deleted_at
----+------------+----------------------------
 26 |         35 | 2010-12-23 13:53:18.034612
 27 |         36 | 2010-12-23 13:53:18.034612
So the database part works.
We extend the DIH configuration file so that the entity is defined as follows:
<entity name="album" query="SELECT * from albums"
        deletedPkQuery="SELECT deleted_id as id FROM deletes WHERE deleted_at > '${dataimporter.last_index_time}'">
During an incremental import, DIH uses the deletedPkQuery attribute to fetch the identifiers of the documents that should be removed from the index.
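Under the hood, DIH stores the time of the last import in the dataimport.properties file in the conf directory and substitutes it for the ${dataimporter.last_index_time} variable, so the statement actually sent to the database is equivalent to something like the following (the timestamp is just an example):

-- What DIH effectively executes during a delta-import;
-- the timestamp comes from the previous import run.
SELECT deleted_id as id
  FROM deletes
 WHERE deleted_at > '2010-12-23 13:00:00';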
A clever reader will probably begin to wonder whether we really need the column with the deletion date. We could simply delete from the index all records found in the deletes table and then clear the contents of that table. Theoretically this is true, but keeping the dates means that in the event of a problem with the Solr indexing server we can easily replace it with another one – how far it has fallen out of sync with the database does not matter much, because subsequent incremental imports will bring it back in sync. If we cleared the contents of the deletes table, that possibility would not exist.
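If the deletes table itself grows too large over time, nothing stops us from pruning old entries once we are sure that every Solr instance has imported more recently. A hypothetical housekeeping query, with the interval chosen arbitrarily:

-- Hypothetical housekeeping: remove delete markers older than 30 days,
-- assuming every Solr instance has run a delta-import since then.
DELETE FROM deletes WHERE deleted_at < now() - interval '30 days';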
We can now perform the incremental import by calling the following address: /solr/dataimport?command=delta-import
In the logs you should see a line similar to this:
INFO: {delete=[35, 36],optimize=} 0 2
This means that DIH properly removed from the index the documents that had previously been deleted from the database.