SMILA/Documentation/Crawler
Overview
A crawler gathers information about resources: both their content and metadata of interest such as size or MIME type. SMILA currently comes with three types of crawlers, each suited to a different kind of data source, namely the Web crawler, the JDBC Database crawler, and the File System crawler, which gather information from the internet, from databases, and from files on a hard disk, respectively. Furthermore, the Connectivity Framework provides an API that allows developers to create their own crawlers.
API
A crawler has to implement two interfaces: Crawler and CrawlerCallback. The easiest way to achieve this is to extend the abstract base class AbstractCrawler located in bundle org.eclipse.smila.connectivity.framework. This class already contains handling for the crawler's Id and an OSGi service activate method. The crawler method getNext() is designed to return an array of DataReference objects, as this reduces the number of method calls. In general there are no restrictions on the size of the array; in fact, the size may vary between calls. This allows a crawler to internally implement a producer/consumer pattern. A crawler implementation that works as a plain iterator can enforce this by always returning an array of size one.
Javadoc: org.eclipse.smila.connectivity.framework.Crawler
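To illustrate the batched getNext() contract, here is a minimal, self-contained sketch of an iterator-style crawler. Note that it uses simplified stand-in types invented for this example (DataRef, BatchCrawler, FolderCrawler and its BATCH_SIZE constant) rather than the real SMILA interfaces and signatures:

import java.io.File;
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

/** Simplified stand-in for a reference to crawled data. */
class DataRef {
  final String id;
  final String hash;

  DataRef(final String id, final String hash) {
    this.id = id;
    this.hash = hash;
  }
}

/** Simplified stand-in for the Crawler contract: getNext() returns data in batches. */
interface BatchCrawler {
  /** Returns the next batch of references, or null when the data source is exhausted. */
  DataRef[] getNext();
}

/** A file-system-style crawler working as a plain iterator with a fixed batch size. */
class FolderCrawler implements BatchCrawler {
  private static final int BATCH_SIZE = 10;
  private final Queue<File> pending = new ArrayDeque<File>();

  FolderCrawler(final File rootFolder) {
    final File[] files = rootFolder.listFiles();
    if (files != null) {
      pending.addAll(Arrays.asList(files));
    }
  }

  @Override
  public DataRef[] getNext() {
    if (pending.isEmpty()) {
      return null; // no more data
    }
    // Return up to BATCH_SIZE references per call; an iterator-only crawler could
    // just as well always return arrays of size one.
    final int size = Math.min(BATCH_SIZE, pending.size());
    final DataRef[] batch = new DataRef[size];
    for (int i = 0; i < size; i++) {
      final File file = pending.remove();
      // Id from the path, hash token from the last-modified timestamp.
      batch[i] = new DataRef(file.getAbsolutePath(), Long.toString(file.lastModified()));
    }
    return batch;
  }
}

A producer/consumer implementation would instead fill the queue from a background thread and let getNext() drain whatever has accumulated since the last call.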
Architecture
Crawlers are managed and instantiated by the CrawlerController. The CrawlerController communicates with a crawler only via the Crawler interface. The crawler's getNext() method returns DataReference objects to the CrawlerController. DataReference is also an interface, implemented by class org.eclipse.smila.connectivity.framework.util.internal.DataReferenceImpl. A DataReference, as the name suggests, is only a reference to data provided by the crawler. This is mainly done for performance reasons: due to the use of DeltaIndexing it may not be necessary to transfer all the data from the crawler to the CrawlerController and on to the ConnectivityManager. Therefore a DataReference contains only the minimum data needed to perform DeltaIndexing: an Id and a hash token. To access the whole object it provides the method getRecord(), which returns a complete Record object containing Id, attributes, annotations and attachments. To create the Record object, the DataReference communicates with the crawler via the CrawlerCallback interface, as each DataReference holds a reference to the crawler that created it.
The following chart shows the crawler architecture and how data is shared with the CrawlerController:
Package org.eclipse.smila.connectivity.framework.util provides factory classes for crawlers to create Ids, hashes and DataReference objects. Further utility classes that allow easy realization of crawlers using an iterator or producer/consumer pattern are planned.
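The lazy-lookup idea behind DataReference and CrawlerCallback described above can be sketched as follows. Again, these are simplified stand-in types invented for this example, not the actual SMILA classes or signatures:

/** Stand-in for a full record with Id, attributes/annotations and content attachments. */
class SimpleRecord {
  String id;
  byte[] content;
}

/** Stand-in for the CrawlerCallback idea: lets a reference fetch the full data on demand. */
interface SimpleCrawlerCallback {
  SimpleRecord getRecord(String id);
}

/** Stand-in for the DataReference idea: carries only the Id and the hash token. */
class SimpleDataReference {
  private final String id;
  private final String hash;
  private final SimpleCrawlerCallback callback; // the crawler that created this reference

  SimpleDataReference(final String id, final String hash, final SimpleCrawlerCallback callback) {
    this.id = id;
    this.hash = hash;
    this.callback = callback;
  }

  String getId() {
    return id;
  }

  String getHash() {
    return hash;
  }

  /** The expensive part: only invoked when the full record is really needed. */
  SimpleRecord getRecord() {
    return callback.getRecord(id);
  }
}

Only when the full record is really needed does getRecord() trigger the callback to the originating crawler; references that delta indexing classifies as unchanged never cause the full data to be transferred.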
Configuration
A crawler is started with a specific, named configuration that defines what information is to be crawled (e.g. content, kinds of metadata) and where to find that data (e.g. a file system path or a JDBC connection string). See the documentation of each crawler for details on its configuration options.
Each crawler defines its own configuration format because different crawlers need different information to execute specific crawl jobs. For example, a JDBC crawler needs to know which database and table should be crawled and which columns should be returned.
Therefore the crawler developer defines a schema that contains all relevant information. This schema is based on a root schema provided by the SMILA framework, which declares the generic frame that every DataSourceConnectionConfig (a crawl task) sent to the SMILA framework has to follow. The root schema can be found in configuration/org.eclipse.smila.connectivity.framework.schema/schemas/RootDataSourceConnectionConfigSchema.xsd.
The root schema defines the following elements (an example configuration based on it is sketched after the list):
- DataSourceID
- A descriptive string that is used throughout the framework to separate and address information belonging to the same crawl job.
- SchemaID
- The SchemaID contains the full bundle name of the crawler (e.g. for the File System crawler: org.eclipse.smila.connectivity.framework.crawler.filesystem). The SMILA framework uses this information to locate the schema for validating the DataSourceConnectionConfig that is to be executed.
- DataConnectionID
- This tag specifies whether an agent or a crawler should be used. It contains one of the following tags:
- Agent
- Crawler
- The name that is used in these tags is the Service name of the agent/crawler.
- RecordBuffer
- Here you can specify settings to optimize the record transfer to the ConnectivityManager:
- Size - the number of records to be sent to the ConnectivityManager in one block. Default is 1.
- FlushInterval - a time interval in milliseconds after which the current elements of the RecordBuffer are sent to the ConnectivityManager. Default is 1000.
- DeltaIndexing
- Configuration options for delta indexing that are to be interpreted by the CrawlerController. The following values are supported:
- full - delta indexing is fully activated. Records are checked to determine whether they need to be updated, entries for new/updated records are added to the DeltaIndexingManager, and delta-delete is executed if no error occurred.
- additive - as full, but delta-delete is not executed.
- initial - for an initial import into an empty index, or for a new data source in an existing index, performance can be optimized by NOT checking whether a record needs to be updated (all records are known to be new) while still adding an entry to the DeltaIndexingManager for each record. This allows later runs using full or additive to make use of delta indexing information.
- disabled - delta indexing is fully disabled. No checks are done, no entries are created or updated, and no delta-delete is executed. Later runs cannot benefit from delta indexing information.
- CompoundHandling
- Configuration options for CompoundHandling. See CompoundManagement for details.
- Attributes
- Placeholder for each crawler's attribute definition.
Each crawler defines here which attributes it can return. An attribute is a specific piece of information about an entry in the data source that is crawled (e.g. in a file system an entry is a file, and attributes of a file are its size, content, etc.).
- Process
- Placeholder for tags that the crawler developer can define.
In this tag all information necessary to start a crawling process for a crawl task can be transferred, for example: start URLs or folders, and which entries should be crawled (e.g. queries, wildcards, includes/excludes).
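The following is a rough, illustrative sketch of what a DataSourceConnectionConfig for the File System crawler could look like, based on the root schema elements above. The concrete values, the crawler service name and everything inside the Attributes and Process tags are hypothetical placeholders here; see the File System crawler documentation for the actual tags:

<DataSourceConnectionConfig>
  <DataSourceID>file</DataSourceID>
  <SchemaID>org.eclipse.smila.connectivity.framework.crawler.filesystem</SchemaID>
  <DataConnectionID>
    <Crawler>FileSystemCrawler</Crawler>
  </DataConnectionID>
  <RecordBuffer>
    <Size>20</Size>
    <FlushInterval>1000</FlushInterval>
  </RecordBuffer>
  <DeltaIndexing>full</DeltaIndexing>
  <Attributes>
    <!-- crawler-specific attribute definitions, e.g. file name, size, content -->
  </Attributes>
  <Process>
    <!-- crawler-specific process settings, e.g. the start folder and include/exclude filters -->
  </Process>
</DataSourceConnectionConfig>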
Further Information:
- See the documentation of each crawler for its specific Attributes and Process tags
- How to implement a crawler
Crawler lifecycle
The CrawlerController manages the life cycle of the crawlers (e.g. start, stop, abort) and may instantiate multiple crawlers concurrently, even of the same type. This is realised by using OSGi ComponentFactories: a crawler does not automatically start as an OSGi service but only registers a crawler ComponentFactory with the CrawlerController. Via this ComponentFactory the CrawlerController can then instantiate crawlers on demand.
Here is a template for a crawler OSGi component definition:
<component name="%CRAWLER_TYPE%" immediate="false" factory="CrawlerFactory">
  <implementation class="%CRAWLER_IMPLEMENTATION_CLASS%" />
  <service>
    <provide interface="org.eclipse.smila.connectivity.framework.Crawler"/>
  </service>
</component>
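As a rough sketch (not the actual CrawlerController code), creating a crawler instance from such a registered factory boils down to the standard OSGi Declarative Services calls. The class and method names below are made up for illustration; only the ComponentFactory/ComponentInstance API and the Crawler interface come from OSGi and SMILA:

import org.osgi.service.component.ComponentFactory;
import org.osgi.service.component.ComponentInstance;

import org.eclipse.smila.connectivity.framework.Crawler;

public class CrawlerInstantiationSketch {

  /** Creates, uses and disposes one crawler instance via the registered factory. */
  public void runCrawl(final ComponentFactory crawlerFactory) {
    // newInstance() creates and activates a fresh crawler component instance.
    final ComponentInstance instance = crawlerFactory.newInstance(null);
    try {
      final Crawler crawler = (Crawler) instance.getInstance();
      // ... pass the DataSourceConnectionConfig to the crawler and call getNext()
      // until the data source is exhausted ...
    } finally {
      // Release the component instance once the crawl run is finished.
      instance.dispose();
    }
  }
}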
See also
More information about the individual crawlers (Web crawler, JDBC Database crawler, File System crawler) can be found on their respective documentation pages.