FOR ARCHIVAL PURPOSES ONLY

The information in this wiki has not been maintained for some time. Some of the projects described here have since been deprecated.

In particular, the "Ushahidi Platform v3.x" section contains information that is often misleading. Many details about this version of Platform have changed since.

This website is a static extraction of the original Ushahidi wiki; as a result, functions such as logging in, commenting and searching do not work.

For more documentation, please refer to https://docs.ushahidi.com


Overview

The application is composed of the following components:

Content fetchers/crawlers: A set of background applications/daemons that fetch drops from the various sources, e.g. RSS, Twitter and email.

Metadata extractors: Once the drops have been fetched and structured by the content fetchers, these perform semantic extraction (named-entity extraction and subsequent geocoding of any place names encountered) and media extraction (links and images).

Drop queue processor: Keeps track of drops as they come in from the content fetchers and forwards them through the various pre-processing stages: semantic extraction, media extraction and rules processing. Once a drop has gone through all pre-processing stages, it is reassembled and posted to the API for final storage in the database. NOTE: While drops are undergoing pre-processing, they are held in a persistent RabbitMQ queue.

API: Posts and retrieves data to/from the database, handles user authentication and authorization, and updates the search index.

Search server: Handles all full-text and geo search functions; its index is periodically updated (by default, every 30s) with any new data.

UI (web) client: A web application for interacting with the API; fetches data and presents it to the user.
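The drop life cycle described above (fetch, run through each pre-processing stage, reassemble, post to the API) can be sketched in Python. All names here (the extractor functions, the drop fields) are illustrative stand-ins, not SwiftRiver's actual code; the real extractors are separate daemons coordinated through RabbitMQ rather than an in-process loop:

```python
# Illustrative sketch of the drop pre-processing flow described above.
# Function and field names are hypothetical.

def extract_semantics(drop):
    """Stand-in for named-entity extraction and geocoding of place names."""
    drop["places"] = [w for w in drop["content"].split() if w.istitle()]
    return drop

def extract_media(drop):
    """Stand-in for link and image extraction."""
    drop["links"] = [w for w in drop["content"].split() if w.startswith("http")]
    return drop

PRE_PROCESSING_STAGES = [extract_semantics, extract_media]

def process_drop(drop):
    """Run a fetched drop through every pre-processing stage, then
    return the reassembled drop, ready to be posted to the API."""
    for stage in PRE_PROCESSING_STAGES:
        drop = stage(drop)
    drop["processed"] = True
    return drop

drop = {"source": "rss",
        "content": "Flooding reported near Nairobi http://example.com/report"}
result = process_drop(drop)
```

In the real system each stage would consume the drop from the queue, annotate it, and hand it back; the sketch only shows the ordering and reassembly idea.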

 

The relationship/interaction between these components is shown in the diagram below:

SwiftRiver Architecture

Technical Architecture

The current version of SwiftRiver runs on a mixed stack primarily powered by Java, PHP and Python; version 1 of the software was based on a LAMP stack.

API: Java (Spring Framework)

UI client: PHP, JavaScript & CSS; Kohana MVC (v3.3) with Backbone.js for the UI

Content extractors: Python; custom applications not based on a particular application development framework

Dropqueue processor: Java (Spring Framework)

Database: MySQL Database Server

Search: Java (Apache Solr Server)

Messaging & queueing: RabbitMQ
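As a concrete illustration of how a client could query the Solr-backed search component listed above, here is a minimal sketch that builds a standard Solr select-handler URL using only the Python standard library. The host, core name ("drops") and field name ("content") are assumptions for illustration, not SwiftRiver's actual deployment or schema:

```python
from urllib.parse import urlencode

# Hypothetical Solr host and core; real deployments will differ.
SOLR_BASE = "http://localhost:8983/solr/drops/select"

def build_search_url(text, rows=10):
    """Build a Solr select-handler query URL for a full-text search.
    'content' is an assumed field name."""
    params = {
        "q": "content:%s" % text,  # standard Solr field:value query syntax
        "rows": rows,              # number of results to return
        "wt": "json",              # ask Solr for a JSON response
    }
    return SOLR_BASE + "?" + urlencode(params)

url = build_search_url("flooding")
```

Fetching such a URL (e.g. with urllib.request) would return Solr's JSON result set; the 30-second index refresh noted above means newly posted drops may take up to that long to appear in results.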