commit | 97557a81bd4f1feee3d05e60148d89f5fa00f052 | |
---|---|---|
author | Ali Alsuliman <ali.al.solaiman@gmail.com> | Tue Nov 05 14:09:17 2019 -0800 |
committer | Ali Alsuliman <ali.al.solaiman@gmail.com> | Thu Nov 07 19:12:50 2019 +0000 |
tree | 8736a226c424d8cfe663a9aaecbca78100f6a954 | |
parent | 43b223da08a43bd74ed4fb20642415e770c5fe32 | |
[ASTERIXDB-2672][API] Change the valid values for "format" request parameter

- user model changes: yes
- storage format changes: no
- interface changes: no

Details:
- Allowed values for the "format" request parameter: json, csv, adm.
- Recognizable format values in "Accept":
  application/x-adm
  application/json
  application/json;lossless=true/false
  text/csv
  text/csv;header=present/absent

Test framework changes:
- ResultExtractor: if the OutputFormat is json/lossless-json, print the "result" field of the response similar to how ADM would be printed, one JSON value per line (and using the same spacing).
- Changed some queries that use "EXPLAIN SELECT ..." and specify OutputFormat as JSON. These queries have the extension ".plans.sqlpp". "param optimized-logical-plan:string=true" is specified in those queries to print the logical plan in the "plans" field of the response.
- Added "// compareunorderedarray=true" for test queries that use .regexjson to compare one JSON value against another where the order of elements in a JSON array is not deterministic.
- TestExecutor: the OutputFormat.LOSSLESS_JSON and OutputFormat.CSV_HEADER formats are set in the "Accept" header. Otherwise, the desired format is set in the "format" request parameter as usual.
- TestHelper: changed "equalJson()" to allow comparing JSON arrays in two modes.
- Removed some test cases that used to set MIME types in the "format" request parameter since doing so is no longer allowed.

Change-Id: Ie3c7a35446322c2d97679e7e724b9778e2a4ba83
Reviewed-on: https://asterix-gerrit.ics.uci.edu/c/asterixdb/+/4043
Contrib: Jenkins <jenkins@fulliautomatix.ics.uci.edu>
Tested-by: Jenkins <jenkins@fulliautomatix.ics.uci.edu>
Integration-Tests: Jenkins <jenkins@fulliautomatix.ics.uci.edu>
Reviewed-by: Ali Alsuliman <ali.al.solaiman@gmail.com>
Reviewed-by: Murtadha Hubail <mhubail@apache.org>
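For illustration, here is a minimal sketch of how a client might request each output format over HTTP, assuming a locally running instance with the query service at http://localhost:19002/query/service (the endpoint and the trivial statements are assumptions, not part of this change):

```sh
# Sketch only; assumes the query service at localhost:19002/query/service.

# Ask for plain JSON via the "format" request parameter (allowed values: json, csv, adm):
curl -s --data-urlencode 'statement=SELECT VALUE 1;' \
     --data-urlencode 'format=json' \
     http://localhost:19002/query/service

# Variants such as lossless JSON or CSV with a header go in the "Accept" header instead:
curl -s -H 'Accept: application/json;lossless=true' \
     --data-urlencode 'statement=SELECT VALUE 1;' \
     http://localhost:19002/query/service

curl -s -H 'Accept: text/csv;header=present' \
     --data-urlencode 'statement=SELECT 1 AS x, "a" AS y;' \
     http://localhost:19002/query/service
```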
AsterixDB is a BDMS (Big Data Management System) with a rich feature set that sets it apart from other Big Data platforms. Its feature set makes it well-suited to modern needs such as web data warehousing and social data storage and analysis. AsterixDB has:
Feature | Description
---|---
Data model | A semistructured NoSQL style data model (ADM) resulting from extending JSON with object database ideas
Query languages | Two expressive and declarative query languages (SQL++ and AQL) that support a broad range of queries and analysis over semistructured data
Scalability | A parallel runtime query execution engine, Apache Hyracks, that has been scale-tested on up to 1000+ cores and 500+ disks
Native storage | Partitioned LSM-based data storage and indexing to support efficient ingestion and management of semistructured data
External storage | Support for query access to externally stored data (e.g., data in HDFS) as well as to data stored natively by AsterixDB
Data types | A rich set of primitive data types, including spatial and temporal data in addition to integer, floating point, and textual data
Indexing | Secondary indexing options that include B+ trees, R trees, and inverted keyword (exact and fuzzy) index types
Transactions | Basic transactional (concurrency and recovery) capabilities akin to those of a NoSQL store
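To make the data model, SQL++, and secondary indexing features a bit more concrete, here is a minimal, hypothetical sketch run against a local instance through the HTTP query service (the dataverse, type, dataset, and index names are invented for this example, and the endpoint assumes the sample cluster described in the setup steps below):

```sh
# Hypothetical example; demo, UserType, Users, and usersNameIdx are made-up names.
curl -s --data-urlencode 'statement=
  CREATE DATAVERSE demo IF NOT EXISTS;
  USE demo;
  CREATE TYPE UserType AS { uid: int, name: string };
  CREATE DATASET Users(UserType) PRIMARY KEY uid;
  CREATE INDEX usersNameIdx ON Users(name) TYPE BTREE;
  ' \
     http://localhost:19002/query/service

# Query the dataset with SQL++; an equality predicate on name may use the secondary index:
curl -s --data-urlencode 'statement=USE demo; SELECT VALUE u FROM Users u WHERE u.name = "Alice";' \
     http://localhost:19002/query/service
```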
Learn more about AsterixDB at its website: https://asterixdb.apache.org/
To build AsterixDB from source, you should have a platform with the following:
Instructions for building the master:
Checkout AsterixDB master:
$git clone https://github.com/apache/asterixdb.git
Build AsterixDB master:
$cd asterixdb
$mvn clean package -DskipTests
Here are steps to get AsterixDB running on your local machine:
Start a single-machine AsterixDB instance:
$cd asterixdb/asterix-server/target/asterix-server-*-binary-assembly/apache-asterixdb-*-SNAPSHOT
$./opt/local/bin/start-sample-cluster.sh
You are now good to go and can run queries in your browser at:
http://localhost:19001
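Alternatively, here is a minimal sketch of checking the instance from the command line, assuming the sample cluster's HTTP query service listens at http://localhost:19002/query/service (this port and path are assumptions, not stated above):

```sh
# Sketch only; assumes the query service at localhost:19002/query/service.
curl -s --data-urlencode 'statement=SELECT "Hello, AsterixDB!" AS greeting;' \
     http://localhost:19002/query/service
```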
Read more documentation to learn about the data model, the query languages, and how to create a cluster instance.