<!DOCTYPE html>
<!--
 | Generated by Apache Maven Doxia Site Renderer 1.8.1 from src/site/markdown/aql/externaldata.md at 2019-03-07
 | Rendered using Apache Maven Fluido Skin 1.7
-->
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <meta name="Date-Revision-yyyymmdd" content="20190307" />
    <meta http-equiv="Content-Language" content="en" />
    <title>AsterixDB &#x2013; Accessing External Data in AsterixDB</title>
    <link rel="stylesheet" href="../css/apache-maven-fluido-1.7.min.css" />
    <link rel="stylesheet" href="../css/site.css" />
    <link rel="stylesheet" href="../css/print.css" media="print" />
    <script type="text/javascript" src="../js/apache-maven-fluido-1.7.min.js"></script>

  </head>
  <body class="topBarDisabled">
    <div class="container-fluid">
      <div id="banner">
        <div class="pull-left"><a href=".././" id="bannerLeft"><img src="../images/asterixlogo.png" alt="AsterixDB"/></a></div>
        <div class="pull-right"></div>
        <div class="clear"><hr/></div>
      </div>

      <div id="breadcrumbs">
        <ul class="breadcrumb">
          <li id="publishDate">Last Published: 2019-03-07</li>
          <li id="projectVersion" class="pull-right">Version: 0.9.4</li>
          <li class="pull-right"><a href="../index.html" title="Documentation Home">Documentation Home</a></li>
        </ul>
      </div>
      <div class="row-fluid">
        <div id="leftColumn" class="span2">
          <div class="well sidebar-nav">
            <ul class="nav nav-list">
              <li class="nav-header">Get Started - Installation</li>
              <li><a href="../ncservice.html" title="Option 1: using NCService"><span class="none"></span>Option 1: using NCService</a></li>
              <li><a href="../ansible.html" title="Option 2: using Ansible"><span class="none"></span>Option 2: using Ansible</a></li>
              <li><a href="../aws.html" title="Option 3: using Amazon Web Services"><span class="none"></span>Option 3: using Amazon Web Services</a></li>
              <li class="nav-header">AsterixDB Primer</li>
              <li><a href="../sqlpp/primer-sqlpp.html" title="Option 1: using SQL++"><span class="none"></span>Option 1: using SQL++</a></li>
              <li><a href="../aql/primer.html" title="Option 2: using AQL"><span class="none"></span>Option 2: using AQL</a></li>
              <li class="nav-header">Data Model</li>
              <li><a href="../datamodel.html" title="The Asterix Data Model"><span class="none"></span>The Asterix Data Model</a></li>
              <li class="nav-header">Queries - SQL++</li>
              <li><a href="../sqlpp/manual.html" title="The SQL++ Query Language"><span class="none"></span>The SQL++ Query Language</a></li>
              <li><a href="../sqlpp/builtins.html" title="Builtin Functions"><span class="none"></span>Builtin Functions</a></li>
              <li class="nav-header">Queries - AQL</li>
              <li><a href="../aql/manual.html" title="The Asterix Query Language (AQL)"><span class="none"></span>The Asterix Query Language (AQL)</a></li>
              <li><a href="../aql/builtins.html" title="Builtin Functions"><span class="none"></span>Builtin Functions</a></li>
              <li class="nav-header">API/SDK</li>
              <li><a href="../api.html" title="HTTP API"><span class="none"></span>HTTP API</a></li>
              <li><a href="../csv.html" title="CSV Output"><span class="none"></span>CSV Output</a></li>
              <li class="nav-header">Advanced Features</li>
              <li><a href="../aql/fulltext.html" title="Support of Full-text Queries"><span class="none"></span>Support of Full-text Queries</a></li>
              <li class="active"><a href="#"><span class="none"></span>Accessing External Data</a></li>
              <li><a href="../feeds/tutorial.html" title="Support for Data Ingestion"><span class="none"></span>Support for Data Ingestion</a></li>
              <li><a href="../udf.html" title="User Defined Functions"><span class="none"></span>User Defined Functions</a></li>
              <li><a href="../aql/filters.html" title="Filter-Based LSM Index Acceleration"><span class="none"></span>Filter-Based LSM Index Acceleration</a></li>
              <li><a href="../aql/similarity.html" title="Support of Similarity Queries"><span class="none"></span>Support of Similarity Queries</a></li>
            </ul>
            <hr />
            <div id="poweredBy">
              <div class="clear"></div>
              <div class="clear"></div>
              <div class="clear"></div>
              <div class="clear"></div>
              <a href=".././" title="AsterixDB" class="builtBy"><img class="builtBy" alt="AsterixDB" src="../images/asterixlogo.png" /></a>
            </div>
          </div>
        </div>
        <div id="bodyColumn" class="span10" >
<!--
 ! Licensed to the Apache Software Foundation (ASF) under one
 ! or more contributor license agreements. See the NOTICE file
 ! distributed with this work for additional information
 ! regarding copyright ownership. The ASF licenses this file
 ! to you under the Apache License, Version 2.0 (the
 ! "License"); you may not use this file except in compliance
 ! with the License. You may obtain a copy of the License at
 !
 !   http://www.apache.org/licenses/LICENSE-2.0
 !
 ! Unless required by applicable law or agreed to in writing,
 ! software distributed under the License is distributed on an
 ! "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 ! KIND, either express or implied. See the License for the
 ! specific language governing permissions and limitations
 ! under the License.
 !-->
<h1>Accessing External Data in AsterixDB</h1>
<div class="section">
<h2><a name="Table_of_Contents"></a><a name="toc" id="toc">Table of Contents</a></h2>
<ul>

<li><a href="#Introduction">Introduction</a></li>
<li><a href="#IntroductionAdapterForAnExternalDataset">Adapter for an External Dataset</a></li>
<li><a href="#BuiltinAdapters">Builtin Adapters</a></li>
<li><a href="#IntroductionCreatingAnExternalDataset">Creating an External Dataset</a></li>
<li><a href="#WritingQueriesAgainstAnExternalDataset">Writing Queries against an External Dataset</a></li>
<li><a href="#BuildingIndexesOverExternalDatasets">Building Indexes over External Datasets</a></li>
<li><a href="#ExternalDataSnapshots">External Data Snapshots</a></li>
<li><a href="#FAQ">Frequently Asked Questions</a></li>
</ul></div>
<div class="section">
<h2><a name="Introduction_.5BBack_to_TOC.5D"></a><a name="Introduction" id="Introduction">Introduction</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>Data that needs to be processed by AsterixDB may reside outside AsterixDB storage. Examples include data files on a distributed file system such as HDFS or on the local file system of a machine that is part of an AsterixDB cluster. For AsterixDB to process such data, an end-user may create a regular dataset in AsterixDB (a.k.a. an internal dataset) and load the dataset with the data. AsterixDB also supports &#x2018;&#x2018;external datasets&#x2019;&#x2019; so that it is not necessary to &#x201c;load&#x201d; all data prior to using it. This also avoids creating multiple copies of data and the need to keep the copies in sync.</p>
<div class="section">
<h3><a name="Adapter_for_an_External_Dataset_.5BBack_to_TOC.5D"></a><a name="IntroductionAdapterForAnExternalDataset" id="IntroductionAdapterForAnExternalDataset">Adapter for an External Dataset</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h3>
<p>External data is accessed using wrappers (adapters in AsterixDB) that abstract away the mechanism of connecting with an external service, receiving its data, and transforming the data into ADM objects that are understood by AsterixDB. AsterixDB comes with built-in adapters for common storage systems such as HDFS and the local file system.</p></div>
<div class="section">
<h3><a name="Builtin_Adapters_.5BBack_to_TOC.5D"></a><a name="BuiltinAdapters" id="BuiltinAdapters">Builtin Adapters</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h3>
<p>AsterixDB offers a set of builtin adapters that can be used to query external data or to load data into an internal dataset using a load statement or a data feed. Each adapter requires specifying the <tt>format</tt> of the data in order to parse objects correctly. When using adapters with feeds, the parameter <tt>output-type</tt> must also be specified.</p>
<p>Following is a listing of existing built-in adapters and their configuration parameters:</p>
<ol style="list-style-type: decimal">

<li><b><i>localfs</i></b>: used for reading data stored in the local filesystem on one or more of the node controllers
<ul>

<li><tt>path</tt>: A fully qualified path of the form <tt>host://absolute_path</tt>. Use a comma separated list if there are multiple directories or files</li>
<li><tt>expression</tt>: A <a class="externalLink" href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html">regular expression</a> to match and filter against file names</li>
</ul>
</li>
<li><b><i>hdfs</i></b>: used for reading data stored in an HDFS instance
<ul>

<li><tt>path</tt>: A fully qualified path of the form <tt>host://absolute_path</tt>. Use a comma separated list if there are multiple directories or files</li>
<li><tt>expression</tt>: A <a class="externalLink" href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html">regular expression</a> to match and filter against file names</li>
<li><tt>input-format</tt>: A fully qualified name or an alias for a class of HDFS input format</li>
<li><tt>hdfs</tt>: The HDFS name node URL</li>
</ul>
</li>
<li><b><i>socket</i></b>: used for listening to connections that send data streams through one or more sockets
<ul>

<li><tt>sockets</tt>: comma separated list of sockets to listen to</li>
<li><tt>address-type</tt>: either IP if the list uses IP addresses, or NC if the list uses NC names</li>
</ul>
</li>
<li><b><i>socket_client</i></b>: used for connecting to one or more sockets and reading data streams
<ul>

<li><tt>sockets</tt>: comma separated list of sockets to connect to</li>
</ul>
</li>
<li><b><i>twitter_push</i></b>: used for establishing a connection and subscribing to a Twitter feed
<ul>

<li><tt>consumer.key</tt>: access parameter provided by Twitter OAuth</li>
<li><tt>consumer.secret</tt>: access parameter provided by Twitter OAuth</li>
<li><tt>access.token</tt>: access parameter provided by Twitter OAuth</li>
<li><tt>access.token.secret</tt>: access parameter provided by Twitter OAuth</li>
</ul>
</li>
<li><b><i>twitter_pull</i></b>: used for polling a Twitter feed for tweets based on a configurable frequency
<ul>

<li><tt>consumer.key</tt>: access parameter provided by Twitter OAuth</li>
<li><tt>consumer.secret</tt>: access parameter provided by Twitter OAuth</li>
<li><tt>access.token</tt>: access parameter provided by Twitter OAuth</li>
<li><tt>access.token.secret</tt>: access parameter provided by Twitter OAuth</li>
<li><tt>query</tt>: Twitter query string</li>
<li><tt>interval</tt>: poll interval in seconds</li>
</ul>
</li>
<li><b><i>rss</i></b>: used for reading RSS feeds
<ul>

<li><tt>url</tt>: a comma separated list of RSS URLs</li>
</ul>
</li>
</ol></div>
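<p>As a concrete illustration, the parameters above plug into a load statement as sketched below. This is a hypothetical sketch, not part of the original examples: the dataset name and the file path are placeholders, and the target must be an existing internal dataset.</p>

<div>
<div>
<pre class="source">    load dataset MyInternalDataset using localfs
    ((&quot;path&quot;=&quot;127.0.0.1:///home/joe/data.tbl&quot;),
     (&quot;format&quot;=&quot;delimited-text&quot;),
     (&quot;delimiter&quot;=&quot;|&quot;));
</pre></div></div>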
<div class="section">
<h3><a name="Creating_an_External_Dataset_.5BBack_to_TOC.5D"></a><a name="IntroductionCreatingAnExternalDataset" id="IntroductionCreatingAnExternalDataset">Creating an External Dataset</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h3>
<p>As an example, we consider the Lineitem dataset from the <a class="externalLink" href="http://www.openlinksw.com/dataspace/doc/dav/wiki/Main/VOSTPCHLinkedData/tpch.sql">TPCH schema</a>. We assume that you have successfully created an AsterixDB instance following the instructions at <a href="../install.html">Installing AsterixDB Using Managix</a>. <i>For this example, we assume a single machine setup.</i></p>
<p>Similar to a regular dataset, an external dataset has an associated datatype. We shall first create the datatype associated with each object in Lineitem data. Paste the following in the query textbox on the webpage at <a class="externalLink" href="http://127.0.0.1:19001">http://127.0.0.1:19001</a> and hit &#x2018;Execute&#x2019;.</p>

<div>
<div>
<pre class="source">    create dataverse ExternalFileDemo;
    use dataverse ExternalFileDemo;

    create type LineitemType as closed {
      l_orderkey: int32,
      l_partkey: int32,
      l_suppkey: int32,
      l_linenumber: int32,
      l_quantity: double,
      l_extendedprice: double,
      l_discount: double,
      l_tax: double,
      l_returnflag: string,
      l_linestatus: string,
      l_shipdate: string,
      l_commitdate: string,
      l_receiptdate: string,
      l_shipinstruct: string,
      l_shipmode: string,
      l_comment: string
    }
</pre></div></div>

<p>Here, we describe two scenarios.</p>
<div class="section">
<h4><a name="a1.29_Data_file_resides_on_the_local_file_system_of_a_host"></a>1) Data file resides on the local file system of a host</h4>
<p>Prerequisite: The host is a part of the ASTERIX cluster.</p>
<p>Earlier, we assumed a single machine ASTERIX setup. To satisfy the prerequisite, log in to the machine running ASTERIX.</p>
<ul>

<li>Download the <a href="../data/lineitem.tbl">data file</a> to an appropriate location. We denote this location by SOURCE_PATH.</li>
</ul>
<p>ASTERIX provides a built-in adapter for data residing on the local file system. The adapter is referred to by its alias, &#x2018;localfs&#x2019;. We create an external dataset named Lineitem and use the &#x2018;localfs&#x2019; adapter.</p>

<div>
<div>
<pre class="source">    create external dataset Lineitem(LineitemType)
    using localfs
</pre></div></div>
<p>Above, the definition is not complete, as we still need to provide a set of parameters that are specific to the source file.</p>

<table border="0" class="table table-striped">

<tr class="a">

<td> Parameter </td>

<td> Description </td>
</tr>

<tr class="b">

<td> path </td>

<td> A fully qualified path of the form <tt>host://&lt;absolute path&gt;</tt>.
     Use a comma separated list if there are multiple files,
     e.g., <tt>host1://&lt;absolute path&gt;</tt>, <tt>host2://&lt;absolute path&gt;</tt>, and so forth. </td>
</tr>

<tr class="a">

<td> format </td>

<td> The format of the content. Use 'adm' for data in ADM (ASTERIX Data Model) or <a class="externalLink" href="http://www.json.org/">JSON</a> format. Use 'delimited-text' if fields are separated by a delimiting character (e.g., CSV). </td></tr>

<tr class="b">

<td> delimiter </td>

<td> The delimiting character in the source file if the format is 'delimited-text' </td>
</tr>
</table>

<p>As we are using a single machine ASTERIX instance, we use 127.0.0.1 as the host in the path parameter. We <i>complete the create dataset statement</i> as follows.</p>

<div>
<div>
<pre class="source">    use dataverse ExternalFileDemo;

    create external dataset Lineitem(LineitemType)
    using localfs
    ((&quot;path&quot;=&quot;127.0.0.1://SOURCE_PATH&quot;),
     (&quot;format&quot;=&quot;delimited-text&quot;),
     (&quot;delimiter&quot;=&quot;|&quot;));
</pre></div></div>

<p>Please substitute SOURCE_PATH with the absolute path to the source file on the local file system.</p></div>
<div class="section">
<h4><a name="Common_source_of_error"></a>Common source of error</h4>
<p>An incorrect value for the path parameter will give the following exception message when the dataset is used in a query.</p>

<div>
<div>
<pre class="source">    org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: org.apache.hyracks.api.exceptions.HyracksDataException: org.apache.hyracks.api.exceptions.HyracksDataException: Job failed.
</pre></div></div>

<p>Verify the correctness of the path parameter provided to the localfs adapter. Note that the path parameter must be an absolute path to the data file. For example, if you saved your file in your home directory (assume it to be /home/joe), then the path value should be</p>

<div>
<div>
<pre class="source">    127.0.0.1:///home/joe/lineitem.tbl
</pre></div></div>

<p>In your web browser, navigate to 127.0.0.1:19001, paste the corrected create dataset statement into the query text box, and hit &#x2018;Execute&#x2019;.</p>
<p>Next, we move over to the section <a href="#WritingQueriesAgainstAnExternalDataset">Writing Queries against an External Dataset</a> and try a sample query against the external dataset.</p></div>
<div class="section">
<h4><a name="a2.29_Data_file_resides_on_an_HDFS_instance"></a>2) Data file resides on an HDFS instance</h4>
<p>Prerequisite: It is required that the NameNode and HDFS DataNodes are reachable from the hosts that form the AsterixDB cluster. AsterixDB provides a built-in adapter for data residing on HDFS. The HDFS adapter can be referred to (in AQL) by its alias, &#x2018;hdfs&#x2019;. We can create an external dataset named Lineitem and associate the HDFS adapter with it as follows:</p>

<div>
<div>
<pre class="source">    create external dataset Lineitem(LineitemType)
    using hdfs((&quot;hdfs&quot;=&quot;hdfs://localhost:54310&quot;),(&quot;path&quot;=&quot;/asterix/Lineitem.tbl&quot;),...,(&quot;input-format&quot;=&quot;rc-input-format&quot;));
</pre></div></div>

<p>The expected parameters are described below:</p>

<table border="0" class="table table-striped">

<tr class="a">

<td> Parameter </td>

<td> Description </td>
</tr>

<tr class="b">

<td> hdfs </td>

<td> The HDFS name node URL </td>
</tr>

<tr class="a">

<td> path </td>

<td> The absolute path to the source HDFS file or directory. Use a comma separated list if there are multiple files or directories. </td></tr>

<tr class="b">

<td> input-format </td>

<td> The associated input format. Use 'text-input-format' for text files, 'sequence-input-format' for Hadoop sequence files, 'rc-input-format' for Hadoop Object Columnar files, or a fully qualified name of an implementation of org.apache.hadoop.mapred.InputFormat. </td>
</tr>

<tr class="a">

<td> format </td>

<td> The format of the input content. Use 'adm' for text data in ADM (ASTERIX Data Model) or <a class="externalLink" href="http://www.json.org/">JSON</a> format, 'delimited-text' for text data with fields separated by a delimiting character, or 'binary' for other data.</td>
</tr>

<tr class="b">

<td> delimiter </td>

<td> The delimiting character in the source file if the format is 'delimited-text' </td>
</tr>

<tr class="a">

<td> parser </td>

<td> The parser used to parse HDFS objects if the format is 'binary'. Use 'hive-parser' for data deserialized by a Hive SerDe (AsterixDB can understand deserialized Hive objects) or a fully qualified class name of a user-implemented parser that implements the interface org.apache.asterix.external.input.InputParser. </td>
</tr>

<tr class="b">

<td> hive-serde </td>

<td> The Hive SerDe used to deserialize HDFS objects if the format is 'binary' and the parser is 'hive-parser'. Use a fully qualified name of a class implementing org.apache.hadoop.hive.serde2.SerDe. </td>
</tr>

<tr class="a">

<td> local-socket-path </td>

<td> The UNIX domain socket path if local short-circuit reads are enabled in the HDFS instance</td>
</tr>
</table>

<p><i>Difference between &#x2018;input-format&#x2019; and &#x2018;format&#x2019;</i></p>
<p><i>input-format</i>: Files stored under HDFS have an associated storage format. For example, TextInputFormat represents plain text files, SequenceFileInputFormat indicates binary compressed files, and RCFileInputFormat corresponds to objects stored in an object-columnar fashion. The parameter &#x2018;input-format&#x2019; is used to distinguish between these and other HDFS input formats.</p>
<p><i>format</i>: The parameter &#x2018;format&#x2019; refers to the type of the data contained in the file. For example, data contained in a file could be in JSON or ADM format, could be in delimited-text with fields separated by a delimiting character, or could be in binary format.</p>
<p>As an example, consider the <a href="../data/lineitem.tbl">data file</a>. The file is a text file with each line representing an object. The fields in each object are separated by the &#x2018;|&#x2019; character.</p>
<p>We assume the HDFS URL to be <a class="externalLink" href="hdfs://localhost:54310">hdfs://localhost:54310</a>. We further assume that the example data file is copied to HDFS at a path denoted by &#x201c;/asterix/Lineitem.tbl&#x201d;.</p>
<p>The complete set of parameters for our example file are as follows: ((&quot;hdfs&quot;=&quot;hdfs://localhost:54310&quot;),(&quot;path&quot;=&quot;/asterix/Lineitem.tbl&quot;),(&quot;input-format&quot;=&quot;text-input-format&quot;),(&quot;format&quot;=&quot;delimited-text&quot;),(&quot;delimiter&quot;=&quot;|&quot;))</p></div>
<div class="section">
<h4><a name="Using_the_Hive_Parser"></a>Using the Hive Parser</h4>
<p>If a user wants to create an external dataset that uses hive-parser to parse HDFS objects, it is important that the datatype associated with the dataset matches the actual data in the Hive table for the correct initialization of the Hive SerDe. Here is the conversion from the supported Hive data types to AsterixDB data types:</p>

<table border="0" class="table table-striped">

<tr class="a">

<td> Hive </td>

<td> AsterixDB </td>
</tr>

<tr class="b">

<td>BOOLEAN</td>

<td>Boolean</td>
</tr>

<tr class="a">

<td>BYTE(TINY INT)</td>

<td>Int8</td>
</tr>

<tr class="b">

<td>DOUBLE</td>

<td>Double</td>
</tr>

<tr class="a">

<td>FLOAT</td>

<td>Float</td>
</tr>

<tr class="b">

<td>INT</td>

<td>Int32</td>
</tr>

<tr class="a">

<td>LONG(BIG INT)</td>

<td>Int64</td>
</tr>

<tr class="b">

<td>SHORT(SMALL INT)</td>

<td>Int16</td>
</tr>

<tr class="a">

<td>STRING</td>

<td>String</td>
</tr>

<tr class="b">

<td>TIMESTAMP</td>

<td>Datetime</td>
</tr>

<tr class="a">

<td>DATE</td>

<td>Date</td>
</tr>

<tr class="b">

<td>STRUCT</td>

<td>Nested Object</td>
</tr>

<tr class="a">

<td>LIST</td>

<td>OrderedList or UnorderedList</td>
</tr>
</table>
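<p>For instance, per the mapping above, a Hive table with columns <tt>(id INT, name STRING, price DOUBLE)</tt> would correspond to a datatype like the following. The type and field names here are hypothetical, chosen only to illustrate the conversion:</p>

<div>
<div>
<pre class="source">    create type ProductType as closed {
      id: int32,
      name: string,
      price: double
    }
</pre></div></div>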
</div>
<div class="section">
<h4><a name="Examples_of_dataset_definitions_for_external_datasets"></a>Examples of dataset definitions for external datasets</h4>
<p><i>Example 1</i>: We can modify the create external dataset statement as follows:</p>

<div>
<div>
<pre class="source">    create external dataset Lineitem(LineitemType)
    using hdfs((&quot;hdfs&quot;=&quot;hdfs://localhost:54310&quot;),(&quot;path&quot;=&quot;/asterix/Lineitem.tbl&quot;),(&quot;input-format&quot;=&quot;text-input-format&quot;),(&quot;format&quot;=&quot;delimited-text&quot;),(&quot;delimiter&quot;=&quot;|&quot;));
</pre></div></div>

<p><i>Example 2</i>: Here, we create an external dataset of lineitem objects stored in sequence files that have content in ADM format:</p>

<div>
<div>
<pre class="source">    create external dataset Lineitem(LineitemType)
    using hdfs((&quot;hdfs&quot;=&quot;hdfs://localhost:54310&quot;),(&quot;path&quot;=&quot;/asterix/SequenceLineitem.tbl&quot;),(&quot;input-format&quot;=&quot;sequence-input-format&quot;),(&quot;format&quot;=&quot;adm&quot;));
</pre></div></div>

<p><i>Example 3</i>: Here, we create an external dataset of lineitem objects stored in object-columnar files that have content in binary format, parsed using hive-parser with the Hive ColumnarSerDe:</p>

<div>
<div>
<pre class="source">    create external dataset Lineitem(LineitemType)
    using hdfs((&quot;hdfs&quot;=&quot;hdfs://localhost:54310&quot;),(&quot;path&quot;=&quot;/asterix/RCLineitem.tbl&quot;),(&quot;input-format&quot;=&quot;rc-input-format&quot;),(&quot;format&quot;=&quot;binary&quot;),(&quot;parser&quot;=&quot;hive-parser&quot;),(&quot;hive-serde&quot;=&quot;org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe&quot;));
</pre></div></div>
</div></div></div>
<div class="section">
<h2><a name="Writing_Queries_against_an_External_Dataset_.5BBack_to_TOC.5D"></a><a name="WritingQueriesAgainstAnExternalDataset" id="WritingQueriesAgainstAnExternalDataset">Writing Queries against an External Dataset</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>You may write AQL queries against an external dataset in exactly the same way that queries are written against internal datasets. The following is an example of an AQL query that applies a filter and returns an ordered result.</p>

<div>
<div>
<pre class="source">    use dataverse ExternalFileDemo;

    for $c in dataset('Lineitem')
    where $c.l_orderkey &lt;= 3
    order by $c.l_orderkey, $c.l_linenumber
    return $c
</pre></div></div>
</div>
<div class="section">
<h2><a name="Building_Indexes_over_External_Datasets_.5BBack_to_TOC.5D"></a><a name="BuildingIndexesOverExternalDatasets" id="BuildingIndexesOverExternalDatasets">Building Indexes over External Datasets</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>AsterixDB supports building B-Tree and R-Tree indexes over static data stored in the Hadoop Distributed File System. To create an index, first create an external dataset over the data as follows:</p>

<div>
<div>
<pre class="source">    create external dataset Lineitem(LineitemType)
    using hdfs((&quot;hdfs&quot;=&quot;hdfs://localhost:54310&quot;),(&quot;path&quot;=&quot;/asterix/Lineitem.tbl&quot;),(&quot;input-format&quot;=&quot;text-input-format&quot;),(&quot;format&quot;=&quot;delimited-text&quot;),(&quot;delimiter&quot;=&quot;|&quot;));
</pre></div></div>

<p>You can then create a B-Tree index on this dataset instance as if the dataset were internally stored, as follows:</p>

<div>
<div>
<pre class="source">    create index PartkeyIdx on Lineitem(l_partkey);
</pre></div></div>
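<p>A query with a predicate on the indexed key, such as the following (the key value 5 is an arbitrary example), can then be answered using the index:</p>

<div>
<div>
<pre class="source">    use dataverse ExternalFileDemo;

    for $c in dataset('Lineitem')
    where $c.l_partkey = 5
    return $c
</pre></div></div>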

<p>You could also create an R-Tree index as follows:</p>

<div>
<div>
<pre class="source">    create index IndexName on DatasetName(attribute-name) type rtree;
</pre></div></div>

<p>After building the indexes, the AsterixDB query compiler can use them to access the dataset and answer queries in a more cost-effective manner. AsterixDB can read all HDFS input formats, but indexes over external datasets can currently be built only for HDFS datasets with &#x2018;text-input-format&#x2019;, &#x2018;sequence-input-format&#x2019;, or &#x2018;rc-input-format&#x2019;.</p></div>
<div class="section">
<h2><a name="External_Data_Snapshots_.5BBack_to_TOC.5D"></a><a name="ExternalDataSnapshots" id="ExternalDataSnapshots">External Data Snapshots</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>An external data snapshot represents the status of a dataset&#x2019;s files in HDFS at a point in time. Upon creating the first index over an external dataset, AsterixDB captures and stores a snapshot of the dataset in HDFS. Only objects present at snapshot capture time are indexed, and any additional indexes created afterwards will only contain data that was present at the snapshot capture time, thus preserving consistency across all indexes of a dataset. To update all indexes of an external dataset and advance the snapshot time to the present time, a user can use the refresh external dataset command as follows:</p>

<div>
<div>
<pre class="source">    refresh external dataset DatasetName;
</pre></div></div>

<p>After a refresh operation commits, all of the dataset&#x2019;s indexes will reflect the status of the data as of the new snapshot capture time.</p></div>
<div class="section">
<h2><a name="Frequently_Asked_Questions_.5BBack_to_TOC.5D"></a><a name="FAQ" id="FAQ">Frequently Asked Questions</a> <font size="4"><a href="#toc">[Back to TOC]</a></font></h2>
<p>Q. I added data to my dataset in HDFS. Will the dataset indexes in AsterixDB be updated automatically?</p>
<p>A. No, you must use the refresh external dataset statement to make the indexes aware of any changes in the dataset files in HDFS.</p>
<p>Q. Why doesn&#x2019;t AsterixDB update external indexes automatically?</p>
<p>A. Since external data is managed by other users/systems with mechanisms that are system dependent, AsterixDB has no way of knowing exactly when data is added to or deleted from HDFS, so the responsibility of refreshing indexes is left to the user. Alternatively, a user can use internal datasets, for which AsterixDB manages the data and its indexes.</p>
<p>Q. I created an index over an external dataset and then added some data to my HDFS dataset. Will a query that uses the index return different results from a query that doesn&#x2019;t use the index?</p>
<p>A. No. Query results are access-path independent, and the stored snapshot is used to determine which data are included when processing queries.</p>
<p>Q. I created an index over an external dataset and then deleted some of my dataset&#x2019;s files in HDFS. Will indexed data access still return the objects in the deleted files?</p>
<p>A. No. When AsterixDB accesses external data, with or without the use of indexes, it only accesses files present in the file system at runtime.</p>
<p>Q. I submitted a refresh command on an external dataset and a failure occurred. What has happened to my indexes?</p>
<p>A. External index refreshes are treated as a single transaction. In case of a failure, a rollback occurs, and the indexes are restored to their previous state. An error message with the cause of the failure is returned to the user.</p>
<p>Q. I was trying to refresh an external dataset while some queries were accessing the data using the index access method. Will the queries be affected by the refresh operation?</p>
<p>A. Queries have access to the external dataset index state as of the time they are submitted. A query submitted before a refresh commits will only access data under the snapshot taken before the refresh; queries submitted after the refresh commits will access data under the snapshot taken after the refresh.</p>
<p>Q. What happens when I try to create an additional index while a refresh operation is in progress, or vice versa?</p>
<p>A. The create index operation will wait until the refresh commits or aborts, and then the index will be built according to the external data snapshot at the end of the refresh operation. Creating indexes and refreshing datasets are mutually exclusive operations and will not be run in parallel. Multiple indexes can be created in parallel, but not multiple refresh operations.</p></div>
        </div>
      </div>
    </div>
    <hr/>
    <footer>
      <div class="container-fluid">
        <div class="row-fluid">
          <div class="row-fluid">Apache AsterixDB, AsterixDB, Apache, the Apache
            feather logo, and the Apache AsterixDB project logo are either
            registered trademarks or trademarks of The Apache Software
            Foundation in the United States and other countries.
            All other marks mentioned may be trademarks or registered
            trademarks of their respective owners.
          </div>
        </div>
      </div>
    </footer>
  </body>
</html>