Change Java package from edu.uci.ics to org.apache
Change-Id: I2f01d2b5614e9e9c94fda4bf1294a8eba6a26c54
Reviewed-on: https://asterix-gerrit.ics.uci.edu/309
Reviewed-by: Till Westmann <tillw@apache.org>
Tested-by: Jenkins <jenkins@fulliautomatix.ics.uci.edu>
diff --git a/asterix-doc/src/site/markdown/aql/externaldata.md b/asterix-doc/src/site/markdown/aql/externaldata.md
index 3d46cc3..7b1cb3b 100644
--- a/asterix-doc/src/site/markdown/aql/externaldata.md
+++ b/asterix-doc/src/site/markdown/aql/externaldata.md
@@ -102,7 +102,7 @@
An incorrect value for the path parameter will give the following exception message when the dataset is used in a query.
- edu.uci.ics.hyracks.algebricks.common.exceptions.AlgebricksException: edu.uci.ics.hyracks.api.exceptions.HyracksDataException: edu.uci.ics.hyracks.api.exceptions.HyracksDataException: Job failed.
+ org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: org.apache.hyracks.api.exceptions.HyracksDataException: org.apache.hyracks.api.exceptions.HyracksDataException: Job failed.
Verify the correctness of the path parameter provided to the localfs adapter. Note that the path parameter must be an absolute path to the data file. For example, if you saved your file in your home directory (assume it to be /home/joe), then the path value should be
@@ -148,7 +148,7 @@
</tr>
<tr>
<td> parser </td>
- <td> The parser used to parse HDFS records if the format is 'binary'. Use 'hive- parser' for data deserialized by a Hive Serde (AsterixDB can understand deserialized Hive objects) or a fully qualified class name of user- implemented parser that implements the interface edu.uci.ics.asterix.external.input.InputParser. </td>
+ <td> The parser used to parse HDFS records if the format is 'binary'. Use 'hive-parser' for data deserialized by a Hive Serde (AsterixDB can understand deserialized Hive objects) or a fully qualified class name of a user-implemented parser that implements the interface org.apache.asterix.external.input.InputParser. </td>
</tr>
<tr>
<td> hive-serde </td>
diff --git a/asterix-doc/src/site/markdown/feeds/tutorial.md b/asterix-doc/src/site/markdown/feeds/tutorial.md
index c6ad73c..4894862 100644
--- a/asterix-doc/src/site/markdown/feeds/tutorial.md
+++ b/asterix-doc/src/site/markdown/feeds/tutorial.md
@@ -311,12 +311,12 @@
A Java UDF in AsterixDB is required to implement a prescribed interface. We shall next write a basic UDF that extracts the hashtags contained in the tweet's text and appends each to an unordered list. The list is added as an additional attribute to the tweet to form the augmented version - ProcessedTweet.
- package edu.uci.ics.asterix.external.library;
+ package org.apache.asterix.external.library;
- import edu.uci.ics.asterix.external.library.java.JObjects.JRecord;
- import edu.uci.ics.asterix.external.library.java.JObjects.JString;
- import edu.uci.ics.asterix.external.library.java.JObjects.JUnorderedList;
- import edu.uci.ics.asterix.external.library.java.JTypeTag;
+ import org.apache.asterix.external.library.java.JObjects.JRecord;
+ import org.apache.asterix.external.library.java.JObjects.JString;
+ import org.apache.asterix.external.library.java.JObjects.JUnorderedList;
+ import org.apache.asterix.external.library.java.JTypeTag;
public class HashTagsFunction implements IExternalScalarFunction {
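The rest of the class body is unchanged by this commit and therefore not shown in the hunk. For orientation only, a rough sketch of the initialize/evaluate pair described above follows; the IFunctionHelper calls (getArgument, getObject, getResultObject, setResult) and the field names ("text", "topics") are assumptions, not the tutorial's verbatim code.

    // Sketch only: helper method names and record field names are assumed.
    private JUnorderedList list;

    @Override
    public void initialize(IFunctionHelper functionHelper) {
        // reuse one list object across evaluations
        list = new JUnorderedList(functionHelper.getObject(JTypeTag.STRING));
    }

    @Override
    public void evaluate(IFunctionHelper functionHelper) throws Exception {
        list.clear();
        JRecord inputRecord = (JRecord) functionHelper.getArgument(0);
        JString text = (JString) inputRecord.getValueByName("text");

        // collect every whitespace-separated token that starts with '#'
        for (String token : text.getValue().split(" ")) {
            if (token.startsWith("#")) {
                JString hashTag = (JString) functionHelper.getObject(JTypeTag.STRING);
                hashTag.setValue(token);
                list.add(hashTag);
            }
        }

        // attach the hashtags as an extra attribute and emit the ProcessedTweet
        JRecord result = (JRecord) functionHelper.getResultObject();
        result.setField("topics", list);
        functionHelper.setResult(result);
    }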
@@ -377,7 +377,7 @@
<name>addFeatures</name>
<arguments>Tweet</arguments>
<return_type>ProcessedTweet</return_type>
- <definition>edu.uci.ics.asterix.external.library.AddHashTagsFactory
+ <definition>org.apache.asterix.external.library.AddHashTagsFactory
</definition>
</libraryFunction>
</libraryFunctions>
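The <definition> element above names a function factory class that this diff does not show. As a minimal illustration only, a factory under the new package naming would look roughly like the sketch below; the IFunctionFactory and getExternalFunction names are assumed from the external-library interfaces, and AddHashTagsFunction stands in for the function class the factory would create.

    package org.apache.asterix.external.library;

    // Illustrative sketch: interface and method names are assumed, not shown in this diff.
    public class AddHashTagsFactory implements IFunctionFactory {
        @Override
        public IExternalFunction getExternalFunction() {
            return new AddHashTagsFunction();
        }
    }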
diff --git a/asterix-doc/src/site/site.xml b/asterix-doc/src/site/site.xml
index 673f4ac..37a0e53 100644
--- a/asterix-doc/src/site/site.xml
+++ b/asterix-doc/src/site/site.xml
@@ -21,7 +21,7 @@
<bannerLeft>
<name>AsterixDB</name>
<src>images/asterixlogo.png</src>
- <href>http://asterixdb.ics.uci.edu/</href>
+ <href>http://asterixdb.apache.org/</href>
</bannerLeft>
<version position="right"/>