[CI] [Hotfix] Fix docs dead link error (#16708)
This commit is contained in:
parent 83e6a8c56e
commit 7e0095dca1
@@ -22,7 +22,7 @@ Make sure that your node version is 10+, docsite does not yet support versions h
 1. Run `npm install` in the root directory to install the dependencies.
-2. Run commands to collect resources 2.1.Run `export PROTOCOL_MODE=ssh` tells Git clone resource via SSH protocol instead of HTTPS protocol. 2.2.Run `./scripts/prepare_docs.sh` prepare all related resources, for more information you could see [how prepare script work](https://github.com/apache/dolphinscheduler-website/blob/master/HOW_PREPARE_WOKR.md).
+2. Run commands to collect resources 2.1.Run `export PROTOCOL_MODE=ssh` tells Git clone resource via SSH protocol instead of HTTPS protocol. 2.2.Run `./scripts/prepare_docs.sh` prepare all related resources, for more information you could see [how prepare script work](https://github.com/apache/dolphinscheduler-website/blob/master/HOW_PREPARE_WORK.md).
 3. Run `npm run start` in the root directory to start a local server, you will see the website in 'http://localhost:8080'.
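
The three steps in the hunk above boil down to a short shell session. The following is a minimal sketch of that documented flow, using exactly the commands quoted in the doc text:

```bash
# 1. Install the site dependencies (the doc requires Node.js 10+).
npm install

# 2. Collect resources: tell Git to clone via SSH rather than HTTPS,
#    then fetch all related docs resources.
export PROTOCOL_MODE=ssh
./scripts/prepare_docs.sh

# 3. Start a local dev server; the site is served at http://localhost:8080.
npm run start
```
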
@@ -17,7 +17,7 @@
 ## Native Supported
-- No, read section example in [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section to activate this datasource.
+- No, read section example in [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section to activate this datasource.
 - JDBC driver configuration reference document [athena-connect-with-jdbc](https://docs.amazonaws.cn/athena/latest/ug/connect-with-jdbc.html)
 - Driver download link [SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar](https://s3.cn-north-1.amazonaws.com.cn/athena-downloads-cn/drivers/JDBC/SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar)
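
For readers following the Athena hunk above, "activate this datasource" amounts to dropping the downloaded JDBC jar where the servers can load it. A hedged sketch (the `libs/` target directories follow the pattern described in datasource-setting.md and may differ by release; the download URL is the one linked in the hunk):

```bash
# Download the Athena JDBC driver linked above.
curl -LO https://s3.cn-north-1.amazonaws.com.cn/athena-downloads-cn/drivers/JDBC/SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar

# Copy it into each service that opens datasource connections,
# then restart those services so the driver is picked up.
cp AthenaJDBC42.jar api-server/libs/
cp AthenaJDBC42.jar worker-server/libs/
```
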
@@ -18,5 +18,5 @@
 ## Native Supported
-No, read section example in [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section to activate this datasource.
+No, read section example in [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section to activate this datasource.
@@ -18,6 +18,6 @@
 ## Native Supported
-No, read section example in [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section to activate
+No, read section example in [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section to activate
 this datasource.
@@ -18,5 +18,5 @@
 ## Native Supported
-No, read section example in [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section to activate this datasource.
+No, read section example in [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section to activate this datasource.
@@ -18,5 +18,5 @@
 ## Native Supported
-No, read section example in [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section to activate this datasource.
+No, read section example in [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section to activate this datasource.
@@ -19,6 +19,6 @@
 ## Native Supported
-No, you need to import the OceanBase jdbc driver [oceanbase-client](https://mvnrepository.com/artifact/com.oceanbase/oceanbase-client) first, refer to the section example in [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section.
+No, you need to import the OceanBase jdbc driver [oceanbase-client](https://mvnrepository.com/artifact/com.oceanbase/oceanbase-client) first, refer to the section example in [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section.
 The compatible mode of the datasource can be 'mysql' or 'oracle', if you only use OceanBase with 'mysql' mode, you can also treat OceanBase as MySQL and manage the datasource referring to [mysql datasource](mysql.md)
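
The OceanBase case is the same driver-import pattern as the other datasources above; the jar just comes from Maven Central instead of a direct download. A sketch under stated assumptions (the driver version is illustrative, and the `libs/` destination follows the datasource-setting.md convention):

```bash
# Fetch the oceanbase-client jar by its Maven coordinates
# (2.4.9 is an illustrative version; pick the one you need).
mvn dependency:copy \
    -Dartifact=com.oceanbase:oceanbase-client:2.4.9 \
    -DoutputDirectory=worker-server/libs

# Restart the services afterwards so the driver is picked up.
```
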
@@ -18,5 +18,5 @@
 ## Native Supported
-No, you need to import Mysql jdbc driver first, read section example in [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section to import Mysql JDBC Driver.
+No, you need to import Mysql jdbc driver first, read section example in [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section to import Mysql JDBC Driver.
@@ -19,7 +19,7 @@ Start all services of dolphinscheduler according to your deployment method. If y
 ### Database Configuration
-Initializing the workflow demo needs to store metabase in other database like MySQL or PostgreSQL, they have to change some configuration. Follow the instructions in [datasource-setting](howto/datasource-setting.md) `Standalone Switching Metadata Database Configuration` section to create and initialize database.
+Initializing the workflow demo needs to store metabase in other database like MySQL or PostgreSQL, they have to change some configuration. Follow the instructions in [datasource-setting](installation/datasource-setting.md) `Standalone Switching Metadata Database Configuration` section to create and initialize database.
 ### Tenant Configuration
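
To make the standalone switch concrete: the linked section configures the metadata store through Spring datasource settings. A hedged sketch of pointing standalone at MySQL via environment variables (variable names follow the pattern used by DolphinScheduler's Spring configuration and may differ by release; host, user, and password are placeholders):

```bash
# Point the standalone server at an external MySQL metadata database,
# then start it.
export DATABASE=mysql
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_URL="jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8"
export SPRING_DATASOURCE_USERNAME=ds_user        # placeholder
export SPRING_DATASOURCE_PASSWORD=ds_password    # placeholder
bash ./bin/dolphinscheduler-daemon.sh start standalone-server
```
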
@@ -4,9 +4,9 @@
 Spark task type for executing Spark application. When executing the Spark task, the worker will submits a job to the Spark cluster by following commands:
-(1) `spark submit` method to submit tasks. See [spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit) for more details.
+(1) `spark submit` method to submit tasks. See [spark-submit](https://archive.apache.org/dist/spark/docs/3.2.1/#running-the-examples-and-shell) for more details.
-(2) `spark sql` method to submit tasks. See [spark sql](https://spark.apache.org/docs/3.2.1/sql-ref-syntax.html) for more details.
+(2) `spark sql` method to submit tasks. See [spark sql](https://archive.apache.org/dist/spark/docs/3.2.1/api/sql/index.html) for more details.
 ## Create Task
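
The two submission methods named in this hunk correspond to ordinary Spark CLI invocations. A minimal sketch (cluster manager, example class, jar path, and SQL file are illustrative placeholders):

```bash
# (1) spark submit: launch an application jar on the cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  ./examples/jars/spark-examples_2.12-3.2.1.jar 10

# (2) spark sql: execute a SQL script.
spark-sql -f ./queries.sql
```
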
@@ -6,7 +6,7 @@ SQL task type used to connect to databases and execute SQL.
 ## Create DataSource
-Refer to [datasource-setting](../howto/datasource-setting.md) `DataSource Center` section
+Refer to [datasource-setting](../installation/datasource-setting.md) `DataSource Center` section
 ## Create Task
@@ -22,7 +22,7 @@ The DolphinScheduler website is generated by [docsite](https://github.com/chengshiwen/docsite-ext)
 1. Run `npm install` in the root directory to install the dependencies.
-2. Run commands to collect resources: 2.1. Run `export PROTOCOL_MODE=ssh` to tell Git to clone resources via the SSH protocol instead of HTTPS. 2.2. Run `./scripts/prepare_docs.sh` to prepare all related resources; for more information see [how prepare script work](https://github.com/apache/dolphinscheduler-website/blob/master/HOW_PREPARE_WOKR.md).
+2. Run commands to collect resources: 2.1. Run `export PROTOCOL_MODE=ssh` to tell Git to clone resources via the SSH protocol instead of HTTPS. 2.2. Run `./scripts/prepare_docs.sh` to prepare all related resources; for more information see [how prepare script work](https://github.com/apache/dolphinscheduler-website/blob/master/HOW_PREPARE_WORK.md).
 3. Run `npm run start` in the root directory to start a local server; it will run at http://localhost:8080.
@@ -13,7 +13,7 @@
 ## Native Supported
-- No, before use please refer to the "DataSource Center" section in [datasource configuration](../howto/datasource-setting.md) to activate this datasource.
+- No, before use please refer to the "DataSource Center" section in [datasource configuration](../installation/datasource-setting.md) to activate this datasource.
 - JDBC driver configuration reference document [athena-connect-with-jdbc](https://docs.amazonaws.cn/athena/latest/ug/connect-with-jdbc.html)
 - Driver download link [SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar](https://s3.cn-north-1.amazonaws.com.cn/athena-downloads-cn/drivers/JDBC/SimbaAthenaJDBC-2.0.31.1000/AthenaJDBC42.jar)
@@ -14,4 +14,4 @@
 ## Native Supported
-No, before use please refer to the "DataSource Center" section in [datasource configuration](../howto/datasource-setting.md) to activate this datasource.
+No, before use please refer to the "DataSource Center" section in [datasource configuration](../installation/datasource-setting.md) to activate this datasource.
@@ -14,4 +14,4 @@
 ## Native Supported
-No, before use please refer to the "DataSource Center" section in [datasource configuration](../howto/datasource-setting.md) to activate this datasource.
+No, before use please refer to the "DataSource Center" section in [datasource configuration](../installation/datasource-setting.md) to activate this datasource.
@@ -14,4 +14,4 @@
 ## Native Supported
-No, before use please refer to the "DataSource Center" section in [datasource configuration](../howto/datasource-setting.md) to activate this datasource.
+No, before use please refer to the "DataSource Center" section in [datasource configuration](../installation/datasource-setting.md) to activate this datasource.
@@ -14,4 +14,4 @@
 ## Native Supported
-No, before use please refer to the "DataSource Center" section in [datasource configuration](../howto/datasource-setting.md) to activate this datasource.
+No, before use please refer to the "DataSource Center" section in [datasource configuration](../installation/datasource-setting.md) to activate this datasource.
@@ -15,7 +15,7 @@
 ## Native Supported
-No, you need to import the OceanBase JDBC driver [oceanbase-client](https://mvnrepository.com/artifact/com.oceanbase/oceanbase-client) first; please refer to the "DataSource Center" section in [datasource configuration](../howto/datasource-setting.md).
+No, you need to import the OceanBase JDBC driver [oceanbase-client](https://mvnrepository.com/artifact/com.oceanbase/oceanbase-client) first; please refer to the "DataSource Center" section in [datasource configuration](../installation/datasource-setting.md).
 The compatible mode of the OceanBase datasource can be 'mysql' or 'oracle'. If you only use the mysql mode, you can also treat the OceanBase datasource as a MySQL datasource; refer to [MySQL datasource](mysql.md)
@@ -14,4 +14,4 @@
 ## Native Supported
-No, StarRocks uses the Mysql JDBC Driver; before use please refer to the "DataSource Center" section in [datasource configuration](../howto/datasource-setting.md) to activate the Mysql JDBC Driver.
+No, StarRocks uses the Mysql JDBC Driver; before use please refer to the "DataSource Center" section in [datasource configuration](../installation/datasource-setting.md) to activate the Mysql JDBC Driver.
@@ -19,7 +19,7 @@
 ### Database Configuration
 Initializing the workflow demo service needs to use another database such as MySQL or PostgreSQL as its metadata store, so some configuration must be changed.
-Please refer to the `Standalone 切换元数据库` (Standalone switching metadata database) section in [datasource configuration](howto/datasource-setting.md) to create and initialize the database, then run the demo service startup script.
+Please refer to the `Standalone 切换元数据库` (Standalone switching metadata database) section in [datasource configuration](installation/datasource-setting.md) to create and initialize the database, then run the demo service startup script.
 ### Tenant Configuration
@@ -4,9 +4,9 @@
 The Spark task type is used to execute Spark applications. For the Spark node, the worker supports two different types of spark commands for submitting tasks:
-(1) Submit tasks via `spark submit`. See [spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit) for more details.
+(1) Submit tasks via `spark submit`. See [spark-submit](https://archive.apache.org/dist/spark/docs/3.2.1/#running-the-examples-and-shell) for more details.
-(2) Submit tasks via `spark sql`. See [spark sql](https://spark.apache.org/docs/3.2.1/sql-ref-syntax.html) for more details.
+(2) Submit tasks via `spark sql`. See [spark sql](https://archive.apache.org/dist/spark/docs/3.2.1/api/sql/index.html) for more details.
 ## Create Task
@@ -6,7 +6,7 @@ SQL task type, used to connect to databases and execute the corresponding SQL.
 ## Create DataSource
-Refer to the `DataSource Center` section in [datasource configuration](../howto/datasource-setting.md).
+Refer to the `DataSource Center` section in [datasource configuration](../installation/datasource-setting.md).
 ## Create Task