Mirror of https://github.com/ggogel/seafile-containerized.git (synced 2024-11-16 17:05:32 +00:00)

# Merge branch 'cluster'

This commit is contained in: 35222d95a2
```diff
@@ -3,6 +3,3 @@
 *.swp
 .DS_Store
 *.pyc
-
-shared/*
-.snapshot/*
```
#### `.github/stale.yml` (vendored, 17 lines)

```diff
@@ -1,17 +0,0 @@
-# Number of days of inactivity before an issue becomes stale
-daysUntilStale: 90
-# Number of days of inactivity before a stale issue is closed
-daysUntilClose: 7
-# Issues with these labels will never be considered stale
-exemptLabels:
-  - pinned
-  - security
-# Label to use when marking an issue as stale
-staleLabel: wontfix
-# Comment to post when marking an issue as stale. Set to `false` to disable
-markComment: >
-  This issue has been automatically marked as stale because it has not had
-  recent activity. It will be closed if no further activity occurs. Thank you
-  for your contributions.
-# Comment to post when closing a stale issue. Set to `false` to disable
-closeComment: false
```
#### `.gitignore` (vendored, 5 lines)

```diff
@@ -3,8 +3,3 @@
 *.swp
 .DS_Store
 *.pyc
-
-bootstrap/*
-shared/*
-image/seafile/scripts
-image/pro-seafile/scripts
```
#### `LICENSE.txt` (13 lines)

```diff
@@ -1,13 +0,0 @@
-Copyright (c) 2016 Seafile Ltd.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
```
#### `MAINT.md` (42 lines)

```diff
@@ -1,42 +0,0 @@
-## For Project Maintainer: How to update seafile-docker when a new version is released
-
-Imagine the previous version is 6.0.5 and we have released 6.0.7. Here are the steps to do the upgrade.
-
-* Switch to the branch "master"
-  ```sh
-  git branch -f master origin/master
-  git checkout master
-  ```
-* Update the version number in all the files/scripts from "6.0.5" to "6.0.7" and push it to github, then wait for travis ci (https://travis-ci.org/haiwen/seafile-docker/builds) to pass
-  ```sh
-  git push origin master
-  ```
-
-* Normal
-
-  * Create a tag "seafile-base" and push it to github. Wait for travis ci to finish: this time it would push the image seafileltd/base:16.04 to docker hub since it's triggered by a tag.
-    ```sh
-    git tag seafile-base
-    git push origin seafile-base
-    ```
-
-  * Create a tag "v6.0.7" and push it to github. Wait for travis ci to finish: this time it would push the image seafileltd/seafile:6.0.7 to docker hub since it's triggered by a tag.
-    ```sh
-    git tag v6.0.7
-    git push origin v6.0.7
-    ```
-  * Ensure the new image is available in https://hub.docker.com/r/seafileltd/seafile/tags/
-
-* Pro
-
-  * Create a tag "seafile-pro-base" and push it to github. Wait for travis ci to finish: this time it would push the image ${registry}/seafileltd/pro-base:16.04 to the docker registry since it's triggered by a tag.
-    ```sh
-    git tag seafile-pro-base
-    git push origin seafile-pro-base
-    ```
-
-  * Create a tag "v6.0.7-pro" and push it to github. Wait for travis ci to finish: this time it would push the image ${registry}/seafileltd/seafile-pro:6.0.7 to the docker registry since it's triggered by a tag.
-    ```sh
-    git tag v6.0.7-pro
-    git push origin v6.0.7-pro
-    ```
```
#### `README.md` (153 lines)

```diff
@@ -1,153 +0,0 @@
-[![Build Status](https://secure.travis-ci.org/haiwen/seafile-docker.png?branch=master)](http://travis-ci.org/haiwen/seafile-docker)
-
-## About
-
-- [Docker](https://docker.com/) is an open source project to pack, ship and run any Linux application in a lighter weight, faster container than a traditional virtual machine.
-
-- Docker makes it much easier to deploy [a Seafile server](https://github.com/haiwen/seafile) on your servers and keep it updated.
-
-- The base image configures Seafile with the Seafile team's recommended optimal defaults.
-
-If you are not familiar with docker commands, please refer to the [docker documentation](https://docs.docker.com/engine/reference/commandline/cli/).
-
-## For seafile 7.x.x
-
-Starting with 7.0, we have adjusted the seafile-docker image to use multiple containers. The old image ran MariaDB-Server and Memcached in the same container as the Seafile server. Now the MariaDB-Server and Memcached services are stripped from the Seafile image and run in their own containers.
-
-If you plan to deploy seafile 7.0, you should refer to the [Deploy Documentation](https://download.seafile.com/published/seafile-manual/docker/deploy%20seafile%20with%20docker.md).
-
-If you plan to upgrade 6.3 to 7.0, you can refer to the [Upgrade Documentation](https://download.seafile.com/published/seafile-manual/docker/6.3%20upgrade%20to%207.0.md).
-
-## For seafile 6.x.x
-
-### Getting Started
-
-To run the seafile server container:
-
-```sh
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  seafileltd/seafile:latest
-```
-
-Wait a few minutes for the first-time initialization, then visit `http://seafile.example.com` to open the Seafile Web UI.
-
-This command mounts the folder `/opt/seafile-data` on the local server into the docker instance. You can find logs and other data under this folder.
-
-### More configuration Options
-
-#### Custom Admin Username and Password
-
-The default admin account is `me@example.com` and the password is `asecret`. You can use a different password by setting the container's environment variables, e.g.:
-
-```sh
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -e SEAFILE_ADMIN_EMAIL=me@example.com \
-  -e SEAFILE_ADMIN_PASSWORD=a_very_secret_password \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  seafileltd/seafile:latest
-```
-
-If you forget the admin password, you can add a new admin account and then go to the sysadmin panel to reset the user password.
-
-#### Let's Encrypt SSL certificate
-
-If you set `SEAFILE_SERVER_LETSENCRYPT` to `true`, the container will request a letsencrypt-signed SSL certificate for you automatically, e.g.:
-
-```
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_LETSENCRYPT=true \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -e SEAFILE_ADMIN_EMAIL=me@example.com \
-  -e SEAFILE_ADMIN_PASSWORD=a_very_secret_password \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  -p 443:443 \
-  seafileltd/seafile:latest
-```
-
-If you want to use your own SSL certificate:
-- Create a folder `/opt/seafile-data/ssl`, and put your certificate and private key under the ssl directory.
-- Assuming your site name is `seafile.example.com`, your certificate must be named `seafile.example.com.crt`, and the private key must be named `seafile.example.com.key`.
-
-#### Modify Seafile Server Configurations
-
-The config files are under `shared/seafile/conf`. You can modify the configurations according to the [Seafile manual](https://manual.seafile.com/).
-
-After modification, you need to restart the container:
-
-```
-docker restart seafile
-```
-
-#### Find logs
-
-The seafile logs are under `shared/logs/seafile` in the docker, or `/opt/seafile-data/logs/seafile` on the server that runs docker.
-
-The system logs are under `shared/logs/var-log`, or `/opt/seafile-data/logs/var-log` on the server that runs docker.
-
-#### Add a new Admin
-
-Ensure the container is running, then enter this command:
-
-```
-docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh
-```
-
-Enter the username and password according to the prompts. You now have a new admin account.
-
-### Directory Structure
-
-#### `/shared`
-
-Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various logfiles and the upload directory outside. This allows you to rebuild containers easily without losing important information.
-
-- /shared/db: This is the data directory for the mysql server.
-- /shared/seafile: This is the directory for seafile server configuration and data.
-- /shared/logs: This is the directory for logs.
-- /shared/logs/var-log: This directory is mounted as `/var/log` inside the container. For example, you can find the nginx logs in `shared/logs/var-log/nginx/`.
-- /shared/logs/seafile: This directory contains the log files of seafile server processes. For example, you can find seaf-server logs in `shared/logs/seafile/seafile.log`.
-- /shared/ssl: This is the directory for certificates; it does not exist by default.
-
-### Upgrading Seafile Server
-
-To upgrade to the latest version of seafile server:
-
-```sh
-docker pull seafileltd/seafile:latest
-docker rm -f seafile
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_LETSENCRYPT=true \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -e SEAFILE_ADMIN_EMAIL=me@example.com \
-  -e SEAFILE_ADMIN_PASSWORD=a_very_secret_password \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  -p 443:443 \
-  seafileltd/seafile:latest
-```
-
-If you are one of the early users who used the `launcher` script, you should refer to the [upgrade from old format](https://github.com/haiwen/seafile-docker/blob/master/upgrade_from_old_format.md) document.
-
-### Garbage Collection
-
-When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a ['garbage collection'](https://download.seafile.com/published/seafile-manual/maintain/seafile_gc.md) process to be run, which detects which blocks are no longer used and purges them. (NOTE: for technical reasons, the GC process does not guarantee that _every single_ orphan block will be deleted.)
-
-The required scripts can be found in the `/scripts` folder of the docker container. To perform garbage collection, simply run `docker exec seafile /scripts/gc.sh`. For the community edition, this process will stop the seafile server, but it is relatively quick and the seafile server will start automatically once it has finished. The Professional edition supports online garbage collection.
-
-### Troubleshooting
-
-You can run docker commands like "docker logs" or "docker exec" to find errors.
-
-```sh
-docker logs -f seafile
-# or
-docker exec -it seafile bash
-```
```
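The removed README above triggers garbage collection manually with `docker exec seafile /scripts/gc.sh`. A sketch of scheduling that from the host via cron; the schedule and log path are assumptions, not part of the original, and on the community edition GC stops the Seafile server, so pick a quiet window:

```sh
# Hypothetical example: run Seafile GC every Sunday at 03:00 and keep a log.
# Appends one entry to the invoking user's crontab; paths are assumptions.
( crontab -l 2>/dev/null; \
  echo '0 3 * * 0 docker exec seafile /scripts/gc.sh >> /var/log/seafile-gc.log 2>&1' ) | crontab -
```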
#### `README.pro.md` (160 lines)

```diff
@@ -1,160 +0,0 @@
-[![Build Status](https://secure.travis-ci.org/haiwen/seafile-docker.png?branch=master)](http://travis-ci.org/haiwen/seafile-docker)
-
-### About
-
-- [Docker](https://docker.com/) is an open source project to pack, ship and run any Linux application in a lighter weight, faster container than a traditional virtual machine.
-
-- Docker makes it much easier to deploy [a Seafile server](https://github.com/haiwen/seafile) on your servers and keep it updated.
-
-- The base image configures Seafile with the Seafile team's recommended optimal defaults.
-
-If you are not familiar with docker commands, please refer to the [docker documentation](https://docs.docker.com/engine/reference/commandline/cli/).
-
-### Getting Started
-
-To log in to the seafile private registry:
-
-```sh
-docker login {pro-host}
-```
-
-You can see the private registry information on the [customer center](https://customer.seafile.com/downloads/).
-
-To run the seafile server container:
-
-```sh
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  {pro-host}/seafileltd/seafile-pro:latest
-```
-
-Wait a few minutes for the first-time initialization, then visit `http://seafile.example.com` to open the Seafile Web UI.
-
-This command mounts the folder `/opt/seafile-data` on the local server into the docker instance. You can find logs and other data under this folder.
-
-### Put your license file
-
-If you have a `seafile-license.txt` license file, simply put it in the folder `/opt/seafile-data/seafile/`. On your host machine:
-
-```sh
-mkdir -p /opt/seafile-data/seafile/
-cp /path/to/seafile-license.txt /opt/seafile-data/seafile/
-```
-
-Then restart the container.
-
-```sh
-docker restart seafile
-```
-
-### More configuration Options
-
-#### Custom Admin Username and Password
-
-The default admin account is `me@example.com` and the password is `asecret`. You can use a different password by setting the container's environment variables, e.g.:
-
-```sh
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -e SEAFILE_ADMIN_EMAIL=me@example.com \
-  -e SEAFILE_ADMIN_PASSWORD=a_very_secret_password \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  {pro-host}/seafileltd/seafile-pro:latest
-```
-
-If you forget the admin password, you can add a new admin account and then go to the sysadmin panel to reset the user password.
-
-#### Let's Encrypt SSL certificate
-
-If you set `SEAFILE_SERVER_LETSENCRYPT` to `true`, the container will request a letsencrypt-signed SSL certificate for you automatically, e.g.:
-
-```
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_LETSENCRYPT=true \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -e SEAFILE_ADMIN_EMAIL=me@example.com \
-  -e SEAFILE_ADMIN_PASSWORD=a_very_secret_password \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  -p 443:443 \
-  {pro-host}/seafileltd/seafile-pro:latest
-```
-
-If you want to use your own SSL certificate:
-- Create a folder `/opt/seafile-data/ssl`, and put your certificate and private key under the ssl directory.
-- Assuming your site name is `seafile.example.com`, your certificate must be named `seafile.example.com.crt`, and the private key must be named `seafile.example.com.key`.
-
-#### Modify Seafile Server Configurations
-
-The config files are under `shared/seafile/conf`. You can modify the configurations according to the [Seafile manual](https://manual.seafile.com/).
-
-After modification, you need to restart the container:
-
-```
-docker restart seafile
-```
-
-#### Find logs
-
-The seafile logs are under `/shared/logs/seafile` in the docker, or `/opt/seafile-data/logs/seafile` on the server that runs docker.
-
-The system logs are under `/shared/logs/var-log`, or `/opt/seafile-data/logs/var-log` on the server that runs docker.
-
-#### Add a new Admin
-
-Ensure the container is running, then enter this command:
-
-```
-docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh
-```
-
-Enter the username and password according to the prompts. You now have a new admin account.
-
-### Directory Structure
-
-#### `/shared`
-
-Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various logfiles and the upload directory outside. This allows you to rebuild containers easily without losing important information.
-
-- /shared/db: This is the data directory for the mysql server.
-- /shared/seafile: This is the directory for seafile server configuration and data.
-- /shared/logs: This is the directory for logs.
-- /shared/logs/var-log: This directory is mounted as `/var/log` inside the container. For example, you can find the nginx logs in `shared/logs/var-log/nginx/`.
-- /shared/logs/seafile: This directory contains the log files of seafile server processes. For example, you can find seaf-server logs in `shared/logs/seafile/seafile.log`.
-- /shared/ssl: This is the directory for certificates; it does not exist by default.
-
-### Upgrading Seafile Server
-
-To upgrade to the latest version of seafile server:
-
-```sh
-docker pull {pro-host}/seafileltd/seafile-pro:latest
-docker rm -f seafile
-docker run -d --name seafile \
-  -e SEAFILE_SERVER_LETSENCRYPT=true \
-  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
-  -e SEAFILE_ADMIN_EMAIL=me@example.com \
-  -e SEAFILE_ADMIN_PASSWORD=a_very_secret_password \
-  -v /opt/seafile-data:/shared \
-  -p 80:80 \
-  -p 443:443 \
-  {pro-host}/seafileltd/seafile-pro:latest
-```
-
-If you are one of the early users who used the `launcher` script, you should refer to the [upgrade from old format](https://github.com/haiwen/seafile-docker/blob/master/upgrade_from_old_format.md) document.
-
-### Troubleshooting
-
-You can run docker commands like "docker logs" or "docker exec" to find errors.
-
-```sh
-docker logs -f seafile
-# or
-docker exec -it seafile bash
-```
```
```diff
@@ -1,64 +0,0 @@
-# Getting Started On Windows
-
-## System Requirements
-
-Only Windows 10 is supported.
-
-At this time (2016/12/5) windows server 2016 doesn't support running linux containers yet. We'll add support for windows server 2016 once it supports running linux containers.
-
-## Install the Requirements
-
-### Install Docker for Windows
-
-Follow the instructions on https://docs.docker.com/docker-for-windows/ to install docker on windows.
-
-You need to turn on the Hyper-V feature in your system:
-
-- Open "program and features" from the windows start menu
-- Click "turn on/off windows features"; you'll see a settings window
-- If Hyper-V is not checked, check it. You may be asked to reboot your system after that.
-
-Make sure the docker daemon is running before continuing.
-
-### Install Git
-
-Download the installer from https://git-scm.com/download/win and install it.
-
-**Important**: During the installation, you must choose "Checkout as is, Commit as is".
-
-### Install Notepad++
-
-Seafile configuration files use linux-style line endings, which can't be handled by the notepad.exe program. So we need to install notepad++, a lightweight text editor that works better in this case.
-
-Download the installer from https://notepad-plus-plus.org/download/v7.2.2.html and install it. During the installation, you can uncheck all the optional components.
-
-## Getting Started with Seafile Docker
-
-### Decide Which Drive to Store Seafile Data
-
-Choose the largest drive on your system. Assume it's the `C:` drive. Now right-click on the tray icon of the docker app, and click the "settings" menu item.
-
-You should see a settings dialog now. Click the "Shared Drives" tab, and check the `C:` drive. Then click "**apply**" on the settings dialog.
-
-### Run Seafile Docker
-
-First open a powershell window **as administrator**, and run the following command to set the execution policy to "RemoteSigned":
-
-```sh
-Set-ExecutionPolicy RemoteSigned
-```
-
-Close the powershell window, and open a new one **as the normal user**.
-
-Now run the following commands:
-
-(Note that if you're using another drive than "C:", say "D:", you should change the "c:\\seafile" in the following commands to "d:\\seafile" instead.)
-
-```sh
-docker pull seafileltd/seafile:6.3.3
-docker run -d --name seafile-server -v /root/seafile:/shared -p 80:80 seafileltd/seafile:6.3.3
-```
-
-The tag for the most recent version of the image can be found at https://hub.docker.com/r/seafileltd/seafile/tags/.
-
-If you are not familiar with docker commands, refer to the [docker documentation](https://docs.docker.com/engine/reference/commandline/cli/).
```
#### `Vagrantfile` (vendored, 35 lines)

```diff
@@ -1,35 +0,0 @@
-Vagrant.configure(2) do |config|
-  config.vm.provider "virtualbox" do |v|
-    v.memory = 2048
-    v.cpus = 4
-  end
-
-  config.vm.define :dockerhost do |config|
-    config.vm.box = "trusty64"
-    config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-vagrant-disk1.box"
-
-    if ENV["http_proxy"]
-      config.vm.provision "shell", inline: <<-EOF
-        echo "Acquire::http::Proxy \\"#{ENV['http_proxy']}\\";" >/etc/apt/apt.conf.d/50proxy
-        echo "http_proxy=\"#{ENV['http_proxy']}\"" >/etc/profile.d/http_proxy.sh
-      EOF
-    end
-
-    config.vm.provision "shell", inline: <<-EOF
-      set -e
-
-      export DEBIAN_FRONTEND=noninteractive
-
-      echo "en_US.UTF-8 UTF-8" >/etc/locale.gen
-      locale-gen
-      echo "Apt::Install-Recommends 'false';" >/etc/apt/apt.conf.d/02no-recommends
-      echo "Acquire::Languages { 'none' };" >/etc/apt/apt.conf.d/05no-languages
-      apt-get update
-      apt-get -y remove --purge puppet juju
-      apt-get -y autoremove --purge
-      wget -qO- https://get.docker.com/ | sh
-
-      ln -s /vagrant /var/seafile
-    EOF
-  end
-end
```
#### `caddy/Caddyfile` (new file, 13 lines)

```diff
@@ -0,0 +1,13 @@
+{
+    auto_https disable_redirects
+}
+
+http:// https:// {
+    reverse_proxy seafile:8000
+    handle_path /seafhttp* {
+        uri strip_prefix seafhttp
+        reverse_proxy seafile:8082
+    }
+    reverse_proxy /seafdav/* seafile:8080
+    reverse_proxy /media/* seahub-media:80
+}
```
#### `caddy/Dockerfile` (new file, 3 lines)

```diff
@@ -0,0 +1,3 @@
+FROM caddy:2.2.1-alpine
+
+COPY Caddyfile /etc/caddy/Caddyfile
```
```diff
@@ -1,56 +0,0 @@
-server_version=6.3.7
-
-base_image=seafileltd/cluster-base:18.04
-base_image_squashed=seafileltd/cluster-base:18.04-squashed
-pro_base_image=seafileltd/cluster-pro-base:18.04
-pro_base_image_squashed=seafileltd/cluster-pro-base:18.04-squashed
-server_image=seafileltd/seafile:$(server_version)
-server_image_squashed=seafileltd/seafile:$(server_version)-squashed
-pro_server_image=seafileltd/cluster-seafile-pro:$(server_version)
-pro_server_image_squashed=seafileltd/cluster-seafile-pro:$(server_version)-squashed
-latest_pro_server_image=seafileltd/cluster-seafile-pro:latest
-latest_server_image=seafileltd/seafile:latest
-
-all:
-	@echo
-	@echo Please use '"make base"' or '"make server"' or '"make push"'.
-	@echo
-
-base:
-	docker pull phusion/baseimage:0.11
-	docker-squash --tag phusion/baseimage:latest phusion/baseimage:0.11
-	docker tag phusion/baseimage:latest phusion/baseimage:0.11
-	cd base && docker build -t $(base_image) .
-	docker-squash --tag $(base_image_squashed) $(base_image)
-	docker tag $(base_image_squashed) $(base_image)
-	docker rmi `docker images --filter "dangling=true" -q --no-trunc`
-
-pro-base:
-	cd pro_base && docker build -t $(pro_base_image) .
-	docker-squash --tag $(pro_base_image_squashed) $(pro_base_image)
-	docker tag $(pro_base_image_squashed) $(pro_base_image)
-	docker rmi `docker images --filter "dangling=true" -q --no-trunc`
-
-pro-server:
-	cd pro_seafile && cp -rf ../../../templates ./ && cp -rf ../../scripts ./ && docker build -t $(pro_server_image) .
-	docker-squash --tag $(pro_server_image_squashed) $(pro_server_image) --from-layer=$(pro_base_image)
-	docker tag $(pro_server_image_squashed) $(pro_server_image)
-	docker tag $(pro_server_image) $(latest_pro_server_image)
-	docker rmi `docker images --filter "dangling=true" -q --no-trunc`
-
-push-base:
-	docker push $(base_image)
-
-push-pro-base:
-	docker tag $(pro_base_image) ${host}/$(pro_base_image)
-	docker push ${host}/$(pro_base_image)
-
-push-pro-server:
-	docker tag $(pro_server_image) ${host}/$(pro_server_image)
-	docker tag $(pro_server_image) ${host}/$(latest_pro_server_image)
-	docker push ${host}/$(pro_server_image)
-	docker push ${host}/$(latest_pro_server_image)
-
-push: push-base push-server
-
-.PHONY: base server push push-base push-server
```
```diff
@@ -1,50 +0,0 @@
-# Latest phusion baseimage as of 20180412, based on ubuntu 18.04
-# See https://hub.docker.com/r/phusion/baseimage/tags/
-FROM phusion/baseimage:0.11
-
-ENV UPDATED_AT=20180412 \
-    DEBIAN_FRONTEND=noninteractive
-
-CMD ["/sbin/my_init", "--", "bash", "-l"]
-
-RUN apt-get update -qq && apt-get -qq -y install nginx tzdata
-
-# Utility tools
-RUN apt-get install -qq -y vim htop net-tools psmisc git wget curl
-
-# Guideline for installing python libs: if a lib has a C-component (e.g.
-# python-imaging depends on libjpeg/libpng), we install it using apt-get.
-# Otherwise we install it with pip.
-RUN apt-get install -y python2.7-dev python-ldap python-mysqldb libmemcached-dev zlib1g-dev gcc
-RUN curl -sSL -o /tmp/get-pip.py https://bootstrap.pypa.io/get-pip.py && \
-    python /tmp/get-pip.py && \
-    rm -rf /tmp/get-pip.py && \
-    pip install -U wheel
-
-ADD requirements.txt /tmp/requirements.txt
-RUN pip install -r /tmp/requirements.txt
-
-COPY services /services
-
-RUN mkdir -p /etc/service/nginx && \
-    rm -f /etc/nginx/sites-enabled/* /etc/nginx/conf.d/* && \
-    mv /services/nginx.conf /etc/nginx/nginx.conf && \
-    mv /services/nginx.sh /etc/service/nginx/run
-
-RUN mkdir -p /etc/my_init.d && rm -f /etc/my_init.d/00_regen_ssh_host_keys.sh
-
-# Clean up for docker squash
-# See https://github.com/goldmann/docker-squash
-RUN rm -rf \
-    /root/.cache \
-    /root/.npm \
-    /root/.pip \
-    /usr/local/share/doc \
-    /usr/share/doc \
-    /usr/share/man \
-    /usr/share/vim/vim74/doc \
-    /usr/share/vim/vim74/lang \
-    /usr/share/vim/vim74/spell/en* \
-    /usr/share/vim/vim74/tutor \
-    /var/lib/apt/lists/* \
-    /tmp/*
```
```diff
@@ -1,12 +0,0 @@
-# -*- mode: conf -*-
-
-# Required by seafile/seahub
-python-memcached==1.58
-urllib3==1.19
-
-# Utility libraries
-click==6.6
-termcolor==1.1.0
-prettytable==0.7.2
-colorlog==2.7.0
-Jinja2==2.8
```
```diff
@@ -1,16 +0,0 @@
-#
-# This file is autogenerated by pip-compile
-# To update, run:
-#
-# pip-compile --output-file requirements.txt requirements.in
-#
-click==6.6
-colorlog==2.7.0
-Jinja2==2.8
-MarkupSafe==0.23          # via jinja2
-prettytable==0.7.2
-termcolor==1.1.0
-urllib3==1.19
-Pillow==4.3.0
-pylibmc
-django-pylibmc
```
```diff
@@ -1,33 +0,0 @@
-daemon off;
-user www-data;
-worker_processes auto;
-
-events {
-    worker_connections 768;
-}
-
-http {
-    include /etc/nginx/mime.types;
-    server_names_hash_bucket_size 256;
-    server_names_hash_max_size 1024;
-    tcp_nopush on;
-    tcp_nodelay on;
-    keepalive_timeout 65;
-    types_hash_max_size 2048;
-
-    access_log /var/log/nginx/access.log;
-    error_log /var/log/nginx/error.log info;
-
-    gzip on;
-    gzip_types text/plain text/css application/javascript application/json text/javascript;
-
-    include /etc/nginx/conf.d/*.conf;
-    include /etc/nginx/sites-enabled/*;
-
-    server {
-        listen 80;
-        location / {
-            return 444;
-        }
-    }
-}
```
```diff
@@ -1,3 +0,0 @@
-#!/bin/bash
-exec 2>&1
-exec /usr/sbin/nginx
```
```diff
@@ -1,25 +0,0 @@
-FROM seafileltd/cluster-base:18.04
-
-# syslog-ng and syslog-forwarder would mess up the container stdout, not good
-# when debugging/upgrading.
-
-# Fixing the "Sub-process /usr/bin/dpkg returned an error code (1)",
-# when RUN apt-get
-RUN mkdir -p /usr/share/man/man1
-
-RUN apt update
-
-RUN apt-get install -y openjdk-8-jre libmemcached-dev zlib1g-dev pwgen curl openssl poppler-utils libpython2.7 libreoffice \
-    libreoffice-script-provider-python ttf-wqy-microhei ttf-wqy-zenhei xfonts-wqy python-requests mysql-client
-
-RUN apt-get install -y tzdata python-pip python-setuptools python-urllib3 python-ldap python-ceph
-
-# The S3 storage, oss storage and psd online preview etc.
-# depend on the python packages below:
-RUN pip install boto==2.43.0 \
-    oss2==2.3.0 \
-    psd-tools==1.4 \
-    pycryptodome==3.7.2 \
-    twilio==5.7.0
-
-RUN apt clean
```
```diff
@@ -1,17 +0,0 @@
-FROM seafileltd/cluster-pro-base:18.04
-WORKDIR /opt/seafile
-
-ENV SEAFILE_VERSION=6.3.7 SEAFILE_SERVER=seafile-pro-server
-
-RUN mkdir -p /etc/my_init.d
-
-RUN mkdir -p /opt/seafile/
-
-RUN curl -sSL -G -d "p=/pro/seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz&dl=1" https://download.seafile.com/d/6e5297246c/files/ \
-    | tar xzf - -C /opt/seafile/
-
-ADD scripts/create_data_links.sh /etc/my_init.d/01_create_data_links.sh
-
-COPY scripts /scripts
-COPY templates /templates
-RUN chmod u+x /scripts/*
```
@ -1,156 +0,0 @@
#!/usr/bin/env python
#coding: UTF-8

"""
Bootstrapping seafile server, letsencrypt (verification & cron job).
"""

import argparse
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import uuid
import time

from utils import (
    call, get_conf, get_install_dir, loginfo,
    get_script, render_template, get_seafile_version, eprint,
    cert_has_valid_days, get_version_stamp_file, update_version_stamp,
    wait_for_mysql, wait_for_nginx, read_version_stamp
)

seafile_version = get_seafile_version()
installdir = get_install_dir()
topdir = dirname(installdir)
shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'


def init_letsencrypt():
    loginfo('Preparing for letsencrypt ...')
    wait_for_nginx()

    if not exists(ssl_dir):
        os.mkdir(ssl_dir)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'ssl_dir': ssl_dir,
        'domain': domain,
    }
    render_template(
        '/templates/letsencrypt.cron.template',
        join(generated_dir, 'letsencrypt.cron'),
        context
    )

    ssl_crt = '/shared/ssl/{}.crt'.format(domain)
    if exists(ssl_crt):
        loginfo('Found existing cert file {}'.format(ssl_crt))
        if cert_has_valid_days(ssl_crt, 30):
            loginfo('Skip letsencrypt verification since we have a valid certificate')
            return

    loginfo('Starting letsencrypt verification')
    # Create a temporary nginx conf to start a server, which will be accessed by letsencrypt
    context = {
        'https': False,
        'domain': domain,
    }
    render_template('/templates/seafile.nginx.conf.template',
                    '/etc/nginx/sites-enabled/seafile.nginx.conf', context)

    call('nginx -s reload')
    time.sleep(2)

    call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain))
    # if call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain), check_call=False) != 0:
    #     eprint('Now waiting 1000s for postmortem')
    #     time.sleep(1000)
    #     sys.exit(1)


def generate_local_nginx_conf():
    # Now create the final nginx configuration
    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'https': is_https(),
        'domain': domain,
    }
    render_template(
        '/templates/seafile.nginx.conf.template',
        '/etc/nginx/sites-enabled/seafile.nginx.conf',
        context
    )


def is_https():
    return get_conf('SEAFILE_SERVER_LETSENCRYPT', 'false').lower() == 'true'


def parse_args():
    ap = argparse.ArgumentParser()
    ap.add_argument('--parse-ports', action='store_true')

    return ap.parse_args()


def init_seafile_server():
    version_stamp_file = get_version_stamp_file()
    if exists(join(shared_seafiledir, 'seafile-data')):
        if not exists(version_stamp_file):
            update_version_stamp(os.environ['SEAFILE_VERSION'])
        # The symbolic link is removed after the docker container finishes.
        latest_version_dir = '/opt/seafile/seafile-server-latest'
        current_version_dir = '/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-' + read_version_stamp()
        if not exists(latest_version_dir):
            call('ln -sf ' + current_version_dir + ' ' + latest_version_dir)
        loginfo('Skip running setup-seafile-mysql.py because there is an existing seafile-data folder.')
        return

    loginfo('Now running setup-seafile-mysql.py in auto mode.')
    env = {
        'SERVER_NAME': 'seafile',
        'SERVER_IP': get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com'),
        'MYSQL_USER': 'seafile',
        'MYSQL_USER_PASSWD': str(uuid.uuid4()),
        'MYSQL_USER_HOST': '127.0.0.1',
        # Default MariaDB root user has empty password and can only connect from localhost.
        'MYSQL_ROOT_PASSWD': '',
    }

    # Change the script to allow mysql root password to be empty
    call('''sed -i -e 's/if not mysql_root_passwd/if not mysql_root_passwd and "MYSQL_ROOT_PASSWD" not in os.environ/g' {}'''
         .format(get_script('setup-seafile-mysql.py')))

    setup_script = get_script('setup-seafile-mysql.sh')
    call('{} auto -n seafile'.format(setup_script), env=env)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    proto = 'https' if is_https() else 'http'
    with open(join(topdir, 'conf', 'seahub_settings.py'), 'a+') as fp:
        fp.write('\n')
        fp.write('FILE_SERVER_ROOT = "{proto}://{domain}/seafhttp"'.format(proto=proto, domain=domain))
        fp.write('\n')

    # By default ccnet-server binds to the unix socket file
    # "/opt/seafile/ccnet/ccnet.sock", but /opt/seafile/ccnet/ is a mounted
    # volume from the docker host, and on windows and some linux environments
    # it's not possible to create unix sockets in external-mounted
    # directories. So we change the unix socket file path to
    # "/opt/seafile/ccnet.sock" to avoid this problem.
    with open(join(topdir, 'conf', 'ccnet.conf'), 'a+') as fp:
        fp.write('\n')
        fp.write('[Client]\n')
        fp.write('UNIX_SOCKET = /opt/seafile/ccnet.sock\n')
        fp.write('\n')

    files_to_copy = ['conf', 'ccnet', 'seafile-data', 'seahub-data', 'pro-data']
    for fn in files_to_copy:
        src = join(topdir, fn)
        dst = join(shared_seafiledir, fn)
        if not exists(dst) and exists(src):
            shutil.move(src, shared_seafiledir)
            call('ln -sf ' + join(shared_seafiledir, fn) + ' ' + src)

    loginfo('Updating version stamp')
    update_version_stamp(os.environ['SEAFILE_VERSION'])
@ -1,81 +0,0 @@
#!/bin/bash

set -e
set -o pipefail

if [[ $SEAFILE_BOOTSRAP != "" ]]; then
    exit 0
fi

if [[ $TIME_ZONE != "" ]]; then
    time_zone=/usr/share/zoneinfo/$TIME_ZONE
    if [[ ! -e $time_zone ]]; then
        echo "invalid time zone"
        exit 1
    else
        ln -snf $time_zone /etc/localtime
        echo "$TIME_ZONE" > /etc/timezone
    fi
fi

dirs=(
    conf
    ccnet
    seafile-data
    seahub-data
    pro-data
    seafile-license.txt
)

for d in ${dirs[*]}; do
    src=/shared/seafile/$d
    if [[ -e $src ]]; then
        rm -rf /opt/seafile/$d && ln -sf $src /opt/seafile
    fi
done

if [[ ! -e /shared/logs/seafile ]]; then
    mkdir -p /shared/logs/seafile
fi
rm -rf /opt/seafile/logs && ln -sf /shared/logs/seafile/ /opt/seafile/logs

current_version_dir=/opt/seafile/${SEAFILE_SERVER}-${SEAFILE_VERSION}
latest_version_dir=/opt/seafile/seafile-server-latest
seahub_data_dir=/shared/seafile/seahub-data

if [[ ! -e $seahub_data_dir ]]; then
    mkdir -p $seahub_data_dir
fi

media_dirs=(
    avatars
    custom
)
for d in ${media_dirs[*]}; do
    source_media_dir=${current_version_dir}/seahub/media/$d
    if [ -e ${source_media_dir} ] && [ ! -e ${seahub_data_dir}/$d ]; then
        mv $source_media_dir ${seahub_data_dir}/$d
    fi
    rm -rf $source_media_dir && ln -sf ${seahub_data_dir}/$d $source_media_dir
done

rm -rf /var/lib/mysql
if [[ ! -e /shared/db ]]; then
    mkdir -p /shared/db
fi
ln -sf /shared/db /var/lib/mysql

if [[ ! -e /shared/logs/var-log ]]; then
    chmod 777 /var/log -R
    mv /var/log /shared/logs/var-log
fi
rm -rf /var/log && ln -sf /shared/logs/var-log /var/log

if [[ ! -e $latest_version_dir ]]; then
    ln -sf $current_version_dir $latest_version_dir
fi

chmod u+x /scripts/*

echo $PYTHON
$PYTHON /scripts/init.py
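The data-link pattern above (persistent state lives under a shared volume, the application directory only holds symlinks into it) can be reproduced in isolation. This is a hypothetical sketch using throwaway `/tmp` paths, not part of the original scripts:

```shell
# Persistent data lives under a "shared" volume; the app dir gets symlinks.
shared=/tmp/demo-shared/seafile
app=/tmp/demo-opt/seafile
mkdir -p "$shared/conf" "$app"
for d in conf ccnet seafile-data; do
    src=$shared/$d
    # Only link directories that actually exist in the shared volume.
    if [[ -e $src ]]; then
        rm -rf "$app/$d" && ln -sf "$src" "$app"
    fi
done
readlink "$app/conf"
```

Because `ln -sf` is idempotent, re-running the loop on container restart is safe; only `conf` exists here, so only one link is created.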
@ -1,46 +0,0 @@
#!/usr/bin/env python
#coding: UTF-8

"""
Starts the seafile/seahub server and watches the controller process. It is
the entrypoint command of the docker container.
"""

import json
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import time

from utils import (
    call, get_conf, get_install_dir, get_script, get_command_output,
    render_template, wait_for_mysql
)
from upgrade import check_upgrade
from bootstrap import init_seafile_server, is_https, init_letsencrypt, generate_local_nginx_conf


shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'
installdir = get_install_dir()
topdir = dirname(installdir)


def main():
    call('cp -rf /scripts/setup-seafile-mysql.py ' + join(installdir, 'setup-seafile-mysql.py'))
    if not exists(shared_seafiledir):
        os.mkdir(shared_seafiledir)
    if not exists(generated_dir):
        os.makedirs(generated_dir)

    if is_https():
        init_letsencrypt()
    generate_local_nginx_conf()

    if not exists(join(shared_seafiledir, 'conf')):
        init_seafile_server()

if __name__ == '__main__':
    main()
File diff suppressed because it is too large
@ -1,46 +0,0 @@
#!/bin/bash

set -e

ssldir=${1:?"error params"}
domain=${2:?"error params"}

letsencryptdir=$ssldir/letsencrypt
letsencrypt_script=$letsencryptdir/acme_tiny.py

ssl_account_key=${domain}.account.key
ssl_csr=${domain}.csr
ssl_key=${domain}.key
ssl_crt=${domain}.crt

mkdir -p /var/www/challenges && chmod -R 777 /var/www/challenges
mkdir -p $ssldir

if ! [[ -d $letsencryptdir ]]; then
    git clone git://github.com/diafygi/acme-tiny.git $letsencryptdir
else
    cd $letsencryptdir
    git pull origin master:master
fi

cd $ssldir

if [[ ! -e ${ssl_account_key} ]]; then
    openssl genrsa 4096 > ${ssl_account_key}
fi

if [[ ! -e ${ssl_key} ]]; then
    openssl genrsa 4096 > ${ssl_key}
fi

if [[ ! -e ${ssl_csr} ]]; then
    openssl req -new -sha256 -key ${ssl_key} -subj "/CN=$domain" > $ssl_csr
fi

python $letsencrypt_script --account-key ${ssl_account_key} --csr $ssl_csr --acme-dir /var/www/challenges/ > ./signed.crt
curl -sSL -o intermediate.pem https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem
cat signed.crt intermediate.pem > ${ssl_crt}

nginx -s reload

echo "Nginx reloaded."
@ -1,61 +0,0 @@
import os
import sys
import time
import json
import argparse
from os.path import join, exists, dirname

from upgrade import check_upgrade
from utils import call, get_conf, get_script, get_command_output, get_install_dir

installdir = get_install_dir()
topdir = dirname(installdir)

def watch_controller():
    maxretry = 4
    retry = 0
    while retry < maxretry:
        controller_pid = get_command_output('ps aux | grep seafile-controller | grep -v grep || true').strip()
        garbage_collector_pid = get_command_output('ps aux | grep /scripts/gc.sh | grep -v grep || true').strip()
        if not controller_pid and not garbage_collector_pid:
            retry += 1
        else:
            retry = 0
        time.sleep(5)
    print 'seafile controller exited unexpectedly.'
    sys.exit(1)

def main(args):
    call('/scripts/create_data_links.sh')
    check_upgrade()
    os.chdir(installdir)
    call('service nginx start &')

    admin_pw = {
        'email': get_conf('SEAFILE_ADMIN_EMAIL', 'me@example.com'),
        'password': get_conf('SEAFILE_ADMIN_PASSWORD', 'asecret'),
    }
    password_file = join(topdir, 'conf', 'admin.txt')
    with open(password_file, 'w+') as fp:
        json.dump(admin_pw, fp)


    try:
        call('{} start'.format(get_script('seafile.sh')))
        call('{} start'.format(get_script('seahub.sh')))
        if args.mode == 'backend':
            call('{} start'.format(get_script('seafile-background-tasks.sh')))
    finally:
        if exists(password_file):
            os.unlink(password_file)

    print 'seafile server is running now.'
    try:
        watch_controller()
    except KeyboardInterrupt:
        print 'Stopping seafile server.'
        sys.exit(0)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Seafile cluster start script')
    parser.add_argument('--mode')
    main(parser.parse_args())
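The watchdog loop in `watch_controller()` tolerates a few missed liveness checks before declaring the controller dead. A condensed, self-contained sketch of that retry logic (the `watch`/`fake_alive` names are illustrative, not from the repo):

```python
# Tolerate up to `maxretry` *consecutive* failed liveness checks before
# giving up; any successful check resets the counter, as in watch_controller().
def watch(is_alive, maxretry=4, sleep=lambda s: None):
    retry = 0
    checks = 0
    while retry < maxretry:
        checks += 1
        if is_alive():
            retry = 0
        else:
            retry += 1
        sleep(5)  # the real script sleeps 5 seconds between polls
    return checks

# Simulate a process that dies after 3 successful polls:
state = {'n': 0}
def fake_alive():
    state['n'] += 1
    return state['n'] <= 3

polls = watch(fake_alive)  # 3 alive polls + 4 dead polls
```

The consecutive-failure design avoids flapping: a transient `ps` hiccup does not kill the container, only four misses in a row do.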
@ -1,18 +0,0 @@
#!/bin/bash

function start-front-end() {
    python /scripts/start.py
}

function start-back-end() {
    python /scripts/start.py --mode backend
}

case $1 in
    "front-end" )
        start-front-end
        ;;
    "back-end" )
        start-back-end
        ;;
esac
@ -1,82 +0,0 @@
#!/usr/bin/env python
#coding: UTF-8

"""
This script is used to run proper upgrade scripts automatically.
"""

import json
import re
import glob
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import time

from utils import (
    call, get_install_dir, get_script, get_command_output, replace_file_pattern,
    read_version_stamp, wait_for_mysql, update_version_stamp, loginfo
)

installdir = get_install_dir()
topdir = dirname(installdir)

def collect_upgrade_scripts(from_version, to_version):
    """
    Given the currently installed version, calculate which upgrade scripts we
    need to run to upgrade it to the latest version.

    For example, given current version 5.0.1 and target version 6.1.0, and these
    upgrade scripts:

        upgrade_4.4_5.0.sh
        upgrade_5.0_5.1.sh
        upgrade_5.1_6.0.sh
        upgrade_6.0_6.1.sh

    We need to run upgrade_5.0_5.1.sh, upgrade_5.1_6.0.sh, and upgrade_6.0_6.1.sh.
    """
    from_major_ver = '.'.join(from_version.split('.')[:2])
    to_major_ver = '.'.join(to_version.split('.')[:2])

    scripts = []
    for fn in sorted(glob.glob(join(installdir, 'upgrade', 'upgrade_*_*.sh'))):
        va, vb = parse_upgrade_script_version(fn)
        if va >= from_major_ver and vb <= to_major_ver:
            scripts.append(fn)
    return scripts

def parse_upgrade_script_version(script):
    script = basename(script)
    m = re.match(r'upgrade_([0-9+.]+)_([0-9+.]+).sh', basename(script))
    return m.groups()

def check_upgrade():
    last_version = read_version_stamp()
    current_version = os.environ['SEAFILE_VERSION']
    if last_version == current_version:
        return

    scripts_to_run = collect_upgrade_scripts(from_version=last_version, to_version=current_version)
    for script in scripts_to_run:
        loginfo('Running script {}'.format(script))
        # Here we use a trick: use a version stamp like 6.1.0 to prevent running
        # all upgrade scripts before 6.1 again (because 6.1 < 6.1.0 in python)
        new_version = parse_upgrade_script_version(script)[1] + '.0'

        replace_file_pattern(script, 'read dummy', '')
        call(script)

        update_version_stamp(new_version)

    update_version_stamp(current_version)

def main():
    wait_for_mysql()

    os.chdir(installdir)
    check_upgrade()

if __name__ == '__main__':
    main()
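The script selection above relies entirely on lexicographic string comparison of major versions, including the `'.0'` stamp trick. A quick stand-alone illustration of both behaviors (the helper names are hypothetical, not from the repo):

```python
def major(version):
    # '5.0.1' -> '5.0', mirroring '.'.join(version.split('.')[:2])
    return '.'.join(version.split('.')[:2])

# Scripts whose [va, vb] range lies within [from_major, to_major] are run.
scripts = ['upgrade_4.4_5.0.sh', 'upgrade_5.0_5.1.sh',
           'upgrade_5.1_6.0.sh', 'upgrade_6.0_6.1.sh']
from_major, to_major = major('5.0.1'), major('6.1.0')

selected = []
for name in scripts:
    va, vb = name[len('upgrade_'):-len('.sh')].split('_')
    if va >= from_major and vb <= to_major:
        selected.append(name)
# -> the last three scripts, as the docstring's example says.

# The '.0' suffix trick: a bare major version sorts before its patched form,
# so stamping '6.1.0' prevents re-running the 6.0 -> 6.1 script next time.
trick_holds = '6.1' < '6.1.0'
```

Note that plain string comparison misorders two-digit majors (`'10.0' < '5.0'`), which is fine here only because the version range in play stays single-digit.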
@ -1,287 +0,0 @@
# coding: UTF-8

from __future__ import print_function
from ConfigParser import ConfigParser
from contextlib import contextmanager
import os
import datetime
from os.path import abspath, basename, exists, dirname, join, isdir, expanduser
import platform
import sys
import subprocess
import time
import logging
import logging.config
import click
import termcolor
import colorlog

logger = logging.getLogger('.utils')

DEBUG_ENABLED = os.environ.get('SEAFILE_DOCKER_VERBOSE', '').lower() in ('true', '1', 'yes')

def eprint(*a, **kw):
    kw['file'] = sys.stderr
    print(*a, **kw)

def identity(msg, *a, **kw):
    return msg

colored = identity if not os.isatty(sys.stdin.fileno()) else termcolor.colored
red = lambda s: colored(s, 'red')
green = lambda s: colored(s, 'green')

def underlined(msg):
    return '\x1b[4m{}\x1b[0m'.format(msg)

def sudo(*a, **kw):
    call('sudo ' + a[0], *a[1:], **kw)

def _find_flag(args, *opts, **kw):
    is_flag = kw.get('is_flag', False)
    if is_flag:
        return any([opt in args for opt in opts])
    else:
        for opt in opts:
            try:
                return args[args.index(opt) + 1]
            except ValueError:
                pass

def call(*a, **kw):
    dry_run = kw.pop('dry_run', False)
    quiet = kw.pop('quiet', DEBUG_ENABLED)
    cwd = kw.get('cwd', os.getcwd())
    check_call = kw.pop('check_call', True)
    reduct_args = kw.pop('reduct_args', [])
    if not quiet:
        toprint = a[0]
        args = [x.strip('"') for x in a[0].split() if '=' not in x]
        for arg in reduct_args:
            value = _find_flag(args, arg)
            toprint = toprint.replace(value, '{}**reducted**'.format(value[:3]))
        logdbg('calling: ' + green(toprint))
        logdbg('cwd: ' + green(cwd))
    kw.setdefault('shell', True)
    if not dry_run:
        if check_call:
            return subprocess.check_call(*a, **kw)
        else:
            return subprocess.Popen(*a, **kw).wait()

@contextmanager
def cd(path):
    path = expanduser(path)
    olddir = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(olddir)

def must_makedir(p):
    p = expanduser(p)
    if not exists(p):
        logger.info('created folder %s', p)
        os.makedirs(p)
    else:
        logger.debug('folder %s already exists', p)

def setup_colorlog():
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'standard': {
                'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
            },
            'colored': {
                '()': 'colorlog.ColoredFormatter',
                'format': "%(log_color)s[%(asctime)s]%(reset)s %(blue)s%(message)s",
                'datefmt': '%m/%d/%Y %H:%M:%S',
            },
        },
        'handlers': {
            'default': {
                'level': 'INFO',
                'formatter': 'colored',
                'class': 'logging.StreamHandler',
            },
        },
        'loggers': {
            '': {
                'handlers': ['default'],
                'level': 'INFO',
                'propagate': True
            },
            'django.request': {
                'handlers': ['default'],
                'level': 'WARN',
                'propagate': False
            },
        }
    })

    logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(
        logging.WARNING)


def setup_logging(level=logging.INFO):
    kw = {
        'format': '[%(asctime)s][%(module)s]: %(message)s',
        'datefmt': '%m/%d/%Y %H:%M:%S',
        'level': level,
        'stream': sys.stdout
    }

    logging.basicConfig(**kw)
    logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(
        logging.WARNING)

def get_process_cmd(pid, env=False):
    env = 'e' if env else ''
    try:
        return subprocess.check_output('ps {} -o command {}'.format(env, pid),
                                       shell=True).strip().splitlines()[1]
    # except Exception, e:
    #     print(e)
    except:
        return None

def get_match_pids(pattern):
    pgrep_output = subprocess.check_output(
        'pgrep -f "{}" || true'.format(pattern),
        shell=True).strip()
    return [int(pid) for pid in pgrep_output.splitlines()]

def ask_for_confirm(msg):
    confirm = click.prompt(msg, default='Y')
    return confirm.lower() in ('y', 'yes')

def confirm_command_to_run(cmd):
    if ask_for_confirm('Run the command: {} ?'.format(green(cmd))):
        call(cmd)
    else:
        sys.exit(1)

def git_current_commit():
    return get_command_output('git rev-parse --short HEAD').strip()

def get_command_output(cmd):
    shell = not isinstance(cmd, list)
    return subprocess.check_output(cmd, shell=shell)

def ask_yes_or_no(msg, prompt='', default=None):
    print('\n' + msg + '\n')
    while True:
        answer = raw_input(prompt + ' [yes/no] ').lower()
        if not answer:
            continue

        if answer not in ('yes', 'no', 'y', 'n'):
            continue

        if answer in ('yes', 'y'):
            return True
        else:
            return False

def git_branch_exists(branch):
    return call('git rev-parse --short --verify {}'.format(branch)) == 0

def to_unicode(s):
    if isinstance(s, str):
        return s.decode('utf-8')
    else:
        return s

def to_utf8(s):
    if isinstance(s, unicode):
        return s.encode('utf-8')
    else:
        return s

def git_commit_time(refspec):
    return int(get_command_output('git log -1 --format="%ct" {}'.format(
        refspec)).strip())

def get_seafile_version():
    return os.environ['SEAFILE_VERSION']

def get_install_dir():
    return join('/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-{}'.format(get_seafile_version()))

def get_script(script):
    return join(get_install_dir(), script)


_config = None

def get_conf(key, default=None):
    key = key.upper()
    return os.environ.get(key, default)

def _add_default_context(context):
    default_context = {
        'current_timestr': datetime.datetime.now().strftime('%m/%d/%Y %H:%M:%S'),
    }
    for k in default_context:
        context.setdefault(k, default_context[k])

def render_template(template, target, context):
    from jinja2 import Environment, FileSystemLoader
    env = Environment(loader=FileSystemLoader(dirname(template)))
    _add_default_context(context)
    content = env.get_template(basename(template)).render(**context)
    with open(target, 'w') as fp:
        fp.write(content)

def logdbg(msg):
    if DEBUG_ENABLED:
        msg = '[debug] ' + msg
        loginfo(msg)

def loginfo(msg):
    msg = '[{}] {}'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), green(msg))
    eprint(msg)

def cert_has_valid_days(cert, days):
    assert exists(cert)

    secs = 86400 * int(days)
    retcode = call('openssl x509 -checkend {} -noout -in {}'.format(secs, cert), check_call=False)
    return retcode == 0

def get_version_stamp_file():
    return '/shared/seafile/seafile-data/current_version'

def read_version_stamp(fn=get_version_stamp_file()):
    assert exists(fn), 'version stamp file {} does not exist!'.format(fn)
    with open(fn, 'r') as fp:
        return fp.read().strip()

def update_version_stamp(version, fn=get_version_stamp_file()):
    with open(fn, 'w') as fp:
        fp.write(version + '\n')

def wait_for_mysql():
    while not exists('/var/run/mysqld/mysqld.sock'):
        logdbg('waiting for mysql server to be ready')
        time.sleep(2)
    logdbg('mysql server is ready')

def wait_for_nginx():
    while True:
        logdbg('waiting for nginx server to be ready')
        output = get_command_output('netstat -nltp')
        if ':80 ' in output:
            logdbg(output)
            logdbg('nginx is ready')
            return
        time.sleep(2)

def replace_file_pattern(fn, pattern, replacement):
    with open(fn, 'r') as fp:
        content = fp.read()
    with open(fn, 'w') as fp:
        fp.write(content.replace(pattern, replacement))
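`cert_has_valid_days()` delegates the expiry check to `openssl x509 -checkend`, which exits 0 if the certificate is still valid N seconds from now. A stand-alone sketch against a throwaway self-signed certificate (the `/tmp` paths are hypothetical):

```shell
days=30
secs=$((86400 * days))
# Generate a throwaway self-signed cert valid for 365 days to test against.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/t.key \
    -out /tmp/t.crt -days 365 -subj "/CN=example.com" 2>/dev/null
# Exit status 0 means "will not expire within $secs seconds".
if openssl x509 -checkend "$secs" -noout -in /tmp/t.crt >/dev/null; then
    echo "certificate valid for at least $days more days"
else
    echo "certificate expires within $days days"
fi
```

This is why the bootstrap script can skip Let's Encrypt entirely when the existing certificate still has more than 30 days of validity.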
@ -1,3 +0,0 @@
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# min hour dayofmonth month dayofweek command
0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}
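The `{{ ssl_dir }}` and `{{ domain }}` placeholders above are filled in by `utils.render_template()` via jinja2. A minimal stdlib-only stand-in (a simplified substitute for the real jinja2 rendering, just to show the substitution):

```python
import re

# Replace {{ name }} placeholders from a context dict, roughly what the
# jinja2 template rendering does for this cron template.
def render(template, context):
    return re.sub(r'\{\{\s*(\w+)\s*\}\}',
                  lambda m: str(context[m.group(1)]), template)

line = render('0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}',
              {'ssl_dir': '/shared/ssl', 'domain': 'seafile.example.com'})
```

The rendered line is then installed as a monthly cron job that re-runs `ssl.sh` for certificate renewal.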
@ -1,82 +0,0 @@
# -*- mode: nginx -*-
# Auto generated at {{ current_timestr }}
{% if https -%}
server {
    listen 80;
    server_name _ default_server;
    rewrite ^ https://{{ domain }}$request_uri? permanent;
}
{% endif -%}

server {
    {% if https -%}
    listen 443;
    ssl on;
    ssl_certificate /shared/ssl/{{ domain }}.crt;
    ssl_certificate_key /shared/ssl/{{ domain }}.key;

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    # TODO: More SSL security hardening: ssl_session_tickets & ssl_dhparam
    # ssl_session_tickets on;
    # ssl_session_ticket_key /etc/nginx/sessionticket.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
    {% else -%}
    listen 80;
    {% endif -%}

    server_name {{ domain }};

    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_request_buffering off;
    }

    location /seafdav {
        client_max_body_size 0;
        fastcgi_pass 127.0.0.1:8080;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;

        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;

        access_log /var/log/nginx/seafdav.access.log;
        error_log /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    # For letsencrypt
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }
}
@ -1,71 +0,0 @@
# See https://hub.docker.com/r/phusion/baseimage/tags/
FROM phusion/baseimage:0.11
ENV SEAFILE_SERVER=seafile-pro-server SEAFILE_VERSION=

RUN apt-get update --fix-missing

# Utility tools
RUN apt-get install -y vim htop net-tools psmisc wget curl git

# To support setting the local time zone.
RUN export DEBIAN_FRONTEND=noninteractive && apt-get install tzdata -y

# Nginx
RUN apt-get install -y nginx

# Java
RUN apt-get install -y openjdk-8-jre

# LibreOffice
RUN apt-get install -y libreoffice libreoffice-script-provider-python libsm-dev
RUN apt-get install -y ttf-wqy-microhei ttf-wqy-zenhei xfonts-wqy

# Tools
RUN apt-get install -y zlib1g-dev pwgen openssl poppler-utils


# Python3
RUN apt-get install -y python3 python3-pip python3-setuptools python3-ldap python-rados
RUN python3.6 -m pip install --upgrade pip && rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 click termcolor colorlog pymysql \
    django==1.11.29 && rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 Pillow pylibmc captcha jinja2 \
    sqlalchemy django-pylibmc django-simple-captcha && \
    rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 boto oss2 pycryptodome twilio python-ldap configparser && \
    rm -r /root/.cache/pip


# Scripts
COPY scripts_7.1 /scripts
COPY templates /templates
COPY services /services
RUN chmod u+x /scripts/*

RUN mkdir -p /etc/my_init.d && \
    rm -f /etc/my_init.d/* && \
    cp /scripts/create_data_links.sh /etc/my_init.d/01_create_data_links.sh

RUN mkdir -p /etc/service/nginx && \
    rm -f /etc/nginx/sites-enabled/* /etc/nginx/conf.d/* && \
    mv /services/nginx.conf /etc/nginx/nginx.conf && \
    mv /services/nginx.sh /etc/service/nginx/run


# Seafile
WORKDIR /opt/seafile

RUN mkdir -p /opt/seafile/ && cd /opt/seafile/ && \
    wget -O seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz \
    "https://download.seafile.com/d/6e5297246c/files/?p=/pro/seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz&dl=1" && \
    tar -zxvf seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz && \
    rm -f seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz


EXPOSE 80


CMD ["/sbin/my_init", "--", "/scripts/enterpoint.sh"]
@ -1,200 +0,0 @@
#!/usr/bin/env python3
#coding: UTF-8

"""
Bootstrapping seafile server, letsencrypt (verification & cron job).
"""

import argparse
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import uuid
import time

from utils import (
    call, get_conf, get_install_dir, loginfo,
    get_script, render_template, get_seafile_version, eprint,
    cert_has_valid_days, get_version_stamp_file, update_version_stamp,
    wait_for_mysql, wait_for_nginx, read_version_stamp
)

seafile_version = get_seafile_version()
installdir = get_install_dir()
topdir = dirname(installdir)
shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'


def init_letsencrypt():
    loginfo('Preparing for letsencrypt ...')
    wait_for_nginx()

    if not exists(ssl_dir):
        os.mkdir(ssl_dir)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'ssl_dir': ssl_dir,
        'domain': domain,
    }
    render_template(
        '/templates/letsencrypt.cron.template',
        join(generated_dir, 'letsencrypt.cron'),
        context
    )

    ssl_crt = '/shared/ssl/{}.crt'.format(domain)
    if exists(ssl_crt):
        loginfo('Found existing cert file {}'.format(ssl_crt))
        if cert_has_valid_days(ssl_crt, 30):
            loginfo('Skip letsencrypt verification since we have a valid certificate')
            return

    loginfo('Starting letsencrypt verification')
    # Create a temporary nginx conf to start a server, which will be accessed by letsencrypt
    context = {
        'https': False,
        'domain': domain,
    }
    render_template('/templates/seafile.nginx.conf.template',
                    '/etc/nginx/sites-enabled/seafile.nginx.conf', context)

    call('nginx -s reload')
    time.sleep(2)

    call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain))
    # if call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain), check_call=False) != 0:
    #     eprint('Now waiting 1000s for postmortem')
    #     time.sleep(1000)
    #     sys.exit(1)


def generate_local_nginx_conf():
    # Now create the final nginx configuration
    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'https': is_https(),
        'domain': domain,
    }
    render_template(
        '/templates/seafile.nginx.conf.template',
        '/etc/nginx/sites-enabled/seafile.nginx.conf',
        context
    )


def is_https():
    return get_conf('SEAFILE_SERVER_LETSENCRYPT', 'false').lower() == 'true'


def parse_args():
    ap = argparse.ArgumentParser()
    ap.add_argument('--parse-ports', action='store_true')

    return ap.parse_args()


def init_seafile_server():
    version_stamp_file = get_version_stamp_file()
    if exists(join(shared_seafiledir, 'seafile-data')):
        if not exists(version_stamp_file):
            update_version_stamp(os.environ['SEAFILE_VERSION'])
        # The symbolic link is removed after the docker container finishes.
        latest_version_dir = '/opt/seafile/seafile-server-latest'
        current_version_dir = '/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-' + read_version_stamp()
        if not exists(latest_version_dir):
            call('ln -sf ' + current_version_dir + ' ' + latest_version_dir)
        loginfo('Skip running setup-seafile-mysql.py because there is an existing seafile-data folder.')
        return

    loginfo('Now running setup-seafile-mysql.py in auto mode.')
    env = {
        'SERVER_NAME': 'seafile',
        'SERVER_IP': get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com'),
        'MYSQL_USER': 'seafile',
        'MYSQL_USER_PASSWD': str(uuid.uuid4()),
|
||||
'MYSQL_USER_HOST': '127.0.0.1',
|
||||
# Default MariaDB root user has empty password and can only connect from localhost.
|
||||
'MYSQL_ROOT_PASSWD': '',
|
||||
}
|
||||
|
||||
# Change the script to allow mysql root password to be empty
|
||||
call('''sed -i -e 's/if not mysql_root_passwd/if not mysql_root_passwd and "MYSQL_ROOT_PASSWD" not in os.environ/g' {}'''
|
||||
.format(get_script('setup-seafile-mysql.py')))
|
||||
|
||||
setup_script = get_script('setup-seafile-mysql.sh')
|
||||
call('{} auto -n seafile'.format(setup_script), env=env)
|
||||
|
||||
domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
|
||||
proto = 'https' if is_https() else 'http'
|
||||
with open(join(topdir, 'conf', 'seahub_settings.py'), 'a+') as fp:
|
||||
fp.write('\n')
|
||||
fp.write("""CACHES = {
|
||||
'default': {
|
||||
'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
|
||||
'LOCATION': '127.0.0.1:11211',
|
||||
},
|
||||
'locmem': {
|
||||
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
|
||||
},
|
||||
}
|
||||
COMPRESS_CACHE_BACKEND = 'locmem'
|
||||
|
||||
OFFICE_CONVERTOR_ROOT = 'http://127.0.0.1:6000/'\n""")
|
||||
fp.write("\nFILE_SERVER_ROOT = '{proto}://{domain}/seafhttp'\n".format(proto=proto, domain=domain))
|
||||
fp.write("""
|
||||
TIME_ZONE = 'Europe/Berlin'
|
||||
SITE_BASE = 'http://127.0.0.1'
|
||||
SITE_NAME = 'Seafile Server'
|
||||
SITE_TITLE = 'Seafile Server'
|
||||
SITE_ROOT = '/'
|
||||
ENABLE_SIGNUP = False
|
||||
ACTIVATE_AFTER_REGISTRATION = False
|
||||
SEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True
|
||||
SEND_EMAIL_ON_RESETTING_USER_PASSWD = True
|
||||
CLOUD_MODE = False
|
||||
FILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024
|
||||
SESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2
|
||||
SESSION_SAVE_EVERY_REQUEST = False
|
||||
SESSION_EXPIRE_AT_BROWSER_CLOSE = False\n""")
|
||||
|
||||
# By default ccnet-server binds to the unix socket file
|
||||
# "/opt/seafile/ccnet/ccnet.sock", but /opt/seafile/ccnet/ is a mounted
|
||||
# volume from the docker host, and on windows and some linux environment
|
||||
# it's not possible to create unix sockets in an external-mounted
|
||||
# directories. So we change the unix socket file path to
|
||||
# "/opt/seafile/ccnet.sock" to avoid this problem.
|
||||
with open(join(topdir, 'conf', 'ccnet.conf'), 'a+') as fp:
|
||||
fp.write('\n')
|
||||
fp.write('[Client]\n')
|
||||
fp.write('UNIX_SOCKET = /opt/seafile/ccnet.sock\n')
|
||||
fp.write('\n')
|
||||
|
||||
# Disabled the Elasticsearch process on Seafile-container
|
||||
# Connection to the Elasticsearch-container
|
||||
with open(join(topdir, 'conf', 'seafevents.conf'), 'r') as fp:
|
||||
seafevents_lines = fp.readlines()
|
||||
# es
|
||||
es_insert_index = seafevents_lines.index('[INDEX FILES]\n') + 1
|
||||
es_insert_lines = ['external_es_server = true\n', 'es_host = 127.0.0.1\n', 'es_port = 9200\n']
|
||||
for line in es_insert_lines:
|
||||
seafevents_lines.insert(es_insert_index, line)
|
||||
# office
|
||||
office_insert_index = seafevents_lines.index('[OFFICE CONVERTER]\n') + 1
|
||||
office_insert_lines = ['host = 127.0.0.1\n', 'port = 6000\n']
|
||||
for line in office_insert_lines:
|
||||
seafevents_lines.insert(office_insert_index, line)
|
||||
|
||||
with open(join(topdir, 'conf', 'seafevents.conf'), 'w') as fp:
|
||||
fp.writelines(seafevents_lines)
|
||||
|
||||
files_to_copy = ['conf', 'ccnet', 'seafile-data', 'seahub-data', 'pro-data']
|
||||
for fn in files_to_copy:
|
||||
src = join(topdir, fn)
|
||||
dst = join(shared_seafiledir, fn)
|
||||
if not exists(dst) and exists(src):
|
||||
shutil.move(src, shared_seafiledir)
|
||||
call('ln -sf ' + join(shared_seafiledir, fn) + ' ' + src)
|
||||
|
||||
loginfo('Updating version stamp')
|
||||
update_version_stamp(os.environ['SEAFILE_VERSION'])
|
|
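One detail of the seafevents.conf patching above is easy to miss: every new line is inserted at the same index, so the inserted lines end up in reverse order in the written file. A minimal sketch of that behavior (the file content here is a made-up fragment, not the real seafevents.conf):

```python
# Simulate the insertion loop from init_seafile_server().
lines = ['[INDEX FILES]\n', 'enabled = true\n']

# Insert right after the section header, the same way bootstrap.py does.
idx = lines.index('[INDEX FILES]\n') + 1
for line in ['external_es_server = true\n', 'es_host = 127.0.0.1\n', 'es_port = 9200\n']:
    lines.insert(idx, line)

# The inserted keys come out reversed; an INI parser does not care about
# key order within a section, so the result is still valid.
print(''.join(lines))
```

Each `insert(idx, …)` pushes the previously inserted line down, which is why the last line of `es_insert_lines` ends up first in the section.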
@@ -1,81 +0,0 @@
#!/bin/bash

set -e
set -o pipefail

if [[ $SEAFILE_BOOTSRAP != "" ]]; then
    exit 0
fi

if [[ $TIME_ZONE != "" ]]; then
    time_zone=/usr/share/zoneinfo/$TIME_ZONE
    if [[ ! -e $time_zone ]]; then
        echo "invalid time zone"
        exit 1
    else
        ln -snf $time_zone /etc/localtime
        echo "$TIME_ZONE" > /etc/timezone
    fi
fi

dirs=(
    conf
    ccnet
    seafile-data
    seahub-data
    pro-data
    seafile-license.txt
)

for d in ${dirs[*]}; do
    src=/shared/seafile/$d
    if [[ -e $src ]]; then
        rm -rf /opt/seafile/$d && ln -sf $src /opt/seafile
    fi
done

if [[ ! -e /shared/logs/seafile ]]; then
    mkdir -p /shared/logs/seafile
fi
rm -rf /opt/seafile/logs && ln -sf /shared/logs/seafile/ /opt/seafile/logs

current_version_dir=/opt/seafile/${SEAFILE_SERVER}-${SEAFILE_VERSION}
latest_version_dir=/opt/seafile/seafile-server-latest
seahub_data_dir=/shared/seafile/seahub-data

if [[ ! -e $seahub_data_dir ]]; then
    mkdir -p $seahub_data_dir
fi

media_dirs=(
    avatars
    custom
)
for d in ${media_dirs[*]}; do
    source_media_dir=${current_version_dir}/seahub/media/$d
    if [ -e ${source_media_dir} ] && [ ! -e ${seahub_data_dir}/$d ]; then
        mv $source_media_dir ${seahub_data_dir}/$d
    fi
    rm -rf $source_media_dir && ln -sf ${seahub_data_dir}/$d $source_media_dir
done

rm -rf /var/lib/mysql
if [[ ! -e /shared/db ]]; then
    mkdir -p /shared/db
fi
ln -sf /shared/db /var/lib/mysql

if [[ ! -e /shared/logs/var-log ]]; then
    chmod 777 /var/log -R
    mv /var/log /shared/logs/var-log
fi
rm -rf /var/log && ln -sf /shared/logs/var-log /var/log

if [[ ! -e $latest_version_dir ]]; then
    ln -sf $current_version_dir $latest_version_dir
fi

# chmod u+x /scripts/*

# echo $PYTHON
# $PYTHON /scripts/init.py
@@ -1,42 +0,0 @@
#!/bin/bash


# log function
function log() {
    local time=$(date +"%F %T")
    echo "$time $1 "
    echo "[$time] $1 " &>> /opt/seafile/logs/enterpoint.log
}


# check nginx
while [ 1 ]; do
    process_num=$(ps -ef | grep "/usr/sbin/nginx" | grep -v "grep" | wc -l)
    if [ $process_num -eq 0 ]; then
        log "Waiting for Nginx"
        sleep 0.2
    else
        log "Nginx ready"
        break
    fi
done

if [[ ! -L /etc/nginx/sites-enabled/default ]]; then
    ln -s /opt/seafile/conf/nginx.conf /etc/nginx/sites-enabled/default
    nginx -s reload
fi


log "This is an idle script (infinite loop) to keep the container running."

function cleanup() {
    kill -s SIGTERM $!
    exit 0
}

trap cleanup SIGINT SIGTERM

while [ 1 ]; do
    sleep 60 &
    wait $!
done
@@ -1,37 +0,0 @@
#!/bin/bash

set -e

# Before
SEAFILE_DIR=/opt/seafile/seafile-server-latest

if [[ $SEAFILE_SERVER != *"pro"* ]]; then
    echo "Seafile CE: Stop Seafile to perform offline garbage collection."
    $SEAFILE_DIR/seafile.sh stop

    echo "Waiting for the server to shut down properly..."
    sleep 5
else
    echo "Seafile Pro: Perform online garbage collection."
fi

# Do it
(
    set +e
    $SEAFILE_DIR/seaf-gc.sh "$@" | tee -a /var/log/gc.log
    # We want to preserve the exit code of seaf-gc.sh
    exit "${PIPESTATUS[0]}"
)

gc_exit_code=$?

# After

if [[ $SEAFILE_SERVER != *"pro"* ]]; then
    echo "Giving the server some time..."
    sleep 3

    $SEAFILE_DIR/seafile.sh start
fi

exit $gc_exit_code
@@ -1,57 +0,0 @@
#!/usr/bin/env python3
#coding: UTF-8

"""
Starts the seafile/seahub server and watches the controller process. It is
the entrypoint command of the docker container.
"""

import json
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import time

from utils import (
    call, get_conf, get_install_dir, get_script, get_command_output,
    render_template, wait_for_mysql
)
from upgrade import check_upgrade
from bootstrap import init_seafile_server, is_https, init_letsencrypt, generate_local_nginx_conf


shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'
installdir = get_install_dir()
topdir = dirname(installdir)


def main():
    call('cp -rf /scripts/setup-seafile-mysql.py ' + join(installdir, 'setup-seafile-mysql.py'))
    if not exists(shared_seafiledir):
        os.mkdir(shared_seafiledir)
    if not exists(generated_dir):
        os.makedirs(generated_dir)

    if not exists(join(shared_seafiledir, 'conf')):
        print('Start init')

        # conf
        init_seafile_server()

        # nginx conf
        if is_https():
            init_letsencrypt()
        generate_local_nginx_conf()
        call('mv -f /etc/nginx/sites-enabled/seafile.nginx.conf /shared/seafile/conf/nginx.conf')
        call('ln -snf /shared/seafile/conf/nginx.conf /etc/nginx/sites-enabled/default')
        call('nginx -s reload')

        print('Init success')
    else:
        print('Conf exists')

if __name__ == '__main__':
    main()
File diff suppressed because it is too large
@@ -1,46 +0,0 @@
#!/bin/bash

set -e

ssldir=${1:?"error params"}
domain=${2:?"error params"}

letsencryptdir=$ssldir/letsencrypt
letsencrypt_script=$letsencryptdir/acme_tiny.py

ssl_account_key=${domain}.account.key
ssl_csr=${domain}.csr
ssl_key=${domain}.key
ssl_crt=${domain}.crt

mkdir -p /var/www/challenges && chmod -R 777 /var/www/challenges
mkdir -p $ssldir

if ! [[ -d $letsencryptdir ]]; then
    git clone git://github.com/diafygi/acme-tiny.git $letsencryptdir
else
    cd $letsencryptdir
    git pull origin master:master
fi

cd $ssldir

if [[ ! -e ${ssl_account_key} ]]; then
    openssl genrsa 4096 > ${ssl_account_key}
fi

if [[ ! -e ${ssl_key} ]]; then
    openssl genrsa 4096 > ${ssl_key}
fi

if [[ ! -e ${ssl_csr} ]]; then
    openssl req -new -sha256 -key ${ssl_key} -subj "/CN=$domain" > $ssl_csr
fi

python $letsencrypt_script --account-key ${ssl_account_key} --csr $ssl_csr --acme-dir /var/www/challenges/ > ./signed.crt
curl -sSL -o intermediate.pem https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem
cat signed.crt intermediate.pem > ${ssl_crt}

nginx -s reload

echo "Nginx reloaded."
@@ -1,65 +0,0 @@
#!/usr/bin/env python3
#coding: UTF-8

import os
import sys
import time
import json
import argparse
from os.path import join, exists, dirname

from upgrade import check_upgrade
from utils import call, get_conf, get_script, get_command_output, get_install_dir

installdir = get_install_dir()
topdir = dirname(installdir)

def watch_controller():
    maxretry = 4
    retry = 0
    while retry < maxretry:
        controller_pid = get_command_output('ps aux | grep seafile-controller | grep -v grep || true').strip()
        garbage_collector_pid = get_command_output('ps aux | grep /scripts/gc.sh | grep -v grep || true').strip()
        if not controller_pid and not garbage_collector_pid:
            retry += 1
        else:
            retry = 0
        time.sleep(5)
    print('seafile controller exited unexpectedly.')
    sys.exit(1)

def main(args):
    call('/scripts/create_data_links.sh')
    # check_upgrade()
    os.chdir(installdir)
    call('service nginx start &')

    admin_pw = {
        'email': get_conf('SEAFILE_ADMIN_EMAIL', 'me@example.com'),
        'password': get_conf('SEAFILE_ADMIN_PASSWORD', 'asecret'),
    }
    password_file = join(topdir, 'conf', 'admin.txt')
    with open(password_file, 'w+') as fp:
        json.dump(admin_pw, fp)


    try:
        call('{} start'.format(get_script('seafile.sh')))
        call('{} start'.format(get_script('seahub.sh')))
        if args.mode == 'backend':
            call('{} start'.format(get_script('seafile-background-tasks.sh')))
    finally:
        if exists(password_file):
            os.unlink(password_file)

    print('seafile server is running now.')
    try:
        watch_controller()
    except KeyboardInterrupt:
        print('Stopping seafile server.')
        sys.exit(0)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Seafile cluster start script')
    parser.add_argument('--mode')
    main(parser.parse_args())
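The retry logic in `watch_controller` above tolerates short gaps while the controller or gc process restarts: the counter grows only while neither process is visible, and resets as soon as one reappears. A sketch of that pattern with the `ps` polling and sleeping factored out so it can run standalone (`watch` and `poll` are illustrative names, not from the source):

```python
def watch(poll, maxretry=4):
    """poll() returns True while the watched process is alive."""
    retry = 0
    while retry < maxretry:
        # Consecutive misses accumulate; any hit resets the counter.
        retry = 0 if poll() else retry + 1
    return 'controller exited unexpectedly'

# Two isolated misses are tolerated; four misses in a row trip the watchdog.
samples = iter([True, False, True, False, False, False, False])
print(watch(lambda: next(samples)))
```

Resetting the counter on every successful poll is what distinguishes this from a simple "fail after N checks" loop: only *consecutive* failures count.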
@@ -1,18 +0,0 @@
#!/bin/bash

function start-front-end() {
    python /scripts/start.py
}

function start-back-end() {
    python /scripts/start.py --mode backend
}

case $1 in
    "front-end" )
        start-front-end
        ;;
    "back-end" )
        start-back-end
        ;;
esac
@@ -1,82 +0,0 @@
#!/usr/bin/env python3
#coding: UTF-8

"""
This script is used to run the proper upgrade scripts automatically.
"""

import json
import re
import glob
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import time

from utils import (
    call, get_install_dir, get_script, get_command_output, replace_file_pattern,
    read_version_stamp, wait_for_mysql, update_version_stamp, loginfo
)

installdir = get_install_dir()
topdir = dirname(installdir)

def collect_upgrade_scripts(from_version, to_version):
    """
    Given the currently installed version, calculate which upgrade scripts we
    need to run to upgrade it to the latest version.

    For example, given current version 5.0.1 and target version 6.1.0, and these
    upgrade scripts:

        upgrade_4.4_5.0.sh
        upgrade_5.0_5.1.sh
        upgrade_5.1_6.0.sh
        upgrade_6.0_6.1.sh

    We need to run upgrade_5.0_5.1.sh, upgrade_5.1_6.0.sh, and upgrade_6.0_6.1.sh.
    """
    from_major_ver = '.'.join(from_version.split('.')[:2])
    to_major_ver = '.'.join(to_version.split('.')[:2])

    scripts = []
    for fn in sorted(glob.glob(join(installdir, 'upgrade', 'upgrade_*_*.sh'))):
        va, vb = parse_upgrade_script_version(fn)
        if va >= from_major_ver and vb <= to_major_ver:
            scripts.append(fn)
    return scripts

def parse_upgrade_script_version(script):
    script = basename(script)
    m = re.match(r'upgrade_([0-9+.]+)_([0-9+.]+).sh', basename(script))
    return m.groups()

def check_upgrade():
    last_version = read_version_stamp()
    current_version = os.environ['SEAFILE_VERSION']
    if last_version == current_version:
        return

    scripts_to_run = collect_upgrade_scripts(from_version=last_version, to_version=current_version)
    for script in scripts_to_run:
        loginfo('Running script {}'.format(script))
        # Here we use a trick: use a version stamp like 6.1.0 to prevent running
        # all upgrade scripts before 6.1 again (because 6.1 < 6.1.0 in python)
        new_version = parse_upgrade_script_version(script)[1] + '.0'

        replace_file_pattern(script, 'read dummy', '')
        call(script)

        update_version_stamp(new_version)

    update_version_stamp(current_version)

def main():
    wait_for_mysql()

    os.chdir(installdir)
    check_upgrade()

if __name__ == '__main__':
    main()
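The version-stamp trick in `check_upgrade` above relies on plain string comparison: a stamp like `'6.1.0'` compares greater than the bare major version `'6.1'`, so scripts that already ran are filtered out on the next container start. A quick self-contained check of the filtering in `collect_upgrade_scripts`, with a hard-coded script list standing in for the glob (note that lexicographic comparison would misorder two-digit majors such as `'10.0'` vs `'9.1'`):

```python
import re

SCRIPTS = ['upgrade_4.4_5.0.sh', 'upgrade_5.0_5.1.sh',
           'upgrade_5.1_6.0.sh', 'upgrade_6.0_6.1.sh']

def script_versions(script):
    # Same pattern as parse_upgrade_script_version.
    return re.match(r'upgrade_([0-9.]+)_([0-9.]+).sh', script).groups()

def collect(from_version, to_version):
    from_major = '.'.join(from_version.split('.')[:2])
    to_major = '.'.join(to_version.split('.')[:2])
    return [s for s in SCRIPTS
            if script_versions(s)[0] >= from_major
            and script_versions(s)[1] <= to_major]

# Upgrading 5.0.1 -> 6.1.0 picks the three scripts from the docstring.
print(collect('5.0.1', '6.1.0'))
# After stamping '6.1.0', '6.1' >= '6.1.0' is False, so nothing reruns.
print(collect('6.1.0', '6.1.0'))
```

The stamp `'6.1.0'` is strictly greater than `'6.1'` because the shorter string is a prefix of the longer one, which is exactly what the comment in `check_upgrade` depends on.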
@@ -1,287 +0,0 @@
#!/usr/bin/env python3
#coding: UTF-8

from configparser import ConfigParser
from contextlib import contextmanager
import os
import datetime
from os.path import abspath, basename, exists, dirname, join, isdir, expanduser
import platform
import sys
import subprocess
import time
import logging
import logging.config
import click
import termcolor
import colorlog

logger = logging.getLogger('.utils')

DEBUG_ENABLED = os.environ.get('SEAFILE_DOCKER_VERBOSE', '').lower() in ('true', '1', 'yes')

def eprint(*a, **kw):
    kw['file'] = sys.stderr
    print(*a, **kw)

def identity(msg, *a, **kw):
    return msg

colored = identity if not os.isatty(sys.stdin.fileno()) else termcolor.colored
red = lambda s: colored(s, 'red')
green = lambda s: colored(s, 'green')

def underlined(msg):
    return '\x1b[4m{}\x1b[0m'.format(msg)

def sudo(*a, **kw):
    call('sudo ' + a[0], *a[1:], **kw)

def _find_flag(args, *opts, **kw):
    is_flag = kw.get('is_flag', False)
    if is_flag:
        return any([opt in args for opt in opts])
    else:
        for opt in opts:
            try:
                return args[args.index(opt) + 1]
            except ValueError:
                pass

def call(*a, **kw):
    dry_run = kw.pop('dry_run', False)
    quiet = kw.pop('quiet', DEBUG_ENABLED)
    cwd = kw.get('cwd', os.getcwd())
    check_call = kw.pop('check_call', True)
    reduct_args = kw.pop('reduct_args', [])
    if not quiet:
        toprint = a[0]
        args = [x.strip('"') for x in a[0].split() if '=' not in x]
        for arg in reduct_args:
            value = _find_flag(args, arg)
            toprint = toprint.replace(value, '{}**redacted**'.format(value[:3]))
        logdbg('calling: ' + green(toprint))
        logdbg('cwd: ' + green(cwd))
    kw.setdefault('shell', True)
    if not dry_run:
        if check_call:
            return subprocess.check_call(*a, **kw)
        else:
            return subprocess.Popen(*a, **kw).wait()

@contextmanager
def cd(path):
    path = expanduser(path)
    olddir = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(olddir)

def must_makedir(p):
    p = expanduser(p)
    if not exists(p):
        logger.info('created folder %s', p)
        os.makedirs(p)
    else:
        logger.debug('folder %s already exists', p)

def setup_colorlog():
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'standard': {
                'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
            },
            'colored': {
                '()': 'colorlog.ColoredFormatter',
                'format': "%(log_color)s[%(asctime)s]%(reset)s %(blue)s%(message)s",
                'datefmt': '%m/%d/%Y %H:%M:%S',
            },
        },
        'handlers': {
            'default': {
                'level': 'INFO',
                'formatter': 'colored',
                'class': 'logging.StreamHandler',
            },
        },
        'loggers': {
            '': {
                'handlers': ['default'],
                'level': 'INFO',
                'propagate': True
            },
            'django.request': {
                'handlers': ['default'],
                'level': 'WARN',
                'propagate': False
            },
        }
    })

    logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(
        logging.WARNING)


def setup_logging(level=logging.INFO):
    kw = {
        'format': '[%(asctime)s][%(module)s]: %(message)s',
        'datefmt': '%m/%d/%Y %H:%M:%S',
        'level': level,
        'stream': sys.stdout
    }

    logging.basicConfig(**kw)
    logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(
        logging.WARNING)

def get_process_cmd(pid, env=False):
    env = 'e' if env else ''
    try:
        return subprocess.check_output('ps {} -o command {}'.format(env, pid),
                                       shell=True).strip().splitlines()[1]
    except Exception:
        return None

def get_match_pids(pattern):
    pgrep_output = subprocess.check_output(
        'pgrep -f "{}" || true'.format(pattern),
        shell=True).strip()
    return [int(pid) for pid in pgrep_output.splitlines()]

def ask_for_confirm(msg):
    confirm = click.prompt(msg, default='Y')
    return confirm.lower() in ('y', 'yes')

def confirm_command_to_run(cmd):
    if ask_for_confirm('Run the command: {} ?'.format(green(cmd))):
        call(cmd)
    else:
        sys.exit(1)

def git_current_commit():
    return get_command_output('git rev-parse --short HEAD').strip()

def get_command_output(cmd):
    shell = not isinstance(cmd, list)
    return subprocess.check_output(cmd, shell=shell)

def ask_yes_or_no(msg, prompt='', default=None):
    print('\n' + msg + '\n')
    while True:
        answer = input(prompt + ' [yes/no] ').lower()
        if not answer:
            continue

        if answer not in ('yes', 'no', 'y', 'n'):
            continue

        if answer in ('yes', 'y'):
            return True
        else:
            return False

def git_branch_exists(branch):
    return call('git rev-parse --short --verify {}'.format(branch)) == 0

def to_unicode(s):
    # On python3 only bytes objects need decoding.
    if isinstance(s, bytes):
        return s.decode('utf-8')
    else:
        return s

def to_utf8(s):
    if isinstance(s, str):
        return s.encode('utf-8')
    else:
        return s

def git_commit_time(refspec):
    return int(get_command_output('git log -1 --format="%ct" {}'.format(
        refspec)).strip())

def get_seafile_version():
    return os.environ['SEAFILE_VERSION']

def get_install_dir():
    return join('/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-{}'.format(get_seafile_version()))

def get_script(script):
    return join(get_install_dir(), script)


_config = None

def get_conf(key, default=None):
    key = key.upper()
    return os.environ.get(key, default)

def _add_default_context(context):
    default_context = {
        'current_timestr': datetime.datetime.now().strftime('%m/%d/%Y %H:%M:%S'),
    }
    for k in default_context:
        context.setdefault(k, default_context[k])

def render_template(template, target, context):
    from jinja2 import Environment, FileSystemLoader
    env = Environment(loader=FileSystemLoader(dirname(template)))
    _add_default_context(context)
    content = env.get_template(basename(template)).render(**context)
    with open(target, 'w') as fp:
        fp.write(content)

def logdbg(msg):
    if DEBUG_ENABLED:
        msg = '[debug] ' + msg
        loginfo(msg)

def loginfo(msg):
    msg = '[{}] {}'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), green(msg))
    eprint(msg)

def cert_has_valid_days(cert, days):
    assert exists(cert)

    secs = 86400 * int(days)
    retcode = call('openssl x509 -checkend {} -noout -in {}'.format(secs, cert), check_call=False)
    return retcode == 0

def get_version_stamp_file():
    return '/shared/seafile/seafile-data/current_version'

def read_version_stamp(fn=get_version_stamp_file()):
    assert exists(fn), 'version stamp file {} does not exist!'.format(fn)
    with open(fn, 'r') as fp:
        return fp.read().strip()

def update_version_stamp(version, fn=get_version_stamp_file()):
    with open(fn, 'w') as fp:
        fp.write(version + '\n')

def wait_for_mysql():
    while not exists('/var/run/mysqld/mysqld.sock'):
        logdbg('waiting for mysql server to be ready')
        time.sleep(2)
    logdbg('mysql server is ready')

def wait_for_nginx():
    while True:
        logdbg('waiting for nginx server to be ready')
        # check_output returns bytes on python3; decode before searching.
        output = get_command_output('netstat -nltp').decode()
        if ':80 ' in output:
            logdbg(output)
            logdbg('nginx is ready')
            return
        time.sleep(2)

def replace_file_pattern(fn, pattern, replacement):
    with open(fn, 'r') as fp:
        content = fp.read()
    with open(fn, 'w') as fp:
        fp.write(content.replace(pattern, replacement))
@@ -1,33 +0,0 @@
daemon off;
user www-data;
worker_processes auto;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    server_names_hash_bucket_size 256;
    server_names_hash_max_size 1024;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log info;

    gzip on;
    gzip_types text/plain text/css application/javascript application/json text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80;
        location / {
            return 444;
        }
    }
}
@@ -1,3 +0,0 @@
#!/bin/bash
exec 2>&1
exec /usr/sbin/nginx
@@ -1,3 +0,0 @@
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# min hour dayofmonth month dayofweek command
0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}
@@ -1,82 +0,0 @@
# -*- mode: nginx -*-
# Auto generated at {{ current_timestr }}
{% if https -%}
server {
    listen 80;
    server_name _ default_server;
    rewrite ^ https://{{ domain }}$request_uri? permanent;
}
{% endif -%}

server {
{% if https -%}
    listen 443;
    ssl on;
    ssl_certificate /shared/ssl/{{ domain }}.crt;
    ssl_certificate_key /shared/ssl/{{ domain }}.key;

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    # TODO: More SSL security hardening: ssl_session_tickets & ssl_dhparam
    # ssl_session_tickets on;
    # ssl_session_ticket_key /etc/nginx/sessionticket.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
{% else -%}
    listen 80;
{% endif -%}

    server_name {{ domain }};

    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_request_buffering off;
    }

    location /seafdav {
        client_max_body_size 0;
        fastcgi_pass 127.0.0.1:8080;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;

        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;

        access_log /var/log/nginx/seafdav.access.log;
        error_log /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    # For letsencrypt
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }
}
@@ -1,156 +0,0 @@
#!/usr/bin/env python
# coding: UTF-8

"""
Bootstrapping of the seafile server and letsencrypt (verification & cron job).
"""

import argparse
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import uuid
import time

from utils import (
    call, get_conf, get_install_dir, loginfo,
    get_script, render_template, get_seafile_version, eprint,
    cert_has_valid_days, get_version_stamp_file, update_version_stamp,
    wait_for_mysql, wait_for_nginx, read_version_stamp
)

seafile_version = get_seafile_version()
installdir = get_install_dir()
topdir = dirname(installdir)
shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'


def init_letsencrypt():
    loginfo('Preparing for letsencrypt ...')
    wait_for_nginx()

    if not exists(ssl_dir):
        os.mkdir(ssl_dir)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'ssl_dir': ssl_dir,
        'domain': domain,
    }
    render_template(
        '/templates/letsencrypt.cron.template',
        join(generated_dir, 'letsencrypt.cron'),
        context
    )

    ssl_crt = '/shared/ssl/{}.crt'.format(domain)
    if exists(ssl_crt):
        loginfo('Found existing cert file {}'.format(ssl_crt))
        if cert_has_valid_days(ssl_crt, 30):
            loginfo('Skip letsencrypt verification since we have a valid certificate')
            return

    loginfo('Starting letsencrypt verification')
    # Create a temporary nginx conf to start a server that letsencrypt can reach
    context = {
        'https': False,
        'domain': domain,
    }
    render_template('/templates/seafile.nginx.conf.template',
                    '/etc/nginx/sites-enabled/seafile.nginx.conf', context)

    call('nginx -s reload')
    time.sleep(2)

    call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain))
    # if call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain), check_call=False) != 0:
    #     eprint('Now waiting 1000s for postmortem')
    #     time.sleep(1000)
    #     sys.exit(1)


def generate_local_nginx_conf():
    # Now create the final nginx configuration
    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'https': is_https(),
        'domain': domain,
    }
    render_template(
        '/templates/seafile.nginx.conf.template',
        '/etc/nginx/sites-enabled/seafile.nginx.conf',
        context
    )


def is_https():
    return get_conf('SEAFILE_SERVER_LETSENCRYPT', 'false').lower() == 'true'


def parse_args():
    ap = argparse.ArgumentParser()
    ap.add_argument('--parse-ports', action='store_true')

    return ap.parse_args()


def init_seafile_server():
    version_stamp_file = get_version_stamp_file()
    if exists(join(shared_seafiledir, 'seafile-data')):
        if not exists(version_stamp_file):
            update_version_stamp(os.environ['SEAFILE_VERSION'])
        # Recreate the symbolic link, which is removed when the container stops.
        latest_version_dir = '/opt/seafile/seafile-server-latest'
        current_version_dir = '/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-' + read_version_stamp()
        if not exists(latest_version_dir):
            call('ln -sf ' + current_version_dir + ' ' + latest_version_dir)
        loginfo('Skip running setup-seafile-mysql.py because there is an existing seafile-data folder.')
        return

    loginfo('Now running setup-seafile-mysql.py in auto mode.')
    env = {
        'SERVER_NAME': 'seafile',
        'SERVER_IP': get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com'),
        'MYSQL_USER': 'seafile',
        'MYSQL_USER_PASSWD': str(uuid.uuid4()),
        'MYSQL_USER_HOST': '127.0.0.1',
        # Default MariaDB root user has empty password and can only connect from localhost.
        'MYSQL_ROOT_PASSWD': '',
    }

    # Change the script to allow the mysql root password to be empty
    call('''sed -i -e 's/if not mysql_root_passwd/if not mysql_root_passwd and "MYSQL_ROOT_PASSWD" not in os.environ/g' {}'''
         .format(get_script('setup-seafile-mysql.py')))

    setup_script = get_script('setup-seafile-mysql.sh')
    call('{} auto -n seafile'.format(setup_script), env=env)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    proto = 'https' if is_https() else 'http'
    with open(join(topdir, 'conf', 'seahub_settings.py'), 'a+') as fp:
        fp.write('\n')
        fp.write('FILE_SERVER_ROOT = "{proto}://{domain}/seafhttp"'.format(proto=proto, domain=domain))
        fp.write('\n')

    # By default ccnet-server binds to the unix socket file
    # "/opt/seafile/ccnet/ccnet.sock", but /opt/seafile/ccnet/ is a volume
    # mounted from the docker host, and on windows and some linux environments
    # it's not possible to create unix sockets in externally-mounted
    # directories. So we change the unix socket file path to
    # "/opt/seafile/ccnet.sock" to avoid this problem.
    with open(join(topdir, 'conf', 'ccnet.conf'), 'a+') as fp:
        fp.write('\n')
        fp.write('[Client]\n')
        fp.write('UNIX_SOCKET = /opt/seafile/ccnet.sock\n')
        fp.write('\n')

    files_to_copy = ['conf', 'ccnet', 'seafile-data', 'seahub-data', 'pro-data']
    for fn in files_to_copy:
        src = join(topdir, fn)
        dst = join(shared_seafiledir, fn)
        if not exists(dst) and exists(src):
            shutil.move(src, shared_seafiledir)
            call('ln -sf ' + join(shared_seafiledir, fn) + ' ' + src)

    loginfo('Updating version stamp')
    update_version_stamp(os.environ['SEAFILE_VERSION'])
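
The bootstrap code above reads every setting through `get_conf`, which is a plain environment lookup with an upper-cased key, and derives `FILE_SERVER_ROOT` from the hostname and the letsencrypt flag. A minimal standalone sketch of that pattern (Python 3; the environment values are example data, not real deployment settings):

```python
import os

def get_conf(key, default=None):
    # Same behaviour as utils.get_conf: upper-case the key, read the environment.
    return os.environ.get(key.upper(), default)

# Example values, standing in for the container's environment settings.
os.environ['SEAFILE_SERVER_HOSTNAME'] = 'seafile.example.com'
os.environ['SEAFILE_SERVER_LETSENCRYPT'] = 'true'

domain = get_conf('seafile_server_hostname', 'seafile.example.com')
proto = 'https' if get_conf('SEAFILE_SERVER_LETSENCRYPT', 'false').lower() == 'true' else 'http'
print('FILE_SERVER_ROOT = "{proto}://{domain}/seafhttp"'.format(proto=proto, domain=domain))
# → FILE_SERVER_ROOT = "https://seafile.example.com/seafhttp"
```

Because keys are upper-cased before the lookup, `get_conf('seafile_server_hostname')` and `get_conf('SEAFILE_SERVER_HOSTNAME')` resolve to the same variable.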
@@ -1,81 +0,0 @@
#!/bin/bash

set -e
set -o pipefail

if [[ $SEAFILE_BOOTSRAP != "" ]]; then
    exit 0
fi

if [[ $TIME_ZONE != "" ]]; then
    time_zone=/usr/share/zoneinfo/$TIME_ZONE
    if [[ ! -e $time_zone ]]; then
        echo "invalid time zone"
        exit 1
    else
        ln -snf $time_zone /etc/localtime
        echo "$TIME_ZONE" > /etc/timezone
    fi
fi

dirs=(
    conf
    ccnet
    seafile-data
    seahub-data
    pro-data
    seafile-license.txt
)

for d in ${dirs[*]}; do
    src=/shared/seafile/$d
    if [[ -e $src ]]; then
        rm -rf /opt/seafile/$d && ln -sf $src /opt/seafile
    fi
done

if [[ ! -e /shared/logs/seafile ]]; then
    mkdir -p /shared/logs/seafile
fi
rm -rf /opt/seafile/logs && ln -sf /shared/logs/seafile/ /opt/seafile/logs

current_version_dir=/opt/seafile/${SEAFILE_SERVER}-${SEAFILE_VERSION}
latest_version_dir=/opt/seafile/seafile-server-latest
seahub_data_dir=/shared/seafile/seahub-data

if [[ ! -e $seahub_data_dir ]]; then
    mkdir -p $seahub_data_dir
fi

media_dirs=(
    avatars
    custom
)
for d in ${media_dirs[*]}; do
    source_media_dir=${current_version_dir}/seahub/media/$d
    if [ -e ${source_media_dir} ] && [ ! -e ${seahub_data_dir}/$d ]; then
        mv $source_media_dir ${seahub_data_dir}/$d
    fi
    rm -rf $source_media_dir && ln -sf ${seahub_data_dir}/$d $source_media_dir
done

rm -rf /var/lib/mysql
if [[ ! -e /shared/db ]]; then
    mkdir -p /shared/db
fi
ln -sf /shared/db /var/lib/mysql

if [[ ! -e /shared/logs/var-log ]]; then
    chmod 777 /var/log -R
    mv /var/log /shared/logs/var-log
fi
rm -rf /var/log && ln -sf /shared/logs/var-log /var/log

if [[ ! -e $latest_version_dir ]]; then
    ln -sf $current_version_dir $latest_version_dir
fi

chmod u+x /scripts/*

echo $PYTHON
$PYTHON /scripts/init.py
@@ -1,37 +0,0 @@
#!/bin/bash

set -e

# Before
SEAFILE_DIR=/opt/seafile/seafile-server-latest

if [[ $SEAFILE_SERVER != *"pro"* ]]; then
    echo "Seafile CE: Stop Seafile to perform offline garbage collection."
    $SEAFILE_DIR/seafile.sh stop

    echo "Waiting for the server to shut down properly..."
    sleep 5
else
    echo "Seafile Pro: Perform online garbage collection."
fi

# Do it
(
    set +e
    $SEAFILE_DIR/seaf-gc.sh "$@" | tee -a /var/log/gc.log
    # We want to preserve the exit code of seaf-gc.sh
    exit "${PIPESTATUS[0]}"
)

gc_exit_code=$?

# After
if [[ $SEAFILE_SERVER != *"pro"* ]]; then
    echo "Giving the server some time..."
    sleep 3

    $SEAFILE_DIR/seafile.sh start
fi

exit $gc_exit_code
@@ -1,46 +0,0 @@
#!/usr/bin/env python
# coding: UTF-8

"""
Starts the seafile/seahub server and watches the controller process. It is
the entrypoint command of the docker container.
"""

import json
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import time

from utils import (
    call, get_conf, get_install_dir, get_script, get_command_output,
    render_template, wait_for_mysql
)
from upgrade import check_upgrade
from bootstrap import init_seafile_server, is_https, init_letsencrypt, generate_local_nginx_conf


shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'
installdir = get_install_dir()
topdir = dirname(installdir)


def main():
    call('cp -rf /scripts/setup-seafile-mysql.py ' + join(installdir, 'setup-seafile-mysql.py'))
    if not exists(shared_seafiledir):
        os.mkdir(shared_seafiledir)
    if not exists(generated_dir):
        os.makedirs(generated_dir)

    if is_https():
        init_letsencrypt()
    generate_local_nginx_conf()

    if not exists(join(shared_seafiledir, 'conf')):
        init_seafile_server()


if __name__ == '__main__':
    main()
File diff suppressed because it is too large
@@ -1,46 +0,0 @@
#!/bin/bash

set -e

ssldir=${1:?"error params"}
domain=${2:?"error params"}

letsencryptdir=$ssldir/letsencrypt
letsencrypt_script=$letsencryptdir/acme_tiny.py

ssl_account_key=${domain}.account.key
ssl_csr=${domain}.csr
ssl_key=${domain}.key
ssl_crt=${domain}.crt

mkdir -p /var/www/challenges && chmod -R 777 /var/www/challenges
mkdir -p $ssldir

if ! [[ -d $letsencryptdir ]]; then
    git clone git://github.com/diafygi/acme-tiny.git $letsencryptdir
else
    cd $letsencryptdir
    git pull origin master:master
fi

cd $ssldir

if [[ ! -e ${ssl_account_key} ]]; then
    openssl genrsa 4096 > ${ssl_account_key}
fi

if [[ ! -e ${ssl_key} ]]; then
    openssl genrsa 4096 > ${ssl_key}
fi

if [[ ! -e ${ssl_csr} ]]; then
    openssl req -new -sha256 -key ${ssl_key} -subj "/CN=$domain" > $ssl_csr
fi

python $letsencrypt_script --account-key ${ssl_account_key} --csr $ssl_csr --acme-dir /var/www/challenges/ > ./signed.crt
curl -sSL -o intermediate.pem https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem
cat signed.crt intermediate.pem > ${ssl_crt}

nginx -s reload

echo "Nginx reloaded."
@@ -1,61 +0,0 @@
import os
import sys
import time
import json
import argparse
from os.path import join, exists, dirname

from upgrade import check_upgrade
from utils import call, get_conf, get_script, get_command_output, get_install_dir

installdir = get_install_dir()
topdir = dirname(installdir)

def watch_controller():
    maxretry = 4
    retry = 0
    while retry < maxretry:
        controller_pid = get_command_output('ps aux | grep seafile-controller | grep -v grep || true').strip()
        garbage_collector_pid = get_command_output('ps aux | grep /scripts/gc.sh | grep -v grep || true').strip()
        if not controller_pid and not garbage_collector_pid:
            retry += 1
        else:
            retry = 0
        time.sleep(5)
    print 'seafile controller exited unexpectedly.'
    sys.exit(1)

def main(args):
    call('/scripts/create_data_links.sh')
    check_upgrade()
    os.chdir(installdir)
    call('service nginx start &')

    admin_pw = {
        'email': get_conf('SEAFILE_ADMIN_EMAIL', 'me@example.com'),
        'password': get_conf('SEAFILE_ADMIN_PASSWORD', 'asecret'),
    }
    password_file = join(topdir, 'conf', 'admin.txt')
    with open(password_file, 'w+') as fp:
        json.dump(admin_pw, fp)

    try:
        call('{} start'.format(get_script('seafile.sh')))
        call('{} start'.format(get_script('seahub.sh')))
        if args.mode == 'backend':
            call('{} start'.format(get_script('seafile-background-tasks.sh')))
    finally:
        if exists(password_file):
            os.unlink(password_file)

    print 'seafile server is running now.'
    try:
        watch_controller()
    except KeyboardInterrupt:
        print 'Stopping seafile server.'
        sys.exit(0)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Seafile cluster start script')
    parser.add_argument('--mode')
    main(parser.parse_args())
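
`watch_controller` above tolerates transient probe failures: it only gives up after `maxretry` consecutive checks find neither the controller nor a running GC, and any sighting resets the counter. A Python 3 re-sketch of just that counter logic, with the process probe replaced by a list of hypothetical observations so it runs standalone:

```python
def consecutive_failures_before_exit(observations, maxretry=4):
    # Mirrors the loop in watch_controller: a run of `maxretry`
    # consecutive "process missing" observations triggers shutdown;
    # any sighting of the process resets the counter.
    retry = 0
    for alive in observations:
        if not alive:
            retry += 1
        else:
            retry = 0
        if retry >= maxretry:
            return True  # watch_controller would sys.exit(1) here
    return False

# A transient blip of 3 missed probes is tolerated; 4 in a row is fatal.
print(consecutive_failures_before_exit([True, False, False, False, True]))   # → False
print(consecutive_failures_before_exit([True, False, False, False, False]))  # → True
```

With a 5-second sleep between probes, this means the container only exits after the controller has been gone for roughly 20 seconds.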
@@ -1,18 +0,0 @@
#!/bin/bash

function start-front-end() {
    python /scripts/start.py
}

function start-back-end() {
    python /scripts/start.py --mode backend
}

case $1 in
    "front-end" )
        start-front-end
        ;;
    "back-end" )
        start-back-end
        ;;
esac
@@ -1,82 +0,0 @@
#!/usr/bin/env python
# coding: UTF-8

"""
This script is used to run the proper upgrade scripts automatically.
"""

import json
import re
import glob
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import time

from utils import (
    call, get_install_dir, get_script, get_command_output, replace_file_pattern,
    read_version_stamp, wait_for_mysql, update_version_stamp, loginfo
)

installdir = get_install_dir()
topdir = dirname(installdir)

def collect_upgrade_scripts(from_version, to_version):
    """
    Given the currently installed version, calculate which upgrade scripts we
    need to run to upgrade it to the latest version.

    For example, given current version 5.0.1 and target version 6.1.0, and these
    upgrade scripts:

        upgrade_4.4_5.0.sh
        upgrade_5.0_5.1.sh
        upgrade_5.1_6.0.sh
        upgrade_6.0_6.1.sh

    We need to run upgrade_5.0_5.1.sh, upgrade_5.1_6.0.sh, and upgrade_6.0_6.1.sh.
    """
    from_major_ver = '.'.join(from_version.split('.')[:2])
    to_major_ver = '.'.join(to_version.split('.')[:2])

    scripts = []
    for fn in sorted(glob.glob(join(installdir, 'upgrade', 'upgrade_*_*.sh'))):
        va, vb = parse_upgrade_script_version(fn)
        if va >= from_major_ver and vb <= to_major_ver:
            scripts.append(fn)
    return scripts

def parse_upgrade_script_version(script):
    script = basename(script)
    m = re.match(r'upgrade_([0-9+.]+)_([0-9+.]+)\.sh', script)
    return m.groups()

def check_upgrade():
    last_version = read_version_stamp()
    current_version = os.environ['SEAFILE_VERSION']
    if last_version == current_version:
        return

    scripts_to_run = collect_upgrade_scripts(from_version=last_version, to_version=current_version)
    for script in scripts_to_run:
        loginfo('Running script {}'.format(script))
        # Here we use a trick: use a version stamp like 6.1.0 to prevent running
        # all upgrade scripts before 6.1 again (because 6.1 < 6.1.0 in python)
        new_version = parse_upgrade_script_version(script)[1] + '.0'

        replace_file_pattern(script, 'read dummy', '')
        call(script)

        update_version_stamp(new_version)

    update_version_stamp(current_version)

def main():
    wait_for_mysql()

    os.chdir(installdir)
    check_upgrade()

if __name__ == '__main__':
    main()
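
The stamp trick in `check_upgrade` relies on plain string comparison of version stamps: `'6.1' < '6.1.0'` lexicographically (the file is Python 2, but the comparison behaves the same in Python 3), so stamping `'6.1.0'` after running `upgrade_6.0_6.1.sh` prevents every script up to 6.1 from being collected again. A tiny runnable illustration:

```python
def major(version):
    # Same reduction collect_upgrade_scripts applies: '6.1.0' -> '6.1'.
    return '.'.join(version.split('.')[:2])

assert '6.1' < '6.1.0'          # lexicographic string comparison, not numeric
assert major('6.1.0') == '6.1'  # the padded stamp still maps back to major 6.1
print('ok')
# → ok
```

Note the comparison is lexicographic throughout, which is also why the version stamps here are only safe to compare as long as the major version components stay single-digit-compatible in string order.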
@@ -1,287 +0,0 @@
# coding: UTF-8

from __future__ import print_function
from ConfigParser import ConfigParser
from contextlib import contextmanager
import os
import datetime
from os.path import abspath, basename, exists, dirname, join, isdir, expanduser
import platform
import sys
import subprocess
import time
import logging
import logging.config
import click
import termcolor
import colorlog

logger = logging.getLogger('.utils')

DEBUG_ENABLED = os.environ.get('SEAFILE_DOCKER_VERBOSE', '').lower() in ('true', '1', 'yes')

def eprint(*a, **kw):
    kw['file'] = sys.stderr
    print(*a, **kw)

def identity(msg, *a, **kw):
    return msg

colored = identity if not os.isatty(sys.stdin.fileno()) else termcolor.colored
red = lambda s: colored(s, 'red')
green = lambda s: colored(s, 'green')

def underlined(msg):
    return '\x1b[4m{}\x1b[0m'.format(msg)

def sudo(*a, **kw):
    call('sudo ' + a[0], *a[1:], **kw)

def _find_flag(args, *opts, **kw):
    is_flag = kw.get('is_flag', False)
    if is_flag:
        return any([opt in args for opt in opts])
    else:
        for opt in opts:
            try:
                return args[args.index(opt) + 1]
            except ValueError:
                pass

def call(*a, **kw):
    dry_run = kw.pop('dry_run', False)
    quiet = kw.pop('quiet', DEBUG_ENABLED)
    cwd = kw.get('cwd', os.getcwd())
    check_call = kw.pop('check_call', True)
    reduct_args = kw.pop('reduct_args', [])
    if not quiet:
        toprint = a[0]
        args = [x.strip('"') for x in a[0].split() if '=' not in x]
        for arg in reduct_args:
            value = _find_flag(args, arg)
            toprint = toprint.replace(value, '{}**reducted**'.format(value[:3]))
        logdbg('calling: ' + green(toprint))
        logdbg('cwd: ' + green(cwd))
    kw.setdefault('shell', True)
    if not dry_run:
        if check_call:
            return subprocess.check_call(*a, **kw)
        else:
            return subprocess.Popen(*a, **kw).wait()

@contextmanager
def cd(path):
    path = expanduser(path)
    olddir = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(olddir)

def must_makedir(p):
    p = expanduser(p)
    if not exists(p):
        logger.info('created folder %s', p)
        os.makedirs(p)
    else:
        logger.debug('folder %s already exists', p)

def setup_colorlog():
    logging.config.dictConfig({
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'standard': {
                'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s'
            },
            'colored': {
                '()': 'colorlog.ColoredFormatter',
                'format': "%(log_color)s[%(asctime)s]%(reset)s %(blue)s%(message)s",
                'datefmt': '%m/%d/%Y %H:%M:%S',
            },
        },
        'handlers': {
            'default': {
                'level': 'INFO',
                'formatter': 'colored',
                'class': 'logging.StreamHandler',
            },
        },
        'loggers': {
            '': {
                'handlers': ['default'],
                'level': 'INFO',
                'propagate': True
            },
            'django.request': {
                'handlers': ['default'],
                'level': 'WARN',
                'propagate': False
            },
        }
    })

    logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(
        logging.WARNING)


def setup_logging(level=logging.INFO):
    kw = {
        'format': '[%(asctime)s][%(module)s]: %(message)s',
        'datefmt': '%m/%d/%Y %H:%M:%S',
        'level': level,
        'stream': sys.stdout
    }

    logging.basicConfig(**kw)
    logging.getLogger('requests.packages.urllib3.connectionpool').setLevel(
        logging.WARNING)

def get_process_cmd(pid, env=False):
    env = 'e' if env else ''
    try:
        return subprocess.check_output('ps {} -o command {}'.format(env, pid),
                                       shell=True).strip().splitlines()[1]
    # except Exception, e:
    #     print(e)
    except:
        return None

def get_match_pids(pattern):
    pgrep_output = subprocess.check_output(
        'pgrep -f "{}" || true'.format(pattern),
        shell=True).strip()
    return [int(pid) for pid in pgrep_output.splitlines()]

def ask_for_confirm(msg):
    confirm = click.prompt(msg, default='Y')
    return confirm.lower() in ('y', 'yes')

def confirm_command_to_run(cmd):
    if ask_for_confirm('Run the command: {} ?'.format(green(cmd))):
        call(cmd)
    else:
        sys.exit(1)

def git_current_commit():
    return get_command_output('git rev-parse --short HEAD').strip()

def get_command_output(cmd):
    shell = not isinstance(cmd, list)
    return subprocess.check_output(cmd, shell=shell)

def ask_yes_or_no(msg, prompt='', default=None):
    print('\n' + msg + '\n')
    while True:
        answer = raw_input(prompt + ' [yes/no] ').lower()
        if not answer:
            continue

        if answer not in ('yes', 'no', 'y', 'n'):
            continue

        if answer in ('yes', 'y'):
            return True
        else:
            return False

def git_branch_exists(branch):
    return call('git rev-parse --short --verify {}'.format(branch)) == 0

def to_unicode(s):
    if isinstance(s, str):
        return s.decode('utf-8')
    else:
        return s

def to_utf8(s):
    if isinstance(s, unicode):
        return s.encode('utf-8')
    else:
        return s

def git_commit_time(refspec):
    return int(get_command_output('git log -1 --format="%ct" {}'.format(
        refspec)).strip())

def get_seafile_version():
    return os.environ['SEAFILE_VERSION']

def get_install_dir():
    return join('/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-{}'.format(get_seafile_version()))

def get_script(script):
    return join(get_install_dir(), script)


_config = None

def get_conf(key, default=None):
    key = key.upper()
    return os.environ.get(key, default)

def _add_default_context(context):
    default_context = {
        'current_timestr': datetime.datetime.now().strftime('%m/%d/%Y %H:%M:%S'),
    }
    for k in default_context:
        context.setdefault(k, default_context[k])

def render_template(template, target, context):
    from jinja2 import Environment, FileSystemLoader
    env = Environment(loader=FileSystemLoader(dirname(template)))
    _add_default_context(context)
    content = env.get_template(basename(template)).render(**context)
    with open(target, 'w') as fp:
        fp.write(content)

def logdbg(msg):
    if DEBUG_ENABLED:
        msg = '[debug] ' + msg
        loginfo(msg)

def loginfo(msg):
    msg = '[{}] {}'.format(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'), green(msg))
    eprint(msg)

def cert_has_valid_days(cert, days):
    assert exists(cert)

    secs = 86400 * int(days)
    retcode = call('openssl x509 -checkend {} -noout -in {}'.format(secs, cert), check_call=False)
    return retcode == 0

def get_version_stamp_file():
    return '/shared/seafile/seafile-data/current_version'

def read_version_stamp(fn=get_version_stamp_file()):
    assert exists(fn), 'version stamp file {} does not exist!'.format(fn)
    with open(fn, 'r') as fp:
        return fp.read().strip()

def update_version_stamp(version, fn=get_version_stamp_file()):
    with open(fn, 'w') as fp:
        fp.write(version + '\n')

def wait_for_mysql():
    while not exists('/var/run/mysqld/mysqld.sock'):
        logdbg('waiting for mysql server to be ready')
        time.sleep(2)
    logdbg('mysql server is ready')

def wait_for_nginx():
    while True:
        logdbg('waiting for nginx server to be ready')
        output = get_command_output('netstat -nltp')
        if ':80 ' in output:
            logdbg(output)
            logdbg('nginx is ready')
            return
        time.sleep(2)

def replace_file_pattern(fn, pattern, replacement):
    with open(fn, 'r') as fp:
        content = fp.read()
    with open(fn, 'w') as fp:
        fp.write(content.replace(pattern, replacement))
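
`replace_file_pattern` at the end of utils.py is a plain read-substitute-rewrite; `check_upgrade` uses it to strip the interactive `read dummy` prompt from upgrade scripts before running them unattended. A self-contained rehearsal of that on a throwaway temp file (written so it also runs under Python 3):

```python
import os
import tempfile

def replace_file_pattern(fn, pattern, replacement):
    # Same logic as the utility above: read the whole file, substitute, rewrite.
    with open(fn, 'r') as fp:
        content = fp.read()
    with open(fn, 'w') as fp:
        fp.write(content.replace(pattern, replacement))

# Simulate an upgrade script that pauses for user input.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, 'w') as fp:
    fp.write('echo upgrading\nread dummy\necho done\n')

replace_file_pattern(path, 'read dummy', '')
with open(path) as fp:
    print(fp.read().count('read dummy'))  # → 0
os.unlink(path)
```

Note this is a literal substring replacement, not a regex, and it rewrites the file in place without any backup.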
58  compose/docker-compose.yml  Normal file
@@ -0,0 +1,58 @@
version: '3.8'
services:
  seafile:
    image: ggogel/seafile:8.0.2
    volumes:
      - seafile-data:/shared
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev
      - TIME_ZONE=Europe/Berlin
      - SEAFILE_ADMIN_EMAIL=me@example.com
      - SEAFILE_ADMIN_PASSWORD=asecret
      - SEAFILE_SERVER_HOSTNAME=seafile.mydomain.com # Mandatory on first deployment!
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net

  seahub-media:
    image: ggogel/seahub-media:8.0.2
    volumes:
      - seafile-data/seafile/seahub-data/avatars:/usr/share/caddy/media/avatars
      - seafile-data/seafile/seahub-data/custom:/usr/share/caddy/media/custom
    networks:
      - seafile-net

  db:
    image: mariadb:latest
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - seafile-mariadb:/var/lib/mysql
    networks:
      - seafile-net

  memcached:
    image: memcached:latest
    entrypoint: memcached -m 1024
    networks:
      - seafile-net

  caddy:
    image: ggogel/seafile-caddy
    ports:
      - 80:80 # Point your reverse proxy to port 80 of this service
    networks:
      - seafile-net

networks:
  seafile-net:
    driver: overlay
    internal: true

volumes:
  seafile-data:
  seafile-mariadb:
@@ -1,67 +0,0 @@
server_version=7.0.11

base_image=seafileltd/base-mc:18.04
base_image_squashed=seafileltd/base-mc:18.04-squashed
pro_base_image=seafileltd/pro-base-mc:18.04
pro_base_image_squashed=seafileltd/pro-base-mc:18.04-squashed
server_image=seafileltd/seafile-mc:$(server_version)
server_image_squashed=seafileltd/seafile-mc:$(server_version)-squashed
pro_server_image=seafileltd/seafile-pro-mc:$(server_version)
pro_server_image_squashed=seafileltd/seafile-pro-mc:$(server_version)-squashed
latest_pro_server_image=seafileltd/seafile-pro-mc:latest
latest_server_image=seafileltd/seafile-mc:latest

all:
	@echo
	@echo Please use '"make base"' or '"make server"' or '"make push"'.
	@echo

base:
	docker pull phusion/baseimage:0.11
	docker-squash --tag phusion/baseimage:latest phusion/baseimage:0.11
	docker tag phusion/baseimage:latest phusion/baseimage:0.11
	cd base && docker build -t $(base_image) .
	docker-squash --tag $(base_image_squashed) $(base_image)
	docker tag $(base_image_squashed) $(base_image)
	docker rmi `docker images --filter "dangling=true" -q --no-trunc`

server:
	cd seafile && cp -rf ../../scripts ./ && docker build -t $(server_image) .
	docker-squash --tag $(server_image_squashed) $(server_image) --from-layer=$(base_image)
	docker tag $(server_image_squashed) $(server_image)
	docker tag $(server_image) $(latest_server_image)
	docker rmi `docker images --filter "dangling=true" -q --no-trunc`

pro-base:
	cd pro_base && docker build -t $(pro_base_image) .
	docker-squash --tag $(pro_base_image_squashed) $(pro_base_image)
	docker tag $(pro_base_image_squashed) $(pro_base_image)
	docker rmi `docker images --filter "dangling=true" -q --no-trunc`

pro-server:
	cd pro_seafile && cp -rf ../../scripts ./ && docker build -t $(pro_server_image) .
	docker-squash --tag $(pro_server_image_squashed) $(pro_server_image) --from-layer=$(pro_base_image)
	docker tag $(pro_server_image_squashed) $(pro_server_image)
	docker tag $(pro_server_image) $(latest_pro_server_image)
	docker rmi `docker images --filter "dangling=true" -q --no-trunc`

push-base:
	docker push $(base_image)

push-pro-base:
	docker tag $(pro_base_image) ${host}/$(pro_base_image)
	docker push ${host}/$(pro_base_image)

push-server:
	docker push $(server_image)
	docker push $(latest_server_image)

push-pro-server:
	docker tag $(pro_server_image) ${host}/$(pro_server_image)
	docker tag $(pro_server_image) ${host}/$(latest_pro_server_image)
	docker push ${host}/$(pro_server_image)
	docker push ${host}/$(latest_pro_server_image)

push: push-base push-server

.PHONY: base server push push-base push-server
@@ -1,48 +0,0 @@
# Latest phusion baseimage as of 20180412, based on ubuntu 18.04
# See https://hub.docker.com/r/phusion/baseimage/tags/
FROM phusion/baseimage:0.11

ENV UPDATED_AT=20180412 \
    DEBIAN_FRONTEND=noninteractive

CMD ["/sbin/my_init", "--", "bash", "-l"]

RUN apt-get update -qq && apt-get -qq -y install nginx

# Utility tools
RUN apt-get install -qq -y vim htop net-tools psmisc git wget curl

# Guideline for installing python libs: if a lib has a C component (e.g.
# python-imaging depends on libjpeg/libpng), we install it with apt-get.
# Otherwise we install it with pip.
RUN apt-get install -y python2.7-dev python-ldap python-mysqldb zlib1g-dev libmemcached-dev gcc
RUN curl -sSL -o /tmp/get-pip.py https://bootstrap.pypa.io/get-pip.py && \
    python /tmp/get-pip.py && \
    rm -rf /tmp/get-pip.py && \
    pip install -U wheel

ADD requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt

COPY services /services

RUN mkdir -p /etc/service/nginx && \
    rm -f /etc/nginx/sites-enabled/* /etc/nginx/conf.d/* && \
    mv /services/nginx.conf /etc/nginx/nginx.conf && \
    mv /services/nginx.sh /etc/service/nginx/run

RUN mkdir -p /etc/my_init.d && rm -f /etc/my_init.d/00_regen_ssh_host_keys.sh

RUN rm -rf \
    /root/.cache \
    /root/.npm \
    /root/.pip \
    /usr/local/share/doc \
    /usr/share/doc \
    /usr/share/man \
    /usr/share/vim/vim74/doc \
    /usr/share/vim/vim74/lang \
    /usr/share/vim/vim74/spell/en* \
    /usr/share/vim/vim74/tutor \
    /var/lib/apt/lists/* \
    /tmp/*
@@ -1,47 +0,0 @@
#!/bin/bash

# Init mysql data dir.
# Borrowed from https://github.com/fideloper/docker-mysql/blob/master/etc/my_init.d/99_mysql_setup.sh

if [[ ! -d /var/lib/mysql/mysql ]]; then
    echo 'Rebuilding mysql data dir'

    chown -R mysql.mysql /var/lib/mysql

    mysql_install_db >/var/log/mysql-bootstrap.log 2>&1
    # TODO: print the log if mysql_install_db fails

    rm -rf /var/run/mysqld/*

    echo 'Starting mysqld'
    mysqld_safe >>/var/log/mysql-bootstrap.log 2>&1 &

    echo 'Waiting for mysqld to come online'
    # The sleep 1 is there to make sure that inotifywait starts up before the socket is created
    while [[ ! -S /var/run/mysqld/mysqld.sock ]]; do
        sleep 1
    done

    echo 'Fixing root password'
    /usr/bin/mysqladmin -u root password ''

    # if [ -d /var/lib/mysql/setup ]; then
    #     echo 'Found /var/lib/mysql/setup - scanning for SQL scripts'
    #     for sql in $(ls /var/lib/mysql/setup/*.sql 2>/dev/null | sort); do
    #         echo 'Running script:' $sql
    #         mysql -uroot -proot -e "\. $sql"
    #         mv $sql $sql.processed
    #     done
    # else
    #     echo 'No setup directory with extra sql scripts to run'
    # fi

    echo 'Shutting down mysqld'
    mysqladmin -uroot shutdown

    retry=0 maxretry=10
    while [[ -e /var/run/mysqld/mysqld.sock && $retry -le $maxretry ]]; do
        retry=$((retry+1))
        sleep 1
    done
fi
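The bootstrap script above waits for the mysqld socket with a bounded retry loop, both when starting and when shutting down. A minimal standalone sketch of that wait-with-timeout pattern — using an ordinary file instead of the mysqld socket, and hypothetical names — is:

```shell
#!/bin/bash
# Poll until a path exists, up to a retry budget, mirroring the socket loops above.
# wait_for_path PATH [MAXRETRY] -- exit status 0 iff the path appeared in time.
wait_for_path() {
    local path=$1 maxretry=${2:-10} retry=0
    while [[ ! -e $path && $retry -lt $maxretry ]]; do
        retry=$((retry+1))
        sleep 1
    done
    [[ -e $path ]]
}

# Usage sketch: the path shows up after 2 seconds; the wait succeeds within 5.
demo=$(mktemp -d)/ready
( sleep 2; touch "$demo" ) &
wait_for_path "$demo" 5 && echo 'came online'
```

Unlike the original loop, this helper reports failure via its exit status, so a caller can abort instead of proceeding against a socket that never appeared.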
@@ -1,12 +0,0 @@
# -*- mode: conf -*-

# Required by seafile/seahub
python-memcached==1.58
urllib3==1.19

# Utility libraries
click==6.6
termcolor==1.1.0
prettytable==0.7.2
colorlog==2.7.0
Jinja2==2.8
@@ -1,16 +0,0 @@
#
# This file is autogenerated by pip-compile
# To update, run:
#
#    pip-compile --output-file requirements.txt requirements.in
#
click==6.6
colorlog==2.7.0
Jinja2==2.8
MarkupSafe==0.23          # via jinja2
prettytable==0.7.2
termcolor==1.1.0
urllib3==1.19
Pillow==4.3.0
pylibmc==1.6.0
django-pylibmc==0.6.1
@@ -1,4 +0,0 @@
#!/bin/bash
# `/sbin/setuser memcache` runs the given command as the user `memcache`.
# If you omit that part, the command will be run as root.
exec /sbin/setuser memcache /usr/bin/memcached >>/var/log/memcached.log 2>&1
@@ -1,18 +0,0 @@
#!/bin/bash

set -e

shutdown_mysql() {
    if [[ -S /var/run/mysqld/mysqld.sock ]]; then
        mysqladmin -u root shutdown || true
    fi
}

trap shutdown_mysql EXIT

mkdir -p /var/run/mysqld
chown mysql:mysql /var/run/mysqld

rm -f /var/lib/mysql/aria_log_control

/sbin/setuser mysql /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --skip-log-error --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306 >/var/log/mysql.log 2>&1
@@ -1,34 +0,0 @@
daemon off;
user www-data;
worker_processes auto;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    server_names_hash_bucket_size 256;
    server_names_hash_max_size 1024;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_response_time';

    access_log /var/log/nginx/access.log seafileformat;
    error_log /var/log/nginx/error.log info;

    gzip on;
    gzip_types text/plain text/css application/javascript application/json text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80;
        location / {
            return 444;
        }
    }
}
@@ -1,3 +0,0 @@
#!/bin/bash
exec 2>&1
exec /usr/sbin/nginx
@@ -1,23 +0,0 @@
FROM seafileltd/base-mc:18.04

# syslog-ng and syslog-forwarder would mess up the container stdout, which is not good
# when debugging/upgrading.

# Fixes the "Sub-process /usr/bin/dpkg returned an error code (1)" error
# when running apt-get
RUN mkdir -p /usr/share/man/man1

RUN apt-get update \
    && apt-get install -y libmemcached-dev zlib1g-dev pwgen curl openssl poppler-utils libpython2.7 libreoffice \
    libreoffice-script-provider-python ttf-wqy-microhei ttf-wqy-zenhei xfonts-wqy python-requests tzdata \
    python-pip python-setuptools python-urllib3 python-ldap python-ceph

# S3 storage, OSS storage, PSD online preview, etc.
# depend on the following Python packages:
RUN pip install boto==2.43.0 \
    oss2==2.3.0 \
    psd-tools==1.4 \
    pycryptodome==3.7.2 \
    twilio==5.7.0

RUN apt clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
@@ -1,22 +0,0 @@
FROM seafileltd/pro-base-mc:18.04
WORKDIR /opt/seafile

ENV SEAFILE_VERSION=7.0.11 SEAFILE_SERVER=seafile-pro-server

RUN mkdir -p /etc/my_init.d

RUN mkdir -p /opt/seafile/

RUN curl -sSL -G -d "p=/pro/seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz&dl=1" https://download.seafile.com/d/6e5297246c/files/ \
    | tar xzf - -C /opt/seafile/

#ADD seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz /opt/seafile/

ADD scripts/create_data_links.sh /etc/my_init.d/01_create_data_links.sh

COPY scripts /scripts
COPY templates /templates

EXPOSE 80

CMD ["/sbin/my_init", "--", "/scripts/start.py"]
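The download step above streams the release tarball from curl straight into tar (`curl ... | tar xzf - -C ...`), so no intermediate archive is stored in the image layer. A self-contained sketch of the same streaming extraction, with a locally built archive standing in for the remote URL:

```shell
#!/bin/bash
set -e
# Build a small tar.gz, then extract it from a stream, as the Dockerfile does.
workdir=$(mktemp -d)
mkdir -p "$workdir/src" "$workdir/dest"
echo 'hello' > "$workdir/src/file.txt"
tar -C "$workdir/src" -czf "$workdir/pkg.tar.gz" file.txt

# `cat` stands in for `curl -sSL <url>`; tar reads the gzip stream from stdin (-).
cat "$workdir/pkg.tar.gz" | tar xzf - -C "$workdir/dest"
cat "$workdir/dest/file.txt"   # prints "hello"
```

In a Dockerfile the streaming form also avoids the separate `rm -f *.tar.gz` step that the wget-based base image below needs.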
@@ -1,3 +0,0 @@
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# min hour dayofmonth month dayofweek command
0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}
@@ -1,99 +0,0 @@
# -*- mode: nginx -*-
# Auto generated at {{ current_timestr }}
{% if https -%}
server {
    listen 80;
    server_name _ default_server;
    # allow certbot to connect to challenge location via HTTP Port 80
    # otherwise renewal request will fail
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }

    location / {
        rewrite ^ https://{{ domain }}$request_uri? permanent;
    }
}
{% endif -%}

server {
    {% if https -%}
    listen 443;
    ssl on;
    ssl_certificate /shared/ssl/{{ domain }}.crt;
    ssl_certificate_key /shared/ssl/{{ domain }}.key;

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    # TODO: More SSL security hardening: ssl_session_tickets & ssl_dhparam
    # ssl_session_tickets on;
    # ssl_session_ticket_key /etc/nginx/sessionticket.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
    {% else -%}
    listen 80;
    {% endif -%}

    server_name {{ domain }};

    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

        client_max_body_size 0;
        access_log /var/log/nginx/seahub.access.log seafileformat;
        error_log /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_request_buffering off;

        access_log /var/log/nginx/seafhttp.access.log seafileformat;
        error_log /var/log/nginx/seafhttp.error.log;
    }

    location /seafdav {
        client_max_body_size 0;
        fastcgi_pass 127.0.0.1:8080;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;

        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;

        access_log /var/log/nginx/seafdav.access.log seafileformat;
        error_log /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    # For letsencrypt
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }
}
@@ -1,71 +0,0 @@
# See https://hub.docker.com/r/phusion/baseimage/tags/
FROM phusion/baseimage:0.11
ENV SEAFILE_SERVER=seafile-pro-server SEAFILE_VERSION=

RUN apt-get update --fix-missing

# Utility tools
RUN apt-get install -y vim htop net-tools psmisc wget curl git

# For supporting setting the local time zone.
RUN export DEBIAN_FRONTEND=noninteractive && apt-get install tzdata -y

# Nginx
RUN apt-get install -y nginx

# Java
RUN apt-get install -y openjdk-8-jre

# Libreoffice
RUN apt-get install -y libreoffice libreoffice-script-provider-python libsm-dev
RUN apt-get install -y ttf-wqy-microhei ttf-wqy-zenhei xfonts-wqy

# Tools
RUN apt-get install -y zlib1g-dev pwgen openssl poppler-utils

# Python3
RUN apt-get install -y python3 python3-pip python3-setuptools python3-ldap python-rados
RUN python3.6 -m pip install --upgrade pip && rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 click termcolor colorlog pymysql \
    django==1.11.29 && rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 Pillow pylibmc captcha jinja2 \
    sqlalchemy django-pylibmc django-simple-captcha && \
    rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 boto oss2 pycryptodome twilio python-ldap configparser && \
    rm -r /root/.cache/pip

# Scripts
COPY scripts_7.1 /scripts
COPY templates /templates
COPY services /services
RUN chmod u+x /scripts/*

RUN mkdir -p /etc/my_init.d && \
    rm -f /etc/my_init.d/* && \
    cp /scripts/create_data_links.sh /etc/my_init.d/01_create_data_links.sh

RUN mkdir -p /etc/service/nginx && \
    rm -f /etc/nginx/sites-enabled/* /etc/nginx/conf.d/* && \
    mv /services/nginx.conf /etc/nginx/nginx.conf && \
    mv /services/nginx.sh /etc/service/nginx/run

# Seafile
WORKDIR /opt/seafile

RUN mkdir -p /opt/seafile/ && cd /opt/seafile/ && \
    wget -O seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz \
    "https://download.seafile.com/d/6e5297246c/files/?p=/pro/seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz&dl=1" && \
    tar -zxvf seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz && \
    rm -f seafile-pro-server_${SEAFILE_VERSION}_x86-64_Ubuntu.tar.gz

EXPOSE 80

CMD ["/sbin/my_init", "--", "/scripts/start.py"]
@@ -1,34 +0,0 @@
daemon off;
user www-data;
worker_processes auto;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    server_names_hash_bucket_size 256;
    server_names_hash_max_size 1024;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_response_time';

    access_log /var/log/nginx/access.log seafileformat;
    error_log /var/log/nginx/error.log info;

    gzip on;
    gzip_types text/plain text/css application/javascript application/json text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80;
        location / {
            return 444;
        }
    }
}
@@ -1,3 +0,0 @@
#!/bin/bash
exec 2>&1
exec /usr/sbin/nginx
@@ -1,3 +0,0 @@
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# min hour dayofmonth month dayofweek command
0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}
@@ -1,94 +0,0 @@
# -*- mode: nginx -*-
# Auto generated at {{ current_timestr }}
{% if https -%}
server {
    listen 80;
    server_name _ default_server;

    # allow certbot to connect to challenge location via HTTP Port 80
    # otherwise renewal request will fail
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }

    location / {
        rewrite ^ https://{{ domain }}$request_uri? permanent;
    }
}
{% endif -%}

server {
    {% if https -%}
    listen 443;
    ssl on;
    ssl_certificate /shared/ssl/{{ domain }}.crt;
    ssl_certificate_key /shared/ssl/{{ domain }}.key;

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    # TODO: More SSL security hardening: ssl_session_tickets & ssl_dhparam
    # ssl_session_tickets on;
    # ssl_session_ticket_key /etc/nginx/sessionticket.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
    {% else -%}
    listen 80;
    {% endif -%}

    server_name {{ domain }};

    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

        client_max_body_size 0;
        access_log /var/log/nginx/seahub.access.log seafileformat;
        error_log /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_request_buffering off;
        access_log /var/log/nginx/seafhttp.access.log seafileformat;
        error_log /var/log/nginx/seafhttp.error.log;
    }

    location /seafdav {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 1200s;
        client_max_body_size 0;

        access_log /var/log/nginx/seafdav.access.log seafileformat;
        error_log /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    # For letsencrypt
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }
}
@@ -1,5 +0,0 @@
FROM elasticsearch:5.6.16

ADD elasticsearch-analysis-ik-5.6.16.tar /usr/share/elasticsearch/plugins/

RUN chown -R elasticsearch.elasticsearch /usr/share/elasticsearch/plugins/ik
Binary file not shown.
@@ -1,26 +0,0 @@
FROM seafileltd/base-mc:18.04

# For supporting setting the local time zone.
RUN export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install tzdata -y

WORKDIR /opt/seafile

RUN mkdir -p /etc/my_init.d

ENV SEAFILE_VERSION=7.0.4 SEAFILE_SERVER=seafile-server

RUN mkdir -p /opt/seafile/ && \
    curl -sSL -o - https://download.seadrive.org/seafile-server_${SEAFILE_VERSION}_x86-64.tar.gz \
    | tar xzf - -C /opt/seafile/

# For using a TLS connection to the LDAP/AD server with docker-ce.
RUN find /opt/seafile/ \( -name "liblber-*" -o -name "libldap-*" -o -name "libldap_r*" -o -name "libsasl2.so*" \) -delete

ADD scripts/create_data_links.sh /etc/my_init.d/01_create_data_links.sh

COPY scripts /scripts
COPY templates /templates

EXPOSE 80

CMD ["/sbin/my_init", "--", "/scripts/start.py"]
@@ -1,3 +0,0 @@
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# min hour dayofmonth month dayofweek command
0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}
@@ -1,99 +0,0 @@
# -*- mode: nginx -*-
# Auto generated at {{ current_timestr }}
{% if https -%}
server {
    listen 80;
    server_name _ default_server;

    # allow certbot to connect to challenge location via HTTP Port 80
    # otherwise renewal request will fail
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }

    location / {
        rewrite ^ https://{{ domain }}$request_uri? permanent;
    }
}
{% endif -%}

server {
    {% if https -%}
    listen 443;
    ssl on;
    ssl_certificate /shared/ssl/{{ domain }}.crt;
    ssl_certificate_key /shared/ssl/{{ domain }}.key;

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    # TODO: More SSL security hardening: ssl_session_tickets & ssl_dhparam
    # ssl_session_tickets on;
    # ssl_session_ticket_key /etc/nginx/sessionticket.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
    {% else -%}
    listen 80;
    {% endif -%}

    server_name {{ domain }};

    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

        client_max_body_size 0;
        access_log /var/log/nginx/seahub.access.log seafileformat;
        error_log /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_request_buffering off;
        access_log /var/log/nginx/seafhttp.access.log seafileformat;
        error_log /var/log/nginx/seafhttp.error.log;
    }

    location /seafdav {
        client_max_body_size 0;
        fastcgi_pass 127.0.0.1:8080;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;

        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;

        access_log /var/log/nginx/seafdav.access.log seafileformat;
        error_log /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    # For letsencrypt
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }
}
@@ -1,59 +0,0 @@
# See https://hub.docker.com/r/phusion/baseimage/tags/
FROM phusion/baseimage:0.11
ENV SEAFILE_SERVER=seafile-server SEAFILE_VERSION=

RUN apt-get update --fix-missing

# Utility tools
RUN apt-get install -y vim htop net-tools psmisc wget curl git

# For supporting setting the local time zone.
RUN export DEBIAN_FRONTEND=noninteractive && apt-get install tzdata -y

# Nginx
RUN apt-get install -y nginx

# Python3
RUN apt-get install -y python3 python3-pip python3-setuptools
RUN python3.6 -m pip install --upgrade pip && rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 click termcolor colorlog pymysql \
    django==1.11.29 && rm -r /root/.cache/pip

RUN pip3 install --timeout=3600 Pillow pylibmc captcha jinja2 \
    sqlalchemy django-pylibmc django-simple-captcha && \
    rm -r /root/.cache/pip

# Scripts
COPY scripts_7.1 /scripts
COPY templates /templates
COPY services /services
RUN chmod u+x /scripts/*

RUN mkdir -p /etc/my_init.d && \
    rm -f /etc/my_init.d/* && \
    cp /scripts/create_data_links.sh /etc/my_init.d/01_create_data_links.sh

RUN mkdir -p /etc/service/nginx && \
    rm -f /etc/nginx/sites-enabled/* /etc/nginx/conf.d/* && \
    mv /services/nginx.conf /etc/nginx/nginx.conf && \
    mv /services/nginx.sh /etc/service/nginx/run

# Seafile
WORKDIR /opt/seafile

RUN mkdir -p /opt/seafile/ && cd /opt/seafile/ && \
    wget https://download.seadrive.org/seafile-server_${SEAFILE_VERSION}_x86-64.tar.gz && \
    tar -zxvf seafile-server_${SEAFILE_VERSION}_x86-64.tar.gz && \
    rm -f seafile-server_${SEAFILE_VERSION}_x86-64.tar.gz

# For using a TLS connection to the LDAP/AD server with docker-ce.
RUN find /opt/seafile/ \( -name "liblber-*" -o -name "libldap-*" -o -name "libldap_r*" -o -name "libsasl2.so*" \) -delete

EXPOSE 80

CMD ["/sbin/my_init", "--", "/scripts/start.py"]
@@ -1,34 +0,0 @@
daemon off;
user www-data;
worker_processes auto;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    server_names_hash_bucket_size 256;
    server_names_hash_max_size 1024;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_response_time';

    access_log /var/log/nginx/access.log seafileformat;
    error_log /var/log/nginx/error.log info;

    gzip on;
    gzip_types text/plain text/css application/javascript application/json text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80;
        location / {
            return 444;
        }
    }
}
@@ -1,3 +0,0 @@
#!/bin/bash
exec 2>&1
exec /usr/sbin/nginx
@@ -1,3 +0,0 @@
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# min hour dayofmonth month dayofweek command
0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}
@@ -1,94 +0,0 @@
# -*- mode: nginx -*-
# Auto generated at {{ current_timestr }}
{% if https -%}
server {
    listen 80;
    server_name _ default_server;

    # allow certbot to connect to challenge location via HTTP Port 80
    # otherwise renewal request will fail
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }

    location / {
        rewrite ^ https://{{ domain }}$request_uri? permanent;
    }
}
{% endif -%}

server {
    {% if https -%}
    listen 443;
    ssl on;
    ssl_certificate /shared/ssl/{{ domain }}.crt;
    ssl_certificate_key /shared/ssl/{{ domain }}.key;

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    # TODO: More SSL security hardening: ssl_session_tickets & ssl_dhparam
    # ssl_session_tickets on;
    # ssl_session_ticket_key /etc/nginx/sessionticket.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
    {% else -%}
    listen 80;
    {% endif -%}

    server_name {{ domain }};

    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

        client_max_body_size 0;
        access_log /var/log/nginx/seahub.access.log seafileformat;
        error_log /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_request_buffering off;
        access_log /var/log/nginx/seafhttp.access.log seafileformat;
        error_log /var/log/nginx/seafhttp.error.log;
    }

    location /seafdav {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 1200s;
        client_max_body_size 0;

        access_log /var/log/nginx/seafdav.access.log seafileformat;
        error_log /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    # For letsencrypt
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }
}
@@ -1,37 +0,0 @@
#!/bin/bash
set -e

ssldir=${1:?"error params"}
domain=${2:?"error params"}

letsencryptdir=$ssldir/letsencrypt
letsencrypt_script=$letsencryptdir/acme_tiny.py

ssl_account_key=${domain}.account.key
ssl_csr=${domain}.csr
ssl_key=${domain}.key
ssl_crt=${domain}.crt
renew_cert_script=/scripts/renew_cert.sh

if [[ ! -x ${renew_cert_script} ]]; then
    cat > ${renew_cert_script} << EOF
#!/bin/bash
python3 ${letsencrypt_script} --account-key ${ssldir}/${ssl_account_key} --csr ${ssldir}/${ssl_csr} --acme-dir /var/www/challenges/ > ${ssldir}/${ssl_crt} || exit
$(which nginx) -s reload
EOF

    chmod u+x ${renew_cert_script}

    if [[ ! -d "/var/www/challenges" ]]; then
        mkdir -p /var/www/challenges
    fi

    cat >> /etc/crontab << EOF
00 1 1 * * root /scripts/renew_cert.sh 2>> /var/log/acme_tiny.log
EOF

    echo 'Created a crontab entry to auto-renew the letsencrypt cert.'
else
    echo 'Found an existing cert renewal script.'
    echo 'Skipping crontab creation for letsencrypt since it was probably created before.'
fi
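The `if [[ ! -x ${renew_cert_script} ]]` guard above makes the setup idempotent: the renew script and the crontab entry are only created on the first run, so re-running the container does not duplicate cron entries. A standalone sketch of that pattern, with throwaway paths standing in for /scripts and /etc/crontab:

```shell
#!/bin/bash
workdir=$(mktemp -d)
renew_script=$workdir/renew_cert.sh
crontab_file=$workdir/crontab
touch "$crontab_file"

setup_once() {
    # Only create the renew script and append the cron entry if not done before.
    if [[ ! -x $renew_script ]]; then
        printf '#!/bin/bash\necho renewing\n' > "$renew_script"
        chmod u+x "$renew_script"
        echo '00 1 1 * * root renew-demo' >> "$crontab_file"
        echo 'created renew script and cron entry'
    else
        echo 'already set up, skipping'
    fi
}

setup_once   # first call performs the setup
setup_once   # second call is a no-op; the cron entry is not duplicated
```

The executability check (`-x`) doubles as a sanity check: if the script exists but was never made executable, the setup is simply redone.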
@@ -1,234 +0,0 @@
#!/usr/bin/env python3
#coding: UTF-8

"""
Bootstrapping seafile server, letsencrypt (verification & cron job).
"""

import argparse
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import uuid
import time

from utils import (
    call, get_conf, get_install_dir, loginfo,
    get_script, render_template, get_seafile_version, eprint,
    cert_has_valid_days, get_version_stamp_file, update_version_stamp,
    wait_for_mysql, wait_for_nginx, read_version_stamp
)

seafile_version = get_seafile_version()
installdir = get_install_dir()
topdir = dirname(installdir)
shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'

def init_letsencrypt():
    loginfo('Preparing for letsencrypt ...')
    wait_for_nginx()

    if not exists(ssl_dir):
        os.mkdir(ssl_dir)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'ssl_dir': ssl_dir,
        'domain': domain,
    }
    render_template(
        '/templates/letsencrypt.cron.template',
        join(generated_dir, 'letsencrypt.cron'),
        context
    )

    ssl_crt = '/shared/ssl/{}.crt'.format(domain)
    if exists(ssl_crt):
        loginfo('Found existing cert file {}'.format(ssl_crt))
        if cert_has_valid_days(ssl_crt, 30):
            loginfo('Skip letsencrypt verification since we have a valid certificate')
            if exists(join(ssl_dir, 'letsencrypt')):
                # Create a crontab to auto renew the cert for letsencrypt.
                call('/scripts/auto_renew_crt.sh {0} {1}'.format(ssl_dir, domain))
            return

    loginfo('Starting letsencrypt verification')
    # Create a temporary nginx conf to start a server, which will be accessed by letsencrypt
    context = {
        'https': False,
        'domain': domain,
    }
    if not os.path.isfile('/shared/nginx/conf/seafile.nginx.conf'):
        render_template('/templates/seafile.nginx.conf.template',
                        '/etc/nginx/sites-enabled/seafile.nginx.conf', context)

    call('nginx -s reload')
    time.sleep(2)

    call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain))
    # if call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain), check_call=False) != 0:
    #     eprint('Now waiting 1000s for postmortem')
    #     time.sleep(1000)
    #     sys.exit(1)

    # Create a crontab to auto renew the cert for letsencrypt.
    call('/scripts/auto_renew_crt.sh {0} {1}'.format(ssl_dir, domain))


def generate_local_nginx_conf():
    # Now create the final nginx configuration
    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'https': is_https(),
        'domain': domain,
    }

    if not os.path.isfile('/shared/nginx/conf/seafile.nginx.conf'):
        render_template(
            '/templates/seafile.nginx.conf.template',
            '/etc/nginx/sites-enabled/seafile.nginx.conf',
            context
        )
        nginx_etc_file = '/etc/nginx/sites-enabled/seafile.nginx.conf'
        nginx_shared_file = '/shared/nginx/conf/seafile.nginx.conf'
        call('mv {0} {1} && ln -sf {1} {0}'.format(nginx_etc_file, nginx_shared_file))


def is_https():
    return get_conf('SEAFILE_SERVER_LETSENCRYPT', 'false').lower() == 'true'


def parse_args():
    ap = argparse.ArgumentParser()
    ap.add_argument('--parse-ports', action='store_true')

    return ap.parse_args()


def init_seafile_server():
    version_stamp_file = get_version_stamp_file()
    if exists(join(shared_seafiledir, 'seafile-data')):
        if not exists(version_stamp_file):
            update_version_stamp(os.environ['SEAFILE_VERSION'])
        # The symbolic link is unlinked after docker finishes.
        latest_version_dir = '/opt/seafile/seafile-server-latest'
        current_version_dir = '/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-' + read_version_stamp()
        if not exists(latest_version_dir):
            call('ln -sf ' + current_version_dir + ' ' + latest_version_dir)
        loginfo('Skip running setup-seafile-mysql.py because there is an existing seafile-data folder.')
        return

    loginfo('Now running setup-seafile-mysql.py in auto mode.')
    env = {
        'SERVER_NAME': 'seafile',
        'SERVER_IP': get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com'),
        'MYSQL_USER': 'seafile',
        'MYSQL_USER_PASSWD': str(uuid.uuid4()),
        'MYSQL_USER_HOST': '%.%.%.%',
        'MYSQL_HOST': get_conf('DB_HOST', '127.0.0.1'),
        # Default MariaDB root user has an empty password and can only connect from localhost.
        'MYSQL_ROOT_PASSWD': get_conf('DB_ROOT_PASSWD', ''),
    }

    # Change the script to allow the mysql root password to be empty
    # call('''sed -i -e 's/if not mysql_root_passwd/if not mysql_root_passwd and "MYSQL_ROOT_PASSWD" not in os.environ/g' {}'''
    #      .format(get_script('setup-seafile-mysql.py')))

    # Change the script to disable check MYSQL_USER_HOST
|
||||
call('''sed -i -e '/def validate_mysql_user_host(self, host)/a \ \ \ \ \ \ \ \ return host' {}'''
|
||||
.format(get_script('setup-seafile-mysql.py')))
|
||||
|
||||
call('''sed -i -e '/def validate_mysql_host(self, host)/a \ \ \ \ \ \ \ \ return host' {}'''
|
||||
.format(get_script('setup-seafile-mysql.py')))
|
||||
|
||||
setup_script = get_script('setup-seafile-mysql.sh')
|
||||
call('{} auto -n seafile'.format(setup_script), env=env)
|
||||
|
||||
domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
|
||||
proto = 'https' if is_https() else 'http'
|
||||
with open(join(topdir, 'conf', 'seahub_settings.py'), 'a+') as fp:
|
||||
fp.write('\n')
|
||||
fp.write("""CACHES = {
|
||||
'default': {
|
||||
'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
|
||||
'LOCATION': 'memcached:11211',
|
||||
},
|
||||
'locmem': {
|
||||
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
|
||||
},
|
||||
}
|
||||
COMPRESS_CACHE_BACKEND = 'locmem'""")
|
||||
fp.write('\n')
|
||||
fp.write("TIME_ZONE = '{time_zone}'".format(time_zone=os.getenv('TIME_ZONE',default='Etc/UTC')))
|
||||
fp.write('\n')
|
||||
fp.write('FILE_SERVER_ROOT = "{proto}://{domain}/seafhttp"'.format(proto=proto, domain=domain))
|
||||
fp.write('\n')
|
||||
|
||||
# By default ccnet-server binds to the unix socket file
|
||||
# "/opt/seafile/ccnet/ccnet.sock", but /opt/seafile/ccnet/ is a mounted
|
||||
# volume from the docker host, and on windows and some linux environment
|
||||
# it's not possible to create unix sockets in an external-mounted
|
||||
# directories. So we change the unix socket file path to
|
||||
# "/opt/seafile/ccnet.sock" to avoid this problem.
|
||||
with open(join(topdir, 'conf', 'ccnet.conf'), 'a+') as fp:
|
||||
fp.write('\n')
|
||||
fp.write('[Client]\n')
|
||||
fp.write('UNIX_SOCKET = /opt/seafile/ccnet.sock\n')
|
||||
fp.write('\n')
|
||||
|
||||
# Disabled the Elasticsearch process on Seafile-container
|
||||
# Connection to the Elasticsearch-container
|
||||
if os.path.exists(join(topdir, 'conf', 'seafevents.conf')):
|
||||
with open(join(topdir, 'conf', 'seafevents.conf'), 'r') as fp:
|
||||
fp_lines = fp.readlines()
|
||||
if '[INDEX FILES]\n' in fp_lines:
|
||||
insert_index = fp_lines.index('[INDEX FILES]\n') + 1
|
||||
insert_lines = ['es_port = 9200\n', 'es_host = elasticsearch\n', 'external_es_server = true\n']
|
||||
for line in insert_lines:
|
||||
fp_lines.insert(insert_index, line)
|
||||
|
||||
# office
|
||||
if '[OFFICE CONVERTER]\n' in fp_lines:
|
||||
insert_index = fp_lines.index('[OFFICE CONVERTER]\n') + 1
|
||||
insert_lines = ['host = 127.0.0.1\n', 'port = 6000\n']
|
||||
for line in insert_lines:
|
||||
fp_lines.insert(insert_index, line)
|
||||
|
||||
with open(join(topdir, 'conf', 'seafevents.conf'), 'w') as fp:
|
||||
fp.writelines(fp_lines)
|
||||
|
||||
# office
|
||||
with open(join(topdir, 'conf', 'seahub_settings.py'), 'r') as fp:
|
||||
fp_lines = fp.readlines()
|
||||
if "OFFICE_CONVERTOR_ROOT = 'http://127.0.0.1:6000/'\n" not in fp_lines:
|
||||
fp_lines.append("OFFICE_CONVERTOR_ROOT = 'http://127.0.0.1:6000/'\n")
|
||||
|
||||
with open(join(topdir, 'conf', 'seahub_settings.py'), 'w') as fp:
|
||||
fp.writelines(fp_lines)
|
||||
|
||||
# Modify seafdav config
|
||||
if os.path.exists(join(topdir, 'conf', 'seafdav.conf')):
|
||||
with open(join(topdir, 'conf', 'seafdav.conf'), 'r') as fp:
|
||||
fp_lines = fp.readlines()
|
||||
if 'share_name = /\n' in fp_lines:
|
||||
replace_index = fp_lines.index('share_name = /\n')
|
||||
replace_line = 'share_name = /seafdav\n'
|
||||
fp_lines[replace_index] = replace_line
|
||||
|
||||
with open(join(topdir, 'conf', 'seafdav.conf'), 'w') as fp:
|
||||
fp.writelines(fp_lines)
|
||||
|
||||
# After the setup script creates all the files inside the
|
||||
# container, we need to move them to the shared volume
|
||||
#
|
||||
# e.g move "/opt/seafile/seafile-data" to "/shared/seafile/seafile-data"
|
||||
files_to_copy = ['conf', 'ccnet', 'seafile-data', 'seahub-data', 'pro-data']
|
||||
for fn in files_to_copy:
|
||||
src = join(topdir, fn)
|
||||
dst = join(shared_seafiledir, fn)
|
||||
if not exists(dst) and exists(src):
|
||||
shutil.move(src, shared_seafiledir)
|
||||
call('ln -sf ' + join(shared_seafiledir, fn) + ' ' + src)
|
||||
|
||||
loginfo('Updating version stamp')
|
||||
update_version_stamp(os.environ['SEAFILE_VERSION'])
|
|
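The seafevents.conf edits above splice new key lines in right after a section header. Here is that pattern in isolation, as a hedged sketch (the function name `insert_after_header` is hypothetical, not part of the repo); it uses slice assignment, which keeps the inserted lines in their given order, whereas the repeated `insert()` calls at a fixed index above write them in reverse order (harmless for INI keys):

```python
def insert_after_header(lines, header, new_lines):
    """Insert new_lines immediately after header (if present), keeping their order."""
    if header in lines:
        idx = lines.index(header) + 1
        lines[idx:idx] = new_lines  # slice assignment preserves order
    return lines

conf = ['[INDEX FILES]\n', 'enabled = true\n']
insert_after_header(conf, '[INDEX FILES]\n',
                    ['es_host = elasticsearch\n', 'es_port = 9200\n'])
```

After the call, `conf` contains the two new keys directly under the `[INDEX FILES]` header, before the pre-existing `enabled` line.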
@@ -1,37 +0,0 @@
#!/bin/bash

set -e

# Before
SEAFILE_DIR=/opt/seafile/seafile-server-latest

if [[ $SEAFILE_SERVER != *"pro"* ]]; then
    echo "Seafile CE: Stop Seafile to perform offline garbage collection."
    $SEAFILE_DIR/seafile.sh stop

    echo "Waiting for the server to shut down properly..."
    sleep 5
else
    echo "Seafile Pro: Perform online garbage collection."
fi

# Do it
(
    set +e
    $SEAFILE_DIR/seaf-gc.sh "$@" | tee -a /var/log/gc.log
    # We want to preserve the exit code of seaf-gc.sh
    exit "${PIPESTATUS[0]}"
)

gc_exit_code=$?

# After

if [[ $SEAFILE_SERVER != *"pro"* ]]; then
    echo "Giving the server some time..."
    sleep 3

    $SEAFILE_DIR/seafile.sh start
fi

exit $gc_exit_code
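The subshell-plus-`PIPESTATUS` dance above is easy to get wrong, so here is the same pattern in isolation (a standalone sketch, not part of the repo): without it, `$?` would report the exit status of `tee` (the last command in the pipeline), not of the command whose failure we care about.

```shell
# false fails, tee succeeds; plain $? after the pipeline would be tee's 0.
set +e
( false | tee /dev/null; exit "${PIPESTATUS[0]}" )
status=$?
echo "pipeline exit code: $status"
```

This prints `pipeline exit code: 1`, proving the first command's failure survives the `tee`.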
@@ -1,46 +0,0 @@
#!/bin/bash

set -e

ssldir=${1:?"error params"}
domain=${2:?"error params"}

letsencryptdir=$ssldir/letsencrypt
letsencrypt_script=$letsencryptdir/acme_tiny.py

ssl_account_key=${domain}.account.key
ssl_csr=${domain}.csr
ssl_key=${domain}.key
ssl_crt=${domain}.crt

mkdir -p /var/www/challenges && chmod -R 777 /var/www/challenges
mkdir -p $ssldir

if ! [[ -d $letsencryptdir ]]; then
    git clone git://github.com/diafygi/acme-tiny.git $letsencryptdir
else
    cd $letsencryptdir
    git pull origin master:master
fi

cd $ssldir

if [[ ! -e ${ssl_account_key} ]]; then
    openssl genrsa 4096 > ${ssl_account_key}
fi

if [[ ! -e ${ssl_key} ]]; then
    openssl genrsa 4096 > ${ssl_key}
fi

if [[ ! -e ${ssl_csr} ]]; then
    openssl req -new -sha256 -key ${ssl_key} -subj "/CN=$domain" > $ssl_csr
fi

python3 $letsencrypt_script --account-key ${ssl_account_key} --csr $ssl_csr --acme-dir /var/www/challenges/ > ./signed.crt
curl -sSL -o intermediate.pem https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem
cat signed.crt intermediate.pem > ${ssl_crt}

nginx -s reload

echo "Nginx reloaded."
@@ -1,34 +0,0 @@
daemon off;
user www-data;
worker_processes auto;

events {
    worker_connections 768;
}

http {
    include /etc/nginx/mime.types;
    server_names_hash_bucket_size 256;
    server_names_hash_max_size 1024;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_response_time';

    access_log /var/log/nginx/access.log seafileformat;
    error_log /var/log/nginx/error.log info;

    gzip on;
    gzip_types text/plain text/css application/javascript application/json text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        listen 80;
        location / {
            return 444;
        }
    }
}
@@ -1,3 +0,0 @@
#!/bin/bash
exec 2>&1
exec /usr/sbin/nginx
@@ -1,3 +0,0 @@
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# min hour dayofmonth month dayofweek command
0 0 1 * * root /scripts/ssl.sh {{ ssl_dir }} {{ domain }}
@@ -1,94 +0,0 @@
# -*- mode: nginx -*-
# Auto generated at {{ current_timestr }}
{% if https -%}
server {
    listen 80;
    server_name _ default_server;

    # allow certbot to connect to challenge location via HTTP Port 80
    # otherwise renewal request will fail
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }

    location / {
        rewrite ^ https://{{ domain }}$request_uri? permanent;
    }
}
{% endif -%}

server {
{% if https -%}
    listen 443;
    ssl on;
    ssl_certificate /shared/ssl/{{ domain }}.crt;
    ssl_certificate_key /shared/ssl/{{ domain }}.key;

    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;

    # TODO: More SSL security hardening: ssl_session_tickets & ssl_dhparam
    # ssl_session_tickets on;
    # ssl_session_ticket_key /etc/nginx/sessionticket.key;
    # ssl_session_cache shared:SSL:10m;
    # ssl_session_timeout 10m;
{% else -%}
    listen 80;
{% endif -%}

    server_name {{ domain }};

    client_max_body_size 10m;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        proxy_read_timeout 310s;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr;proto=$scheme";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

        client_max_body_size 0;
        access_log /var/log/nginx/seahub.access.log seafileformat;
        error_log /var/log/nginx/seahub.error.log;
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
        proxy_request_buffering off;
        access_log /var/log/nginx/seafhttp.access.log seafileformat;
        error_log /var/log/nginx/seafhttp.error.log;
    }

    location /seafdav {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 1200s;
        client_max_body_size 0;

        access_log /var/log/nginx/seafdav.access.log seafileformat;
        error_log /var/log/nginx/seafdav.error.log;
    }

    location /media {
        root /opt/seafile/seafile-server-latest/seahub;
    }

    # For letsencrypt
    location /.well-known/acme-challenge/ {
        alias /var/www/challenges/;
        try_files $uri =404;
    }
}
launcher
@@ -1,12 +0,0 @@
#!/bin/bash

echo "================================================================================================================================="
echo "The launcher script is now deprecated. Please see https://github.com/haiwen/seafile-docker/blob/master/upgrade_from_old_format.md"
echo "================================================================================================================================="
echo
echo "Or run this command directly:"
echo
echo "  docker rm -f seafile"
echo "  docker run -d -it --name seafile -v $PWD/shared:/shared -p 80:80 -p 443:443 seafileltd/seafile"
echo
echo "================================================================================================================================="
@@ -1,3 +0,0 @@
[pytest]
addopts = -vv -s
log_format = %(asctime)s:%(name)s:%(levelname)s:%(message)s
run-tests.sh
@@ -1,10 +0,0 @@
#!/bin/bash

set -e

cd "$( dirname "${BASH_SOURCE[0]}" )"

export PYTHONPATH=$PWD/scripts:$PYTHONPATH
# This env is required by all the scripts
export SEAFILE_VERSION=$(cat image/seafile/Dockerfile | sed -r -n 's/.*SEAFILE_VERSION=([0-9.]+).*/\1/p')
pytest tests/unit
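The `sed` invocation above extracts the version number from the Dockerfile's ENV line. A standalone demo against a sample line (the `7.1.5` value here is illustrative, and `-r` is the GNU sed flag for extended regular expressions):

```shell
# Greedy .* anchors the match on SEAFILE_VERSION=, and \1 prints only
# the captured digits-and-dots group.
line='ENV SEAFILE_VERSION=7.1.5 SEAFILE_SERVER=seafile-server'
version=$(echo "$line" | sed -r -n 's/.*SEAFILE_VERSION=([0-9.]+).*/\1/p')
echo "$version"
```

This prints `7.1.5`; the `-n` flag plus the `p` command ensure non-matching lines produce no output at all.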
@@ -1,8 +0,0 @@
# If you edit this file, remember to run ./launcher rebuild
[server]
server.hostname = seafile.example.com

admin.email = me@example.com
admin.password = asecret

server.letsencrypt = true
@@ -1,8 +0,0 @@
# If you edit this file, remember to run ./launcher rebuild
[server]
server.hostname = seafile.example.com
admin.email = me@example.com
admin.password = asecret

# uncomment the line below to use a letsencrypt SSL certificate
# server.letsencrypt = true
@@ -1,37 +0,0 @@
#!/bin/bash
set -e

ssldir=${1:?"error params"}
domain=${2:?"error params"}

letsencryptdir=$ssldir/letsencrypt
letsencrypt_script=$letsencryptdir/acme_tiny.py

ssl_account_key=${domain}.account.key
ssl_csr=${domain}.csr
ssl_key=${domain}.key
ssl_crt=${domain}.crt
renew_cert_script=/scripts/renew_cert.sh

if [[ ! -x ${renew_cert_script} ]]; then
    cat > ${renew_cert_script} << EOF
#!/bin/bash
python ${letsencrypt_script} --account-key ${ssldir}/${ssl_account_key} --csr ${ssldir}/${ssl_csr} --acme-dir /var/www/challenges/ > ${ssldir}/${ssl_crt} || exit
$(which nginx) -s reload
EOF

    chmod u+x ${renew_cert_script}

    if [[ ! -d "/var/www/challenges" ]]; then
        mkdir -p /var/www/challenges
    fi

    cat >> /etc/crontab << EOF
00 1 1 * * root /scripts/renew_cert.sh 2>> /var/log/acme_tiny.log
EOF

    echo 'Created a crontab to auto renew the cert for letsencrypt.'
else
    echo 'Found an existing cert renewal script.'
    echo 'Skipping crontab creation for letsencrypt since it was probably created before.'
fi
@@ -1,206 +0,0 @@
#!/usr/bin/env python
#coding: UTF-8

"""
Bootstrapping seafile server, letsencrypt (verification & cron job).
"""

import argparse
import os
from os.path import abspath, basename, exists, dirname, join, isdir
import shutil
import sys
import uuid
import time

from utils import (
    call, get_conf, get_install_dir, loginfo,
    get_script, render_template, get_seafile_version, eprint,
    cert_has_valid_days, get_version_stamp_file, update_version_stamp,
    wait_for_mysql, wait_for_nginx, read_version_stamp
)

seafile_version = get_seafile_version()
installdir = get_install_dir()
topdir = dirname(installdir)
shared_seafiledir = '/shared/seafile'
ssl_dir = '/shared/ssl'
generated_dir = '/bootstrap/generated'


def init_letsencrypt():
    loginfo('Preparing for letsencrypt ...')
    wait_for_nginx()

    if not exists(ssl_dir):
        os.mkdir(ssl_dir)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'ssl_dir': ssl_dir,
        'domain': domain,
    }
    render_template(
        '/templates/letsencrypt.cron.template',
        join(generated_dir, 'letsencrypt.cron'),
        context
    )

    ssl_crt = '/shared/ssl/{}.crt'.format(domain)
    if exists(ssl_crt):
        loginfo('Found existing cert file {}'.format(ssl_crt))
        if cert_has_valid_days(ssl_crt, 30):
            loginfo('Skip letsencrypt verification since we have a valid certificate')
            if exists(join(ssl_dir, 'letsencrypt')):
                # Create a crontab to auto renew the cert for letsencrypt.
                call('/scripts/auto_renew_crt.sh {0} {1}'.format(ssl_dir, domain))
            return

    loginfo('Starting letsencrypt verification')
    # Create a temporary nginx conf to start a server, which will be accessed by letsencrypt
    context = {
        'https': False,
        'domain': domain,
    }
    if not os.path.isfile('/shared/nginx/conf/seafile.nginx.conf'):
        render_template('/templates/seafile.nginx.conf.template',
                        '/etc/nginx/sites-enabled/seafile.nginx.conf', context)

    call('nginx -s reload')
    time.sleep(2)

    call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain))
    # if call('/scripts/ssl.sh {0} {1}'.format(ssl_dir, domain), check_call=False) != 0:
    #     eprint('Now waiting 1000s for postmortem')
    #     time.sleep(1000)
    #     sys.exit(1)

    # Create a crontab to auto renew the cert for letsencrypt.
    call('/scripts/auto_renew_crt.sh {0} {1}'.format(ssl_dir, domain))


def generate_local_nginx_conf():
    # Now create the final nginx configuration
    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    context = {
        'https': is_https(),
        'domain': domain,
    }

    if not os.path.isfile('/shared/nginx/conf/seafile.nginx.conf'):
        render_template(
            '/templates/seafile.nginx.conf.template',
            '/etc/nginx/sites-enabled/seafile.nginx.conf',
            context
        )
        nginx_etc_file = '/etc/nginx/sites-enabled/seafile.nginx.conf'
        nginx_shared_file = '/shared/nginx/conf/seafile.nginx.conf'
        call('mv {0} {1} && ln -sf {1} {0}'.format(nginx_etc_file, nginx_shared_file))


def is_https():
    return get_conf('SEAFILE_SERVER_LETSENCRYPT', 'false').lower() == 'true'


def parse_args():
    ap = argparse.ArgumentParser()
    ap.add_argument('--parse-ports', action='store_true')

    return ap.parse_args()


def init_seafile_server():
    version_stamp_file = get_version_stamp_file()
    if exists(join(shared_seafiledir, 'seafile-data')):
        if not exists(version_stamp_file):
            update_version_stamp(os.environ['SEAFILE_VERSION'])
        # The symbolic link may be unlinked after the docker container exits.
        latest_version_dir = '/opt/seafile/seafile-server-latest'
        current_version_dir = '/opt/seafile/' + get_conf('SEAFILE_SERVER', 'seafile-server') + '-' + read_version_stamp()
        if not exists(latest_version_dir):
            call('ln -sf ' + current_version_dir + ' ' + latest_version_dir)
        loginfo('Skip running setup-seafile-mysql.py because there is an existing seafile-data folder.')
        return

    loginfo('Now running setup-seafile-mysql.py in auto mode.')
    env = {
        'SERVER_NAME': 'seafile',
        'SERVER_IP': get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com'),
        'MYSQL_USER': 'seafile',
        'MYSQL_USER_PASSWD': str(uuid.uuid4()),
        'MYSQL_USER_HOST': '%.%.%.%',
        'MYSQL_HOST': get_conf('DB_HOST', '127.0.0.1'),
        # The default MariaDB root user has an empty password and can only connect from localhost.
        'MYSQL_ROOT_PASSWD': get_conf('DB_ROOT_PASSWD', ''),
    }

    # Change the script to allow the mysql root password to be empty
    # call('''sed -i -e 's/if not mysql_root_passwd/if not mysql_root_passwd and "MYSQL_ROOT_PASSWD" not in os.environ/g' {}'''
    #      .format(get_script('setup-seafile-mysql.py')))

    # Change the script to skip the MYSQL_USER_HOST check
    call('''sed -i -e '/def validate_mysql_user_host(self, host)/a \ \ \ \ \ \ \ \ return host' {}'''
         .format(get_script('setup-seafile-mysql.py')))

    call('''sed -i -e '/def validate_mysql_host(self, host)/a \ \ \ \ \ \ \ \ return host' {}'''
         .format(get_script('setup-seafile-mysql.py')))

    setup_script = get_script('setup-seafile-mysql.sh')
    call('{} auto -n seafile'.format(setup_script), env=env)

    domain = get_conf('SEAFILE_SERVER_HOSTNAME', 'seafile.example.com')
    proto = 'https' if is_https() else 'http'
    with open(join(topdir, 'conf', 'seahub_settings.py'), 'a+') as fp:
        fp.write('\n')
        fp.write("""CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': 'memcached:11211',
    },
    'locmem': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    },
}
COMPRESS_CACHE_BACKEND = 'locmem'""")
        fp.write('\n')
        fp.write("TIME_ZONE = '{time_zone}'".format(time_zone=os.getenv('TIME_ZONE', default='Etc/UTC')))
        fp.write('\n')
        fp.write('FILE_SERVER_ROOT = "{proto}://{domain}/seafhttp"'.format(proto=proto, domain=domain))
        fp.write('\n')

    # By default ccnet-server binds to the unix socket file
    # "/opt/seafile/ccnet/ccnet.sock", but /opt/seafile/ccnet/ is a volume
    # mounted from the docker host, and on Windows and some Linux
    # environments it's not possible to create unix sockets in
    # externally-mounted directories. So we change the unix socket file path
    # to "/opt/seafile/ccnet.sock" to avoid this problem.
    with open(join(topdir, 'conf', 'ccnet.conf'), 'a+') as fp:
        fp.write('\n')
        fp.write('[Client]\n')
        fp.write('UNIX_SOCKET = /opt/seafile/ccnet.sock\n')
        fp.write('\n')

    # Disable the Elasticsearch process inside the Seafile container and
    # connect to the Elasticsearch container instead.
    if os.path.exists(join(topdir, 'conf', 'seafevents.conf')):
        with open(join(topdir, 'conf', 'seafevents.conf'), 'r') as fp:
            fp_lines = fp.readlines()
            if '[INDEX FILES]\n' in fp_lines:
                insert_index = fp_lines.index('[INDEX FILES]\n') + 1
                insert_lines = ['es_port = 9200\n', 'es_host = elasticsearch\n', 'external_es_server = true\n']
                for line in insert_lines:
                    fp_lines.insert(insert_index, line)

        with open(join(topdir, 'conf', 'seafevents.conf'), 'w') as fp:
            fp.writelines(fp_lines)

    # After the setup script creates all the files inside the
    # container, we need to move them to the shared volume,
    # e.g. move "/opt/seafile/seafile-data" to "/shared/seafile/seafile-data"
    files_to_copy = ['conf', 'ccnet', 'seafile-data', 'seahub-data', 'pro-data']
    for fn in files_to_copy:
        src = join(topdir, fn)
        dst = join(shared_seafiledir, fn)
        if not exists(dst) and exists(src):
            shutil.move(src, shared_seafiledir)
            call('ln -sf ' + join(shared_seafiledir, fn) + ' ' + src)

    loginfo('Updating version stamp')
    update_version_stamp(os.environ['SEAFILE_VERSION'])
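`render_template` is imported from `utils` and its implementation is not part of this diff. A minimal stdlib-only stand-in, under the assumption that it handles only `{{ name }}` variable substitution (the nginx template also uses `{% if %}` blocks, which a real implementation, e.g. Jinja2, would need to support), might look like:

```python
import os
import re

def render_template(template_path, output_path, context):
    """Hypothetical stand-in: substitute {{ name }} placeholders from context.

    Unknown placeholders are left untouched; the output directory is
    created if it does not exist yet.
    """
    with open(template_path) as fp:
        text = fp.read()
    rendered = re.sub(
        r'\{\{\s*(\w+)\s*\}\}',
        lambda m: str(context.get(m.group(1), m.group(0))),
        text,
    )
    os.makedirs(os.path.dirname(output_path) or '.', exist_ok=True)
    with open(output_path, 'w') as fp:
        fp.write(rendered)
```

Rendering `server_name {{ domain }};` with `{'domain': 'seafile.example.com'}` would then produce `server_name seafile.example.com;`.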