Compare commits


No commits in common. "develop" and "v0.15.2" have entirely different histories.

129 changed files with 1338 additions and 3557 deletions


@@ -1,40 +0,0 @@
# Configuration for Label Actions - https://github.com/dessant/label-actions
community support:
comment: |
Hey @{issue-author}, thank you for raising this issue with us.
After a first review we noticed that this does not seem to be a technical issue, but rather a configuration issue or general question about how Portmaster works.
Thus, we invite the community to help with configuration and/or answering this questions.
If you are in a hurry or haven't received an answer, a good place to ask is in [our Discord community](https://discord.gg/safing).
If your problem or question has been resolved or answered, please come back and give an update here for other users encountering the same and then close this issue.
If you are a paying subscriber and want this issue to be checked out by Safing, please send us a message [on Discord](https://discord.gg/safing) or [via Email](mailto:support@safing.io) with your username and the link to this issue, so we can prioritize accordingly.
needs debug info:
comment: |
Hey @{issue-author}, thank you for raising this issue with us.
After a first review we noticed that we will require the Debug Info for further investigation. However, you haven't supplied any Debug Info in your report.
Please [collect Debug Info](https://wiki.safing.io/en/FAQ/DebugInfo) from Portmaster _while_ the reported issue is present.
in/compatibility:
comment: |
Hey @{issue-author}, thank you for reporting on a compatibility.
We keep a list of compatible software and user provided guides for improving compatibility [in the wiki - please have a look there](https://wiki.safing.io/en/Portmaster/App/Compatibility).
If you can't find your software in the list, then a good starting point is our guide on [How do I make software compatible with Portmaster](https://wiki.safing.io/en/FAQ/MakeSoftwareCompatibleWithPortmaster).
If you have managed to establish compatibility with an application, please share your findings here. This will greatly help other users encountering the same issues.
fixed:
comment: |
This issue has been fixed by the recently referenced commit or PR.
However, the fix is not released yet.
It is expected to go into the [Beta Release Channel](https://wiki.safing.io/en/FAQ/SwitchReleaseChannel) for testing within the next two weeks and will be available for everyone within the next four weeks. While this is the typical timeline we work with, things are subject to change.

.github/workflows/codeql-analysis.yml

@@ -0,0 +1,72 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ "develop", master ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ "develop" ]
schedule:
- cron: '17 17 * * 1'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'go' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
# Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
steps:
- name: Checkout repository
uses: actions/checkout@v3
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v2
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# Details on CodeQL's query packs refer to : https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v2
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
# If the Autobuild fails above, remove it and uncomment the following three lines.
# modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
# - run: |
# echo "Run, Build Application using script"
# ./location_of_script_within_repo/buildscript.sh
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2


@@ -15,24 +15,23 @@ jobs:
name: Linter
runs-on: ubuntu-latest
steps:
- name: Check out code
- name: Check out code into the Go module directory
uses: actions/checkout@v3
- name: Setup Go
uses: actions/setup-go@v4
- uses: actions/setup-go@v3
with:
go-version: '^1.21'
- name: Get dependencies
run: go mod download
go-version: '^1.19'
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: v1.52.2
version: v1.49.0
only-new-issues: true
args: -c ./.golangci.yml --timeout 15m
- name: Get dependencies
run: go mod download
- name: Run go vet
run: go vet ./...
@@ -44,9 +43,9 @@ jobs:
uses: actions/checkout@v3
- name: Setup Go
uses: actions/setup-go@v4
uses: actions/setup-go@v3
with:
go-version: '^1.21'
go-version: '^1.19'
- name: Get dependencies
run: go mod download

.github/workflows/issue-manager.yml

@@ -0,0 +1,50 @@
name: Issue Manager
on:
workflow_dispatch:
schedule:
- cron: "17 5 * * 1-5" # run at 5:17 on Monday to Friday
# We only use the issue manager for auto-closing, so we only need the cron trigger.
# issue_comment:
# types:
# - created
# - edited
# issues:
# types:
# - labeled
jobs:
issue-manager:
runs-on: ubuntu-latest
steps:
- uses: tiangolo/issue-manager@0.4.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
config: >
{
"$schema": "https://raw.githubusercontent.com/tiangolo/issue-manager/master/schema.json",
"waiting for input": {
"delay": "P30DT0H0M0S",
"message": "Auto-closing this issue after waiting for input for a month. If anyone finds the time to provide the requested information, please re-open the issue and we will continue handling it.",
"remove_label_on_comment": true,
"remove_label_on_close": false
},
"waiting for fix confirmation": {
"delay": "P30DT0H0M0S",
"message": "Auto-closing this issue after waiting for a fix confirmation for a month. If anyone still experiences this issue, please re-open the issue with updated information so we can continue working on a fix.",
"remove_label_on_comment": true,
"remove_label_on_close": false
},
"waiting for release": {
"delay": "P3650DT0H0M0S",
"message": "That was 10 years ago, I think we can close this now.",
"remove_label_on_comment": true,
"remove_label_on_close": false
},
"waiting for resources": {
"delay": "P3650DT0H0M0S",
"message": "That was 10 years ago, I think we can close this now.",
"remove_label_on_comment": true,
"remove_label_on_close": false
}
}


@@ -1,26 +0,0 @@
# This workflow responds to first time posters with a greeting message.
# Docs: https://github.com/actions/first-interaction
name: Greet New Users
# This workflow is triggered when a new issue is created.
on:
issues:
types: opened
permissions:
contents: read
issues: write
jobs:
greet:
runs-on: ubuntu-latest
steps:
- uses: actions/first-interaction@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
# Respond to first time issue raisers.
issue-message: |
Greetings and welcome to our community! As this is the first issue you opened here, we wanted to share some useful infos with you:
- 🗣️ Our community on [Discord](https://discord.gg/safing) is super helpful and active. We also have an AI-enabled support bot that knows Portmaster well and can give you immediate help.
- 📖 The [Wiki](https://wiki.safing.io/) answers all common questions and has many important details. If you can't find an answer there, let us know, so we can add anything that's missing.


@@ -1,22 +0,0 @@
# This workflow responds with a message when certain labels are added to an issue or PR.
# Docs: https://github.com/dessant/label-actions
name: Label Actions
# This workflow is triggered when a label is added to an issue.
on:
issues:
types: labeled
permissions:
contents: read
issues: write
jobs:
action:
runs-on: ubuntu-latest
steps:
- uses: dessant/label-actions@v3
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
config-path: ".github/label-actions.yml"
process-only: "issues"


@@ -1,42 +0,0 @@
# This workflow warns and then closes stale issues and PRs.
# Docs: https://github.com/actions/stale
name: Close Stale Issues
on:
schedule:
- cron: "17 5 * * 1-5" # run at 5:17 (UTC) on Monday to Friday
workflow_dispatch:
permissions:
contents: read
issues: write
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v8
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
# Increase max operations.
# When using GITHUB_TOKEN, the rate limit is 1,000 requests per hour per repository.
operations-per-run: 500
# Handle stale issues
stale-issue-label: 'stale'
# Exemptions
exempt-all-issue-assignees: true
exempt-issue-labels: 'support,dependencies,pinned,security'
# Mark as stale
days-before-issue-stale: 63 # 2 months / 9 weeks
stale-issue-message: |
This issue has been automatically marked as inactive because it has not had activity in the past two months.
If no further activity occurs, this issue will be automatically closed in one week in order to increase our focus on active topics.
# Close
days-before-issue-close: 7 # 1 week
close-issue-message: |
This issue has been automatically closed because it has not had recent activity. Thank you for your contributions.
If the issue has not been resolved, you can [find more information in our Wiki](https://wiki.safing.io/) or [continue the conversation on our Discord](https://discord.gg/safing).
# TODO: Handle stale PRs
days-before-pr-stale: 36500 # 100 years - effectively disabled.

.gitignore

@@ -4,5 +4,3 @@ misc
go.mod.*
vendor
go.work
go.work.sum


@@ -7,7 +7,6 @@ linters:
- containedctx
- contextcheck
- cyclop
- depguard
- exhaustivestruct
- exhaustruct
- forbidigo
@@ -23,7 +22,6 @@ linters:
- interfacer
- ireturn
- lll
- musttag
- nestif
- nilnil
- nlreturn


@@ -151,7 +151,7 @@ func authenticateRequest(w http.ResponseWriter, r *http.Request, targetHandler h
switch requiredPermission { //nolint:exhaustive
case NotFound:
// Not found.
tracer.Debug("api: no API endpoint registered for this path")
tracer.Trace("api: authenticated handler reported: not found")
http.Error(w, "Not found.", http.StatusNotFound)
return nil
case NotSupported:


@@ -64,7 +64,7 @@ func registerConfig() error {
err = config.Register(&config.Option{
Name: "API Keys",
Key: CfgAPIKeys,
Description: "Define API keys for privileged access to the API. Every entry is a separate API key with respective permissions. Format is `<key>?read=<perm>&write=<perm>`. Permissions are `anyone`, `user` and `admin`, and may be omitted.",
Description: "Define API keys for priviledged access to the API. Every entry is a separate API key with respective permissions. Format is `<key>?read=<perm>&write=<perm>`. Permissions are `anyone`, `user` and `admin`, and may be omitted.",
Sensitive: true,
OptType: config.OptTypeStringArray,
ExpertiseLevel: config.ExpertiseLevelDeveloper,


@@ -44,7 +44,7 @@ var (
func init() {
RegisterHandler("/api/database/v1", WrapInAuthHandler(
startDatabaseWebsocketAPI,
startDatabaseAPI,
// Default to admin read/write permissions until the database gets support
// for api permissions.
dbCompatibilityPermission,
@@ -52,8 +52,11 @@
))
}
// DatabaseAPI is a generic database API interface.
// DatabaseAPI is a database API instance.
type DatabaseAPI struct {
conn *websocket.Conn
sendQueue chan []byte
queriesLock sync.Mutex
queries map[string]*iterator.Iterator
@@ -63,35 +66,13 @@ type DatabaseAPI struct {
shutdownSignal chan struct{}
shuttingDown *abool.AtomicBool
db *database.Interface
sendBytes func(data []byte)
}
// DatabaseWebsocketAPI is a database websocket API interface.
type DatabaseWebsocketAPI struct {
DatabaseAPI
sendQueue chan []byte
conn *websocket.Conn
}
func allowAnyOrigin(r *http.Request) bool {
return true
}
// CreateDatabaseAPI creates a new database interface.
func CreateDatabaseAPI(sendFunction func(data []byte)) DatabaseAPI {
return DatabaseAPI{
queries: make(map[string]*iterator.Iterator),
subs: make(map[string]*database.Subscription),
shutdownSignal: make(chan struct{}),
shuttingDown: abool.NewBool(false),
db: database.NewInterface(nil),
sendBytes: sendFunction,
}
}
func startDatabaseWebsocketAPI(w http.ResponseWriter, r *http.Request) {
func startDatabaseAPI(w http.ResponseWriter, r *http.Request) {
upgrader := websocket.Upgrader{
CheckOrigin: allowAnyOrigin,
ReadBufferSize: 1024,
@@ -105,21 +86,14 @@ func startDatabaseWebsocketAPI(w http.ResponseWriter, r *http.Request) {
return
}
newDBAPI := &DatabaseWebsocketAPI{
DatabaseAPI: DatabaseAPI{
queries: make(map[string]*iterator.Iterator),
subs: make(map[string]*database.Subscription),
shutdownSignal: make(chan struct{}),
shuttingDown: abool.NewBool(false),
db: database.NewInterface(nil),
},
sendQueue: make(chan []byte, 100),
conn: wsConn,
}
newDBAPI.sendBytes = func(data []byte) {
newDBAPI.sendQueue <- data
newDBAPI := &DatabaseAPI{
conn: wsConn,
sendQueue: make(chan []byte, 100),
queries: make(map[string]*iterator.Iterator),
subs: make(map[string]*database.Subscription),
shutdownSignal: make(chan struct{}),
shuttingDown: abool.NewBool(false),
db: database.NewInterface(nil),
}
module.StartWorker("database api handler", newDBAPI.handler)
@@ -128,77 +102,7 @@ func startDatabaseWebsocketAPI(w http.ResponseWriter, r *http.Request) {
log.Tracer(r.Context()).Infof("api request: init websocket %s %s", r.RemoteAddr, r.RequestURI)
}
func (api *DatabaseWebsocketAPI) handler(context.Context) error {
defer func() {
_ = api.shutdown(nil)
}()
for {
_, msg, err := api.conn.ReadMessage()
if err != nil {
return api.shutdown(err)
}
api.Handle(msg)
}
}
func (api *DatabaseWebsocketAPI) writer(ctx context.Context) error {
defer func() {
_ = api.shutdown(nil)
}()
var data []byte
var err error
for {
select {
// prioritize direct writes
case data = <-api.sendQueue:
if len(data) == 0 {
return nil
}
case <-ctx.Done():
return nil
case <-api.shutdownSignal:
return nil
}
// log.Tracef("api: sending %s", string(*msg))
err = api.conn.WriteMessage(websocket.BinaryMessage, data)
if err != nil {
return api.shutdown(err)
}
}
}
func (api *DatabaseWebsocketAPI) shutdown(err error) error {
// Check if we are the first to shut down.
if !api.shuttingDown.SetToIf(false, true) {
return nil
}
// Check the given error.
if err != nil {
if websocket.IsCloseError(err,
websocket.CloseNormalClosure,
websocket.CloseGoingAway,
websocket.CloseAbnormalClosure,
) {
log.Infof("api: websocket connection to %s closed", api.conn.RemoteAddr())
} else {
log.Warningf("api: websocket connection error with %s: %s", api.conn.RemoteAddr(), err)
}
}
// Trigger shutdown.
close(api.shutdownSignal)
_ = api.conn.Close()
return nil
}
// Handle handles a message for the database API.
func (api *DatabaseAPI) Handle(msg []byte) {
func (api *DatabaseAPI) handler(context.Context) error {
// 123|get|<key>
// 123|ok|<key>|<data>
// 123|error|<message>
@@ -237,62 +141,120 @@ func (api *DatabaseAPI) Handle(msg []byte) {
// 131|success
// 131|error|<message>
parts := bytes.SplitN(msg, []byte("|"), 3)
for {
// Handle special command "cancel"
if len(parts) == 2 && string(parts[1]) == "cancel" {
// 124|cancel
// 125|cancel
// 127|cancel
go api.handleCancel(parts[0])
return
}
_, msg, err := api.conn.ReadMessage()
if err != nil {
return api.shutdown(err)
}
if len(parts) != 3 {
api.send(nil, dbMsgTypeError, "bad request: malformed message", nil)
return
}
parts := bytes.SplitN(msg, []byte("|"), 3)
switch string(parts[1]) {
case "get":
// 123|get|<key>
go api.handleGet(parts[0], string(parts[2]))
case "query":
// 124|query|<query>
go api.handleQuery(parts[0], string(parts[2]))
case "sub":
// 125|sub|<query>
go api.handleSub(parts[0], string(parts[2]))
case "qsub":
// 127|qsub|<query>
go api.handleQsub(parts[0], string(parts[2]))
case "create", "update", "insert":
// split key and payload
dataParts := bytes.SplitN(parts[2], []byte("|"), 2)
if len(dataParts) != 2 {
// Handle special command "cancel"
if len(parts) == 2 && string(parts[1]) == "cancel" {
// 124|cancel
// 125|cancel
// 127|cancel
go api.handleCancel(parts[0])
continue
}
if len(parts) != 3 {
api.send(nil, dbMsgTypeError, "bad request: malformed message", nil)
return
continue
}
switch string(parts[1]) {
case "create":
// 128|create|<key>|<data>
go api.handlePut(parts[0], string(dataParts[0]), dataParts[1], true)
case "update":
// 129|update|<key>|<data>
go api.handlePut(parts[0], string(dataParts[0]), dataParts[1], false)
case "insert":
// 130|insert|<key>|<data>
go api.handleInsert(parts[0], string(dataParts[0]), dataParts[1])
case "get":
// 123|get|<key>
go api.handleGet(parts[0], string(parts[2]))
case "query":
// 124|query|<query>
go api.handleQuery(parts[0], string(parts[2]))
case "sub":
// 125|sub|<query>
go api.handleSub(parts[0], string(parts[2]))
case "qsub":
// 127|qsub|<query>
go api.handleQsub(parts[0], string(parts[2]))
case "create", "update", "insert":
// split key and payload
dataParts := bytes.SplitN(parts[2], []byte("|"), 2)
if len(dataParts) != 2 {
api.send(nil, dbMsgTypeError, "bad request: malformed message", nil)
continue
}
switch string(parts[1]) {
case "create":
// 128|create|<key>|<data>
go api.handlePut(parts[0], string(dataParts[0]), dataParts[1], true)
case "update":
// 129|update|<key>|<data>
go api.handlePut(parts[0], string(dataParts[0]), dataParts[1], false)
case "insert":
// 130|insert|<key>|<data>
go api.handleInsert(parts[0], string(dataParts[0]), dataParts[1])
}
case "delete":
// 131|delete|<key>
go api.handleDelete(parts[0], string(parts[2]))
default:
api.send(parts[0], dbMsgTypeError, "bad request: unknown method", nil)
}
case "delete":
// 131|delete|<key>
go api.handleDelete(parts[0], string(parts[2]))
default:
api.send(parts[0], dbMsgTypeError, "bad request: unknown method", nil)
}
}
func (api *DatabaseAPI) writer(ctx context.Context) error {
var data []byte
var err error
for {
select {
// prioritize direct writes
case data = <-api.sendQueue:
if len(data) == 0 {
return api.shutdown(nil)
}
case <-ctx.Done():
return api.shutdown(nil)
case <-api.shutdownSignal:
return api.shutdown(nil)
}
// log.Tracef("api: sending %s", string(*msg))
err = api.conn.WriteMessage(websocket.BinaryMessage, data)
if err != nil {
return api.shutdown(err)
}
}
}
func (api *DatabaseAPI) shutdown(err error) error {
// Check if we are the first to shut down.
if !api.shuttingDown.SetToIf(false, true) {
return nil
}
// Check the given error.
if err != nil {
if websocket.IsCloseError(err,
websocket.CloseNormalClosure,
websocket.CloseGoingAway,
websocket.CloseAbnormalClosure,
) {
log.Infof("api: websocket connection to %s closed", api.conn.RemoteAddr())
} else {
log.Warningf("api: websocket connection error with %s: %s", api.conn.RemoteAddr(), err)
}
}
// Trigger shutdown.
close(api.shutdownSignal)
_ = api.conn.Close()
return nil
}
func (api *DatabaseAPI) send(opID []byte, msgType string, msgOrKey string, data []byte) {
c := container.New(opID)
c.Append(dbAPISeperatorBytes)
@@ -308,7 +270,7 @@ func (api *DatabaseAPI) send(opID []byte, msgType string, msgOrKey string, data
c.Append(data)
}
api.sendBytes(c.CompileData())
api.sendQueue <- c.CompileData()
}
func (api *DatabaseAPI) handleGet(opID []byte, key string) {
@@ -320,7 +282,7 @@ func (api *DatabaseAPI) handleGet(opID []byte, key string) {
r, err := api.db.Get(key)
if err == nil {
data, err = MarshalRecord(r, true)
data, err = marshalRecord(r, true)
}
if err != nil {
api.send(opID, dbMsgTypeError, err.Error(), nil)
@@ -373,12 +335,12 @@ func (api *DatabaseAPI) processQuery(opID []byte, q *query.Query) (ok bool) {
case <-api.shutdownSignal:
// cancel query and return
it.Cancel()
return false
return
case r := <-it.Next:
// process query feed
if r != nil {
// process record
data, err := MarshalRecord(r, true)
data, err := marshalRecord(r, true)
if err != nil {
api.send(opID, dbMsgTypeWarning, err.Error(), nil)
continue
@@ -397,7 +359,7 @@ }
}
}
// func (api *DatabaseWebsocketAPI) runQuery()
// func (api *DatabaseAPI) runQuery()
func (api *DatabaseAPI) handleSub(opID []byte, queryText string) {
// 125|sub|<query>
@@ -455,7 +417,7 @@ func (api *DatabaseAPI) processSub(opID []byte, sub *database.Subscription) {
// process sub feed
if r != nil {
// process record
data, err := MarshalRecord(r, true)
data, err := marshalRecord(r, true)
if err != nil {
api.send(opID, dbMsgTypeWarning, err.Error(), nil)
continue
@@ -659,9 +621,9 @@ func (api *DatabaseAPI) handleDelete(opID []byte, key string) {
api.send(opID, dbMsgTypeSuccess, emptyString, nil)
}
// MarshalRecord locks and marshals the given record, additionally adding
// marsharlRecords locks and marshals the given record, additionally adding
// metadata and returning it as json.
func MarshalRecord(r record.Record, withDSDIdentifier bool) ([]byte, error) {
func marshalRecord(r record.Record, withDSDIdentifier bool) ([]byte, error) {
r.Lock()
defer r.Unlock()


@@ -2,9 +2,11 @@ package api
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"sort"
"strconv"
@@ -14,7 +16,6 @@ import (
"github.com/gorilla/mux"
"github.com/safing/portbase/database/record"
"github.com/safing/portbase/formats/dsd"
"github.com/safing/portbase/log"
"github.com/safing/portbase/modules"
)
@@ -23,13 +24,6 @@ import (
// Path and at least one permission are required.
// As is exactly one function.
type Endpoint struct { //nolint:maligned
// Name is the human reabable name of the endpoint.
Name string
// Description is the human readable description and documentation of the endpoint.
Description string
// Parameters is the parameter documentation.
Parameters []Parameter `json:",omitempty"`
// Path describes the URL path of the endpoint.
Path string
@@ -81,6 +75,12 @@ type Endpoint struct { //nolint:maligned
// HandlerFunc is the raw http handler.
HandlerFunc http.HandlerFunc `json:"-"`
// Documentation Metadata.
Name string
Description string
Parameters []Parameter `json:",omitempty"`
}
// Parameter describes a parameterized variation of an endpoint.
@@ -209,7 +209,7 @@ func getAPIContext(r *http.Request) (apiEndpoint *Endpoint, apiRequest *Request)
// does not pass the sanity checks.
func RegisterEndpoint(e Endpoint) error {
if err := e.check(); err != nil {
return fmt.Errorf("%w: %w", ErrInvalidEndpoint, err)
return fmt.Errorf("%w: %s", ErrInvalidEndpoint, err)
}
endpointsLock.Lock()
@@ -225,18 +225,6 @@ func RegisterEndpoint(e Endpoint) error {
return nil
}
// GetEndpointByPath returns the endpoint registered with the given path.
func GetEndpointByPath(path string) (*Endpoint, error) {
endpointsLock.Lock()
defer endpointsLock.Unlock()
endpoint, ok := endpoints[path]
if !ok {
return nil, fmt.Errorf("no registered endpoint on path: %q", path)
}
return endpoint, nil
}
func (e *Endpoint) check() error {
// Check path.
if strings.TrimSpace(e.Path) == "" {
@@ -381,7 +369,7 @@ func (e *Endpoint) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Wait for the owning module to be ready.
if !moduleIsReady(e.BelongsTo) {
http.Error(w, "The API endpoint is not ready yet or the its module is not enabled. Reload (F5) to try again.", http.StatusServiceUnavailable)
http.Error(w, "The API endpoint is not ready yet or the its module is not enabled. Please try again later.", http.StatusServiceUnavailable)
return
}
@@ -401,18 +389,18 @@ func (e *Endpoint) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if eMethod != e.ReadMethod {
log.Tracer(r.Context()).Warningf(
"api: method %q does not match required read method %q%s",
" - this will be an error and abort the request in the future",
r.Method,
e.ReadMethod,
" - this will be an error and abort the request in the future",
)
}
} else {
if eMethod != e.WriteMethod {
log.Tracer(r.Context()).Warningf(
"api: method %q does not match required write method %q%s",
r.Method,
e.WriteMethod,
" - this will be an error and abort the request in the future",
r.Method,
e.ReadMethod,
)
}
}
@@ -436,9 +424,6 @@ func (e *Endpoint) ServeHTTP(w http.ResponseWriter, r *http.Request) {
return
}
// Add response headers to request struct so that the endpoint can work with them.
apiRequest.ResponseHeader = w.Header()
// Execute action function and get response data
var responseData []byte
var err error
@@ -461,18 +446,14 @@ func (e *Endpoint) ServeHTTP(w http.ResponseWriter, r *http.Request) {
var v interface{}
v, err = e.StructFunc(apiRequest)
if err == nil && v != nil {
var mimeType string
responseData, mimeType, _, err = dsd.MimeDump(v, r.Header.Get("Accept"))
if err == nil {
w.Header().Set("Content-Type", mimeType)
}
responseData, err = json.Marshal(v)
}
case e.RecordFunc != nil:
var rec record.Record
rec, err = e.RecordFunc(apiRequest)
if err == nil && r != nil {
responseData, err = MarshalRecord(rec, false)
responseData, err = marshalRecord(rec, false)
}
case e.HandlerFunc != nil:
@@ -486,6 +467,7 @@ func (e *Endpoint) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Check for handler error.
if err != nil {
// if statusProvider, ok := err.(HTTPStatusProvider); ok {
var statusProvider HTTPStatusProvider
if errors.As(err, &statusProvider) {
http.Error(w, err.Error(), statusProvider.HTTPStatus())
@@ -501,12 +483,8 @@ func (e *Endpoint) ServeHTTP(w http.ResponseWriter, r *http.Request) {
return
}
// Set content type if not yet set.
if w.Header().Get("Content-Type") == "" {
w.Header().Set("Content-Type", e.MimeType+"; charset=utf-8")
}
// Write response.
w.Header().Set("Content-Type", e.MimeType+"; charset=utf-8")
w.Header().Set("Content-Length", strconv.Itoa(len(responseData)))
w.WriteHeader(http.StatusOK)
_, err = w.Write(responseData)
@@ -523,7 +501,7 @@ func readBody(w http.ResponseWriter, r *http.Request) (inputData []byte, ok bool
}
// Read and close body.
inputData, err := io.ReadAll(r.Body)
inputData, err := ioutil.ReadAll(r.Body)
if err != nil {
http.Error(w, "failed to read body"+err.Error(), http.StatusInternalServerError)
return nil, false


@@ -3,7 +3,6 @@ package api
import (
"bytes"
"context"
"errors"
"fmt"
"net/http"
"os"
@@ -11,8 +10,6 @@ import (
"strings"
"time"
"github.com/safing/portbase/info"
"github.com/safing/portbase/modules"
"github.com/safing/portbase/utils/debug"
)
@@ -27,16 +24,6 @@ func registerDebugEndpoints() error {
return err
}
if err := RegisterEndpoint(Endpoint{
Path: "ready",
Read: PermitAnyone,
ActionFunc: ready,
Name: "Ready",
Description: "Check if Portmaster has completed starting and is ready.",
}); err != nil {
return err
}
if err := RegisterEndpoint(Endpoint{
Path: "debug/stack",
Read: PermitAnyone,
@@ -59,7 +46,6 @@ func registerDebugEndpoints() error {
if err := RegisterEndpoint(Endpoint{
Path: "debug/cpu",
MimeType: "application/octet-stream",
Read: PermitAnyone,
DataFunc: handleCPUProfile,
Name: "Get CPU Profile",
@@ -81,7 +67,6 @@ You can easily view this data in your browser with this command (with Go install
if err := RegisterEndpoint(Endpoint{
Path: "debug/heap",
MimeType: "application/octet-stream",
Read: PermitAnyone,
DataFunc: handleHeapProfile,
Name: "Get Heap Profile",
@@ -96,7 +81,6 @@ You can easily view this data in your browser with this command (with Go install
if err := RegisterEndpoint(Endpoint{
Path: "debug/allocs",
MimeType: "application/octet-stream",
Read: PermitAnyone,
DataFunc: handleAllocsProfile,
Name: "Get Allocs Profile",
@@ -130,22 +114,9 @@ You can easily view this data in your browser with this command (with Go install
// ping responds with pong.
func ping(ar *Request) (msg string, err error) {
// TODO: Remove upgrade to "ready" when all UI components have transitioned.
if modules.IsStarting() || modules.IsShuttingDown() {
return "", ErrorWithStatus(errors.New("portmaster is not ready, reload (F5) to try again"), http.StatusTooEarly)
}
return "Pong.", nil
}
// ready checks if Portmaster has completed starting.
func ready(ar *Request) (msg string, err error) {
if modules.IsStarting() || modules.IsShuttingDown() {
return "", ErrorWithStatus(errors.New("portmaster is not ready, reload (F5) to try again"), http.StatusTooEarly)
}
return "Portmaster is ready.", nil
}
// getStack returns the current goroutine stack.
func getStack(_ *Request) (data []byte, err error) {
buf := &bytes.Buffer{}
@@ -183,12 +154,6 @@ func handleCPUProfile(ar *Request) (data []byte, err error) {
duration = parsedDuration
}
// Indicate download and filename.
ar.ResponseHeader.Set(
"Content-Disposition",
fmt.Sprintf(`attachment; filename="portmaster-cpu-profile_v%s.pprof"`, info.Version()),
)
// Start CPU profiling.
buf := new(bytes.Buffer)
if err := pprof.StartCPUProfile(buf); err != nil {
@@ -210,12 +175,6 @@
// handleHeapProfile returns the Heap profile.
func handleHeapProfile(ar *Request) (data []byte, err error) {
// Indicate download and filename.
ar.ResponseHeader.Set(
"Content-Disposition",
fmt.Sprintf(`attachment; filename="portmaster-memory-heap-profile_v%s.pprof"`, info.Version()),
)
buf := new(bytes.Buffer)
if err := pprof.Lookup("heap").WriteTo(buf, 0); err != nil {
return nil, fmt.Errorf("failed to write heap profile: %w", err)
@@ -225,12 +184,6 @@
// handleAllocsProfile returns the Allocs profile.
func handleAllocsProfile(ar *Request) (data []byte, err error) {
// Indicate download and filename.
ar.ResponseHeader.Set(
"Content-Disposition",
fmt.Sprintf(`attachment; filename="portmaster-memory-allocs-profile_v%s.pprof"`, info.Version()),
)
buf := new(bytes.Buffer)
if err := pprof.Lookup("allocs").WriteTo(buf, 0); err != nil {
return nil, fmt.Errorf("failed to write allocs profile: %w", err)


@@ -3,21 +3,9 @@ package api
import (
"errors"
"fmt"
"github.com/safing/portbase/modules"
)
func registerModulesEndpoints() error {
if err := RegisterEndpoint(Endpoint{
Path: "modules/status",
Read: PermitUser,
StructFunc: getStatusfunc,
Name: "Get Module Status",
Description: "Returns status information of all modules.",
}); err != nil {
return err
}
if err := RegisterEndpoint(Endpoint{
Path: "modules/{moduleName:.+}/trigger/{eventName:.+}",
Write: PermitSelf,
@@ -31,14 +19,6 @@ func registerModulesEndpoints() error {
return nil
}
func getStatusfunc(ar *Request) (i interface{}, err error) {
status := modules.GetStatus()
if status == nil {
return nil, errors.New("modules not yet initialized")
}
return status, nil
}
func triggerEvent(ar *Request) (msg string, err error) {
// Get parameters.
moduleName := ar.URLVars["moduleName"]


@@ -1,6 +1,7 @@
package api
import (
"context"
"encoding/json"
"errors"
"flag"
@@ -57,7 +58,7 @@ func prep() error {
}
func start() error {
startServer()
go Serve()
_ = updateAPIKeys(module.Ctx, nil)
err := module.RegisterEventHook("config", "config change", "update API keys", updateAPIKeys)
@@ -74,7 +75,10 @@ func start() error {
}
func stop() error {
return stopServer()
if server != nil {
return server.Shutdown(context.Background())
}
return nil
}
func exportEndpointsCmd() error {


@ -2,6 +2,7 @@ package api
import (
"fmt"
"io/ioutil"
"os"
"testing"
@ -20,7 +21,7 @@ func TestMain(m *testing.M) {
module.Enable()
// tmp dir for data root (db & config)
tmpDir, err := os.MkdirTemp("", "portbase-testing-")
tmpDir, err := ioutil.TempDir("", "portbase-testing-")
if err != nil {
fmt.Fprintf(os.Stderr, "failed to create tmp dir: %s\n", err)
os.Exit(1)


@ -26,9 +26,6 @@ type Request struct {
// AuthToken is the request-side authentication token assigned.
AuthToken *AuthToken
// ResponseHeader holds the response header.
ResponseHeader http.Header
// HandlerCache can be used by handlers to cache data between handlers within a request.
HandlerCache interface{}
}
@ -36,12 +33,11 @@ type Request struct {
// apiRequestContextKey is a key used for the context key/value storage.
type apiRequestContextKey struct{}
// RequestContextKey is the key used to add the API request to the context.
var RequestContextKey = apiRequestContextKey{}
var requestContextKey = apiRequestContextKey{}
// GetAPIRequest returns the API Request of the given http request.
func GetAPIRequest(r *http.Request) *Request {
ar, ok := r.Context().Value(RequestContextKey).(*Request)
ar, ok := r.Context().Value(requestContextKey).(*Request)
if ok {
return ar
}


@ -18,9 +18,6 @@ import (
"github.com/safing/portbase/utils"
)
// EnableServer defines if the HTTP server should be started.
var EnableServer = true
var (
// mainMux is the main mux router.
mainMux = mux.NewRouter()
@ -37,52 +34,29 @@ var (
}
)
// RegisterHandler registers a handler with the API endpoint.
// RegisterHandler registers a handler with the API endoint.
func RegisterHandler(path string, handler http.Handler) *mux.Route {
handlerLock.Lock()
defer handlerLock.Unlock()
return mainMux.Handle(path, handler)
}
// RegisterHandleFunc registers a handle function with the API endpoint.
// RegisterHandleFunc registers a handle function with the API endoint.
func RegisterHandleFunc(path string, handleFunc func(http.ResponseWriter, *http.Request)) *mux.Route {
handlerLock.Lock()
defer handlerLock.Unlock()
return mainMux.HandleFunc(path, handleFunc)
}
func startServer() {
// Check if server is enabled.
if !EnableServer {
return
}
// Configure server.
// Serve starts serving the API endpoint.
func Serve() {
// configure server
server.Addr = listenAddressConfig()
server.Handler = &mainHandler{
// TODO: mainMux should not be modified anymore.
mux: mainMux,
}
// Start server manager.
module.StartServiceWorker("http server manager", 0, serverManager)
}
func stopServer() error {
// Check if server is enabled.
if !EnableServer {
return nil
}
if server.Addr != "" {
return server.Shutdown(context.Background())
}
return nil
}
// Serve starts serving the API endpoint.
func serverManager(_ context.Context) error {
// start serving
log.Infof("api: starting to listen on %s", server.Addr)
backoffDuration := 10 * time.Second
@ -93,7 +67,7 @@ func serverManager(_ context.Context) error {
})
// return on shutdown error
if errors.Is(err, http.ErrServerClosed) {
return nil
return
}
// log error and restart
log.Errorf("api: http endpoint failed: %s - restarting in %s", err, backoffDuration)
@ -118,7 +92,7 @@ func (mh *mainHandler) handle(w http.ResponseWriter, r *http.Request) error {
apiRequest := &Request{
Request: r,
}
ctx = context.WithValue(ctx, RequestContextKey, apiRequest)
ctx = context.WithValue(ctx, requestContextKey, apiRequest)
// Add context back to request.
r = r.WithContext(ctx)
lrw := NewLoggingResponseWriter(w, r)
@ -134,7 +108,7 @@ func (mh *mainHandler) handle(w http.ResponseWriter, r *http.Request) error {
}()
// Add security headers.
w.Header().Set("Referrer-Policy", "same-origin")
w.Header().Set("Referrer-Policy", "no-referrer")
w.Header().Set("X-Content-Type-Options", "nosniff")
w.Header().Set("X-Frame-Options", "deny")
w.Header().Set("X-XSS-Protection", "1; mode=block")
@ -147,7 +121,7 @@ func (mh *mainHandler) handle(w http.ResponseWriter, r *http.Request) error {
"default-src 'self'; "+
"connect-src https://*.safing.io 'self'; "+
"style-src 'self' 'unsafe-inline'; "+
"img-src 'self' data: blob:",
"img-src 'self' data:",
)
}
@ -235,7 +209,6 @@ func (mh *mainHandler) handle(w http.ResponseWriter, r *http.Request) error {
http.Error(lrw, "Method not allowed.", http.StatusMethodNotAllowed)
return nil
default:
tracer.Debug("api: no handler registered for this path")
http.Error(lrw, "Not found.", http.StatusNotFound)
return nil
}
@ -271,7 +244,7 @@ func (mh *mainHandler) handle(w http.ResponseWriter, r *http.Request) error {
// Wait for the owning module to be ready.
if moduleHandler, ok := handler.(ModuleHandler); ok {
if !moduleIsReady(moduleHandler.BelongsTo()) {
http.Error(lrw, "The API endpoint is not ready yet. Reload (F5) to try again.", http.StatusServiceUnavailable)
http.Error(lrw, "The API endpoint is not ready yet. Please try again later.", http.StatusServiceUnavailable)
return nil
}
}
@ -285,10 +258,6 @@ func (mh *mainHandler) handle(w http.ResponseWriter, r *http.Request) error {
// Format panics in handler.
defer func() {
if panicValue := recover(); panicValue != nil {
// Report failure via module system.
me := module.NewPanicError("api request", "custom", panicValue)
me.Report()
// Respond with a server error.
if devMode() {
http.Error(
lrw,


@ -1,167 +0,0 @@
package apprise
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"sync"
"github.com/safing/portbase/utils"
)
// Notifier sends messsages to an Apprise API.
type Notifier struct {
// URL defines the Apprise API endpoint.
URL string
// DefaultType defines the default message type.
DefaultType MsgType
// DefaultTag defines the default message tag.
DefaultTag string
// DefaultFormat defines the default message format.
DefaultFormat MsgFormat
// AllowUntagged defines if untagged messages are allowed,
// which are sent to all configured apprise endpoints.
AllowUntagged bool
client *http.Client
clientLock sync.Mutex
}
// Message represents the message to be sent to the Apprise API.
type Message struct {
// Title is an optional title to go along with the body.
Title string `json:"title,omitempty"`
// Body is the main message content. This is the only required field.
Body string `json:"body"`
// Type defines the message type you want to send as.
// The valid options are info, success, warning, and failure.
// If no type is specified then info is the default value used.
Type MsgType `json:"type,omitempty"`
// Tag is used to notify only those tagged accordingly.
// Use a comma (,) to OR your tags and a space ( ) to AND them.
Tag string `json:"tag,omitempty"`
// Format optionally identifies the text format of the data you're feeding Apprise.
// The valid options are text, markdown, html.
// The default value if nothing is specified is text.
Format MsgFormat `json:"format,omitempty"`
}
// MsgType defines the message type.
type MsgType string
// Message Types.
const (
TypeInfo MsgType = "info"
TypeSuccess MsgType = "success"
TypeWarning MsgType = "warning"
TypeFailure MsgType = "failure"
)
// MsgFormat defines the message format.
type MsgFormat string
// Message Formats.
const (
FormatText MsgFormat = "text"
FormatMarkdown MsgFormat = "markdown"
FormatHTML MsgFormat = "html"
)
type errorResponse struct {
Error string `json:"error"`
}
// Send sends a message to the Apprise API.
func (n *Notifier) Send(ctx context.Context, m *Message) error {
// Check if the message has a body.
if m.Body == "" {
return errors.New("the message must have a body")
}
// Apply notifier defaults.
n.applyDefaults(m)
// Check if the message is tagged.
if m.Tag == "" && !n.AllowUntagged {
return errors.New("the message must have a tag")
}
// Marshal the message to JSON.
payload, err := json.Marshal(m)
if err != nil {
return fmt.Errorf("failed to marshal message: %w", err)
}
// Create request.
request, err := http.NewRequestWithContext(ctx, http.MethodPost, n.URL, bytes.NewReader(payload))
if err != nil {
return fmt.Errorf("failed to create request: %w", err)
}
request.Header.Set("Content-Type", "application/json")
// Send message to API.
resp, err := n.getClient().Do(request)
if err != nil {
return fmt.Errorf("failed to send message: %w", err)
}
defer resp.Body.Close() //nolint:errcheck,gosec
switch resp.StatusCode {
case http.StatusOK, http.StatusCreated, http.StatusNoContent, http.StatusAccepted:
return nil
default:
// Try to tease body contents.
if body, err := io.ReadAll(resp.Body); err == nil && len(body) > 0 {
// Try to parse json response.
errorResponse := &errorResponse{}
if err := json.Unmarshal(body, errorResponse); err == nil && errorResponse.Error != "" {
return fmt.Errorf("failed to send message: apprise returned %q with an error message: %s", resp.Status, errorResponse.Error)
}
return fmt.Errorf("failed to send message: %s (body teaser: %s)", resp.Status, utils.SafeFirst16Bytes(body))
}
return fmt.Errorf("failed to send message: %s", resp.Status)
}
}
func (n *Notifier) applyDefaults(m *Message) {
if m.Type == "" {
m.Type = n.DefaultType
}
if m.Tag == "" {
m.Tag = n.DefaultTag
}
if m.Format == "" {
m.Format = n.DefaultFormat
}
}
// SetClient sets a custom http client for accessing the Apprise API.
func (n *Notifier) SetClient(client *http.Client) {
n.clientLock.Lock()
defer n.clientLock.Unlock()
n.client = client
}
func (n *Notifier) getClient() *http.Client {
n.clientLock.Lock()
defer n.clientLock.Unlock()
// Create client if needed.
if n.client == nil {
n.client = &http.Client{}
}
return n.client
}
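The Send method above marshals the Message struct to JSON before POSTing it to the Apprise endpoint. A minimal sketch of the wire payload it produces — the struct mirrors the diff, but the field types are simplified to plain strings here:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Message mirrors the struct from the diff: only Body is required,
// and empty fields are omitted from the JSON payload.
type Message struct {
	Title  string `json:"title,omitempty"`
	Body   string `json:"body"`
	Type   string `json:"type,omitempty"`
	Tag    string `json:"tag,omitempty"`
	Format string `json:"format,omitempty"`
}

// payload builds the JSON body that Send would POST.
func payload(m Message) ([]byte, error) {
	return json.Marshal(m)
}

func main() {
	data, err := payload(Message{Body: "backup finished", Type: "success", Tag: "ops"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```

Because of the `omitempty` tags, a message with only a body serializes to just `{"body":"…"}`, which is why Send only hard-requires Body.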


@ -80,7 +80,7 @@ func registerBasicOptions() error {
// Register to hook to update the log level.
if err := module.RegisterEventHook(
"config",
ChangeEvent,
configChangeEvent,
"update log level",
setLogLevel,
); err != nil {


@ -14,7 +14,7 @@ func parseAndReplaceConfig(jsonData string) error {
return err
}
validationErrors, _ := ReplaceConfig(m)
validationErrors := replaceConfig(m)
if len(validationErrors) > 0 {
return fmt.Errorf("%d errors, first: %w", len(validationErrors), validationErrors[0])
}
@ -27,7 +27,7 @@ func parseAndReplaceDefaultConfig(jsonData string) error {
return err
}
validationErrors, _ := ReplaceDefaultConfig(m)
validationErrors := replaceDefaultConfig(m)
if len(validationErrors) > 0 {
return fmt.Errorf("%d errors, first: %w", len(validationErrors), validationErrors[0])
}


@ -5,7 +5,6 @@ import (
"errors"
"flag"
"fmt"
"io/fs"
"os"
"path/filepath"
"sort"
@ -16,8 +15,9 @@ import (
"github.com/safing/portbase/utils/debug"
)
// ChangeEvent is the name of the config change event.
const ChangeEvent = "config change"
const (
configChangeEvent = "config change"
)
var (
module *modules.Module
@ -35,7 +35,7 @@ func SetDataRoot(root *utils.DirStructure) {
func init() {
module = modules.Register("config", prep, start, nil, "database")
module.RegisterEvent(ChangeEvent, true)
module.RegisterEvent(configChangeEvent, true)
flag.BoolVar(&exportConfig, "export-config-options", false, "export configuration registry and exit")
}
@ -63,13 +63,13 @@ func start() error {
}
err = registerAsDatabase()
if err != nil && !errors.Is(err, fs.ErrNotExist) {
if err != nil && !os.IsNotExist(err) {
return err
}
err = loadConfig(false)
if err != nil && !errors.Is(err, fs.ErrNotExist) {
return fmt.Errorf("failed to load config file: %w", err)
if err != nil && !os.IsNotExist(err) {
return err
}
return nil
}


@ -3,7 +3,6 @@ package config
import (
"encoding/json"
"fmt"
"reflect"
"regexp"
"sync"
@ -66,9 +65,6 @@ type PossibleValue struct {
// Format: <vendor/package>:<scope>:<identifier> //.
type Annotations map[string]interface{}
// MigrationFunc is a function that migrates a config option value.
type MigrationFunc func(option *Option, value any) any
// Well known annotations defined by this package.
const (
// DisplayHintAnnotation provides a hint for the user
@ -112,19 +108,6 @@ const (
// requirement. The type of RequiresAnnotation is []ValueRequirement
// or ValueRequirement.
RequiresAnnotation = "safing/portbase:config:requires"
// RequiresFeatureIDAnnotation can be used to mark a setting as only available
// when the user has a certain feature ID in the subscription plan.
// The type is []string or string.
RequiresFeatureIDAnnotation = "safing/portmaster:ui:config:requires-feature"
// SettablePerAppAnnotation can be used to mark a setting as settable per-app and
// is a boolean.
SettablePerAppAnnotation = "safing/portmaster:settable-per-app"
// RequiresUIReloadAnnotation can be used to inform the UI that changing the value
// of the annotated setting requires a full reload of the user interface.
// The value of this annotation does not matter as the sole presence of
// the annotation key is enough. Though, users are advised to set the value
// of this annotation to true.
RequiresUIReloadAnnotation = "safing/portmaster:ui:requires-reload"
)
// QuickSettingsAction defines the action of a quick setting.
@ -262,9 +245,6 @@ type Option struct {
// Annotations is considered mutable and setting/reading annotation keys
// must be performed while the option is locked.
Annotations Annotations
// Migrations holds migration functions that are given the raw option value
// before any validation is run. The returned value is then used.
Migrations []MigrationFunc `json:"-"`
activeValue *valueCache // runtime value (loaded from config file or set by user)
activeDefaultValue *valueCache // runtime default value (may be set internally)
@ -317,22 +297,6 @@ func (option *Option) GetAnnotation(key string) (interface{}, bool) {
return val, ok
}
// AnnotationEquals returns whether the annotation of the given key matches the
// given value.
func (option *Option) AnnotationEquals(key string, value any) bool {
option.Lock()
defer option.Unlock()
if option.Annotations == nil {
return false
}
setValue, ok := option.Annotations[key]
if !ok {
return false
}
return reflect.DeepEqual(value, setValue)
}
// copyOrNil returns a copy of the option, or nil if copying failed.
func (option *Option) copyOrNil() *Option {
copied, err := copystructure.Copy(option)
@ -342,38 +306,6 @@ func (option *Option) copyOrNil() *Option {
return copied.(*Option) //nolint:forcetypeassert
}
// IsSetByUser returns whether the option has been set by the user.
func (option *Option) IsSetByUser() bool {
option.Lock()
defer option.Unlock()
return option.activeValue != nil
}
// UserValue returns the value set by the user or nil if the value has not
// been changed from the default.
func (option *Option) UserValue() any {
option.Lock()
defer option.Unlock()
if option.activeValue == nil {
return nil
}
return option.activeValue.getData(option)
}
// ValidateValue checks if the given value is valid for the option.
func (option *Option) ValidateValue(value any) error {
option.Lock()
defer option.Unlock()
value = migrateValue(option, value)
if _, err := validateValue(option, value); err != nil {
return err
}
return nil
}
// Export expors an option to a Record.
func (option *Option) Export() (record.Record, error) {
option.Lock()


@ -3,7 +3,7 @@ package config
import (
"encoding/json"
"fmt"
"os"
"io/ioutil"
"path"
"strings"
"sync"
@ -34,7 +34,7 @@ func loadConfig(requireValidConfig bool) error {
}
// read config file
data, err := os.ReadFile(configFilePath)
data, err := ioutil.ReadFile(configFilePath)
if err != nil {
return err
}
@ -45,7 +45,7 @@ func loadConfig(requireValidConfig bool) error {
return err
}
validationErrors, _ := ReplaceConfig(newValues)
validationErrors := replaceConfig(newValues)
if requireValidConfig && len(validationErrors) > 0 {
return fmt.Errorf("encountered %d validation errors during config loading", len(validationErrors))
}
@ -58,10 +58,10 @@ func loadConfig(requireValidConfig bool) error {
return nil
}
// SaveConfig saves the current configuration to file.
// saveConfig saves the current configuration to file.
// It will acquire a read-lock on the global options registry
// lock and must lock each option!
func SaveConfig() error {
func saveConfig() error {
optionsLock.RLock()
defer optionsLock.RUnlock()
@ -93,7 +93,7 @@ func SaveConfig() error {
}
// write file
return os.WriteFile(configFilePath, data, 0o0600)
return ioutil.WriteFile(configFilePath, data, 0o0600)
}
// JSONToMap parses and flattens a hierarchical json object.


@ -35,8 +35,6 @@ optionsLoop:
if !ok {
continue
}
// migrate value
configValue = migrateValue(option, configValue)
// validate value
valueCache, err := validateValue(option, configValue)
if err != nil {


@ -34,126 +34,80 @@ func signalChanges() {
validityFlag = abool.NewBool(true)
validityFlagLock.Unlock()
module.TriggerEvent(ChangeEvent, nil)
module.TriggerEvent(configChangeEvent, nil)
}
// ValidateConfig validates the given configuration and returns all validation
// errors as well as whether the given configuration contains unknown keys.
func ValidateConfig(newValues map[string]interface{}) (validationErrors []*ValidationError, requiresRestart bool, containsUnknown bool) {
// replaceConfig sets the (prioritized) user defined config.
func replaceConfig(newValues map[string]interface{}) []*ValidationError {
var validationErrors []*ValidationError
// RLock the options because we are not adding or removing
// options from the registration but rather only checking the
// options value which is guarded by the option's lock itself.
// options from the registration but rather only update the
// options value which is guarded by the option's lock itself
optionsLock.RLock()
defer optionsLock.RUnlock()
var checked int
for key, option := range options {
newValue, ok := newValues[key]
option.Lock()
option.activeValue = nil
if ok {
checked++
func() {
option.Lock()
defer option.Unlock()
newValue = migrateValue(option, newValue)
_, err := validateValue(option, newValue)
if err != nil {
validationErrors = append(validationErrors, err)
}
if option.RequiresRestart {
requiresRestart = true
}
}()
valueCache, err := validateValue(option, newValue)
if err == nil {
option.activeValue = valueCache
} else {
validationErrors = append(validationErrors, err)
}
}
handleOptionUpdate(option, true)
option.Unlock()
}
return validationErrors, requiresRestart, checked < len(newValues)
signalChanges()
return validationErrors
}
// ReplaceConfig sets the (prioritized) user defined config.
func ReplaceConfig(newValues map[string]interface{}) (validationErrors []*ValidationError, requiresRestart bool) {
// replaceDefaultConfig sets the (fallback) default config.
func replaceDefaultConfig(newValues map[string]interface{}) []*ValidationError {
var validationErrors []*ValidationError
// RLock the options because we are not adding or removing
// options from the registration but rather only update the
// options value which is guarded by the option's lock itself.
// options value which is guarded by the option's lock itself
optionsLock.RLock()
defer optionsLock.RUnlock()
for key, option := range options {
newValue, ok := newValues[key]
func() {
option.Lock()
defer option.Unlock()
option.activeValue = nil
if ok {
newValue = migrateValue(option, newValue)
valueCache, err := validateValue(option, newValue)
if err == nil {
option.activeValue = valueCache
} else {
validationErrors = append(validationErrors, err)
}
option.Lock()
option.activeDefaultValue = nil
if ok {
valueCache, err := validateValue(option, newValue)
if err == nil {
option.activeDefaultValue = valueCache
} else {
validationErrors = append(validationErrors, err)
}
handleOptionUpdate(option, true)
if option.RequiresRestart {
requiresRestart = true
}
}()
}
handleOptionUpdate(option, true)
option.Unlock()
}
signalChanges()
return validationErrors, requiresRestart
}
// ReplaceDefaultConfig sets the (fallback) default config.
func ReplaceDefaultConfig(newValues map[string]interface{}) (validationErrors []*ValidationError, requiresRestart bool) {
// RLock the options because we are not adding or removing
// options from the registration but rather only update the
// options value which is guarded by the option's lock itself.
optionsLock.RLock()
defer optionsLock.RUnlock()
for key, option := range options {
newValue, ok := newValues[key]
func() {
option.Lock()
defer option.Unlock()
option.activeDefaultValue = nil
if ok {
newValue = migrateValue(option, newValue)
valueCache, err := validateValue(option, newValue)
if err == nil {
option.activeDefaultValue = valueCache
} else {
validationErrors = append(validationErrors, err)
}
}
handleOptionUpdate(option, true)
if option.RequiresRestart {
requiresRestart = true
}
}()
}
signalChanges()
return validationErrors, requiresRestart
return validationErrors
}
// SetConfigOption sets a single value in the (prioritized) user defined config.
func SetConfigOption(key string, value any) error {
func SetConfigOption(key string, value interface{}) error {
return setConfigOption(key, value, true)
}
func setConfigOption(key string, value any, push bool) (err error) {
func setConfigOption(key string, value interface{}, push bool) (err error) {
option, err := GetOption(key)
if err != nil {
return err
@ -163,7 +117,6 @@ func setConfigOption(key string, value any, push bool) (err error) {
if value == nil {
option.activeValue = nil
} else {
value = migrateValue(option, value)
valueCache, vErr := validateValue(option, value)
if vErr == nil {
option.activeValue = valueCache
@ -187,7 +140,7 @@ func setConfigOption(key string, value any, push bool) (err error) {
// finalize change, activate triggers
signalChanges()
return SaveConfig()
return saveConfig()
}
// SetDefaultConfigOption sets a single value in the (fallback) default config.
@ -205,7 +158,6 @@ func setDefaultConfigOption(key string, value interface{}, push bool) (err error
if value == nil {
option.activeDefaultValue = nil
} else {
value = migrateValue(option, value)
valueCache, vErr := validateValue(option, value)
if vErr == nil {
option.activeDefaultValue = valueCache
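The `migrateValue` calls removed in this hunk run each registered migration over the raw option value before validation. A self-contained sketch of that chain — the types are simplified from the diff (the `*Option` parameter is elided) and the example migration is hypothetical:

```go
package main

import "fmt"

// MigrationFunc transforms a raw option value before validation,
// mirroring (in simplified form) the signature removed in this diff.
type MigrationFunc func(value any) any

// migrateValue applies all migrations in order, as the package's
// migrateValue does for option.Migrations.
func migrateValue(migrations []MigrationFunc, value any) any {
	for _, m := range migrations {
		value = m(value)
	}
	return value
}

func main() {
	migrations := []MigrationFunc{
		// Hypothetical migration: an old release stored the log level
		// as an index; newer code expects the level name.
		func(v any) any {
			if n, ok := v.(int); ok && n >= 0 && n < 3 {
				return []string{"debug", "info", "warn"}[n]
			}
			return v
		},
	}
	fmt.Println(migrateValue(migrations, 1))      // migrated
	fmt.Println(migrateValue(migrations, "warn")) // passed through unchanged
}
```

Running migrations before validation is the point of the ordering in the diff: a stale on-disk value gets rewritten first, so it can still pass the current validation rules.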


@ -24,7 +24,7 @@ func TestLayersGetters(t *testing.T) { //nolint:paralleltest
t.Fatal(err)
}
validationErrors, _ := ReplaceConfig(mapData)
validationErrors := replaceConfig(mapData)
if len(validationErrors) > 0 {
t.Fatalf("%d errors, first: %s", len(validationErrors), validationErrors[0].Error())
}


@ -5,8 +5,6 @@ import (
"fmt"
"math"
"reflect"
"github.com/safing/portbase/log"
)
type valueCache struct {
@ -66,18 +64,6 @@ func isAllowedPossibleValue(opt *Option, value interface{}) error {
return errors.New("value is not allowed")
}
// migrateValue runs all value migrations.
func migrateValue(option *Option, value any) any {
for _, migration := range option.Migrations {
newValue := migration(option, value)
if newValue != value {
log.Debugf("config: migrated %s value from %v to %v", option.Key, value, newValue)
}
value = newValue
}
return value
}
// validateValue ensures that value matches the expected type of option.
// It does not create a copy of the value!
func validateValue(option *Option, value interface{}) (*valueCache, *ValidationError) { //nolint:gocyclo
@ -90,6 +76,8 @@ func validateValue(option *Option, value interface{}) (*valueCache, *ValidationE
}
}
reflect.TypeOf(value).ConvertibleTo(reflect.TypeOf(""))
var validated *valueCache
switch v := value.(type) {
case string:


@ -5,22 +5,23 @@
// Byte slices added to the Container are not changed or appended, to not corrupt any other data that may be before and after the given slice.
// If interested, consider the following example to understand why this is important:
//
// package main
// package main
//
// import (
// "fmt"
// )
// import (
// "fmt"
// )
//
// func main() {
// a := []byte{0, 1,2,3,4,5,6,7,8,9}
// fmt.Printf("a: %+v\n", a)
// fmt.Printf("\nmaking changes...\n(we are not changing a directly)\n\n")
// b := a[2:6]
// c := append(b, 10, 11)
// fmt.Printf("b: %+v\n", b)
// fmt.Printf("c: %+v\n", c)
// fmt.Printf("a: %+v\n", a)
// }
// func main() {
// a := []byte{0, 1,2,3,4,5,6,7,8,9}
// fmt.Printf("a: %+v\n", a)
// fmt.Printf("\nmaking changes...\n(we are not changing a directly)\n\n")
// b := a[2:6]
// c := append(b, 10, 11)
// fmt.Printf("b: %+v\n", b)
// fmt.Printf("c: %+v\n", c)
// fmt.Printf("a: %+v\n", a)
// }
//
// run it here: https://play.golang.org/p/xu1BXT3QYeE
//
package container
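The doc comment above explains why Container never appends to byte slices it is handed. The linked playground example condenses to this runnable check (same values as the comment):

```go
package main

import "fmt"

func main() {
	a := []byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

	// b shares a's backing array: len 4, cap 8.
	b := a[2:6]

	// append finds spare capacity, so it writes into a's memory
	// instead of allocating a new array.
	c := append(b, 10, 11)

	fmt.Println(a) // a[6] and a[7] have been overwritten
	fmt.Println(b)
	fmt.Println(c)
}
```

This is why adding a slice to a Container must never append to it: the caller's surrounding data would be silently corrupted.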


@ -4,6 +4,7 @@ import (
"context"
"errors"
"fmt"
"io/ioutil"
"log"
"os"
"reflect"
@ -21,7 +22,7 @@ import (
)
func TestMain(m *testing.M) {
testDir, err := os.MkdirTemp("", "portbase-database-testing-")
testDir, err := ioutil.TempDir("", "portbase-database-testing-")
if err != nil {
panic(err)
}


@ -1,62 +1,63 @@
/*
Package database provides a universal interface for interacting with the database.
# A Lazy Database
A Lazy Database
The database system can handle Go structs as well as serialized data by the dsd package.
While data is in transit within the system, it does not know which form it currently has. Only when it reaches its destination, it must ensure that it is either of a certain type or dump it.
# Record Interface
Record Interface
The database system uses the Record interface to transparently handle all types of structs that get saved in the database. Structs include the Base struct to fulfill most parts of the Record interface.
Boilerplate Code:
type Example struct {
record.Base
sync.Mutex
type Example struct {
record.Base
sync.Mutex
Name string
Score int
}
Name string
Score int
}
var (
db = database.NewInterface(nil)
)
var (
db = database.NewInterface(nil)
)
// GetExample gets an Example from the database.
func GetExample(key string) (*Example, error) {
r, err := db.Get(key)
if err != nil {
return nil, err
}
// GetExample gets an Example from the database.
func GetExample(key string) (*Example, error) {
r, err := db.Get(key)
if err != nil {
return nil, err
}
// unwrap
if r.IsWrapped() {
// only allocate a new struct, if we need it
new := &Example{}
err = record.Unwrap(r, new)
if err != nil {
return nil, err
}
return new, nil
}
// unwrap
if r.IsWrapped() {
// only allocate a new struct, if we need it
new := &Example{}
err = record.Unwrap(r, new)
if err != nil {
return nil, err
}
return new, nil
}
// or adjust type
new, ok := r.(*Example)
if !ok {
return nil, fmt.Errorf("record not of type *Example, but %T", r)
}
return new, nil
}
// or adjust type
new, ok := r.(*Example)
if !ok {
return nil, fmt.Errorf("record not of type *Example, but %T", r)
}
return new, nil
}
func (e *Example) Save() error {
return db.Put(e)
}
func (e *Example) Save() error {
return db.Put(e)
}
func (e *Example) SaveAs(key string) error {
e.SetKey(key)
return db.PutNew(e)
}
func (e *Example) SaveAs(key string) error {
e.SetKey(key)
return db.PutNew(e)
}
*/
package database


@ -45,7 +45,7 @@ func (i *Interface) DelayedCacheWriter(ctx context.Context) error {
i.flushWriteCache(0)
case <-thresholdWriteTicker.C:
// Often check if the write cache has filled up to a certain degree and
// Often check if the the write cache has filled up to a certain degree and
// flush it to storage before we start evicting to-be-written entries and
// slow down the hot path again.
i.flushWriteCache(percentThreshold)


@ -114,7 +114,7 @@ func (reg *Registry) Migrate(ctx context.Context) (err error) {
if err := m.MigrateFunc(migrationCtx, lastAppliedMigration, target, db); err != nil {
diag.Wrapped = err
diag.FailedMigration = m.Description
tracer.Errorf("migration: migration for %s failed: %s - %s", reg.key, target.String(), m.Description)
tracer.Infof("migration: applied migration for %s: %s - %s", reg.key, target.String(), m.Description)
tracer.Submit()
return diag
}


@ -14,7 +14,6 @@ type snippet struct {
}
// ParseQuery parses a plaintext query. Special characters (that must be escaped with a '\') are: `\()` and any whitespaces.
//
//nolint:gocognit
func ParseQuery(query string) (*Query, error) {
snippets, err := extractSnippets(query)


@ -44,13 +44,6 @@ func (b *Base) SetKey(key string) {
}
}
// ResetKey resets the database name and key.
// Use with caution!
func (b *Base) ResetKey() {
b.dbName = ""
b.dbKey = ""
}
// Key returns the key of the database record.
// As the key must be set before any usage and can only be set once, this
// function may be used without locking the record.


@ -4,7 +4,7 @@ import (
"encoding/json"
"errors"
"fmt"
"io/fs"
"io/ioutil"
"os"
"path"
"regexp"
@ -115,9 +115,9 @@ func loadRegistry() error {
// read file
filePath := path.Join(rootStructure.Path, registryFileName)
data, err := os.ReadFile(filePath)
data, err := ioutil.ReadFile(filePath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
if os.IsNotExist(err) {
return nil
}
return err
@ -150,7 +150,7 @@ func saveRegistry(lock bool) error {
// write file
// TODO: write atomically (best effort)
filePath := path.Join(rootStructure.Path, registryFileName)
return os.WriteFile(filePath, data, 0o0600)
return ioutil.WriteFile(filePath, data, 0o0600)
}
func registryWriter() {


@ -2,6 +2,7 @@ package badger
import (
"context"
"io/ioutil"
"os"
"reflect"
"sync"
@ -40,7 +41,7 @@ type TestRecord struct { //nolint:maligned
func TestBadger(t *testing.T) {
t.Parallel()
testDir, err := os.MkdirTemp("", "testing-")
testDir, err := ioutil.TempDir("", "testing-")
if err != nil {
t.Fatal(err)
}


@ -2,6 +2,7 @@ package bbolt
import (
"context"
"io/ioutil"
"os"
"reflect"
"sync"
@ -42,7 +43,7 @@ type TestRecord struct { //nolint:maligned
func TestBBolt(t *testing.T) {
t.Parallel()
testDir, err := os.MkdirTemp("", "testing-")
testDir, err := ioutil.TempDir("", "testing-")
if err != nil {
t.Fatal(err)
}


@ -8,7 +8,7 @@ import (
"context"
"errors"
"fmt"
"io/fs"
"io/ioutil"
"os"
"path/filepath"
"runtime"
@ -47,7 +47,7 @@ func NewFSTree(name, location string) (storage.Interface, error) {
file, err := os.Stat(basePath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
if os.IsNotExist(err) {
err = os.MkdirAll(basePath, defaultDirMode)
if err != nil {
return nil, fmt.Errorf("fstree: failed to create directory %s: %w", basePath, err)
@ -88,9 +88,9 @@ func (fst *FSTree) Get(key string) (record.Record, error) {
return nil, err
}
data, err := os.ReadFile(dstPath)
data, err := ioutil.ReadFile(dstPath)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
if os.IsNotExist(err) {
return nil, storage.ErrNotFound
}
return nil, fmt.Errorf("fstree: failed to read file %s: %w", dstPath, err)
@ -177,7 +177,7 @@ func (fst *FSTree) Query(q *query.Query, local, internal bool) (*iterator.Iterat
walkRoot = walkPrefix
case err == nil:
walkRoot = filepath.Dir(walkPrefix)
case errors.Is(err, fs.ErrNotExist):
case os.IsNotExist(err):
walkRoot = filepath.Dir(walkPrefix)
default: // err != nil
return nil, fmt.Errorf("fstree: could not stat query root %s: %w", walkPrefix, err)
@ -210,9 +210,9 @@ func (fst *FSTree) queryExecutor(walkRoot string, queryIter *iterator.Iterator,
}
// read file
data, err := os.ReadFile(path)
data, err := ioutil.ReadFile(path)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("fstree: failed to read file %s: %w", path, err)
@ -275,7 +275,7 @@ func (fst *FSTree) Shutdown() error {
return nil
}
// writeFile mirrors os.WriteFile, replacing an existing file with the same
// writeFile mirrors ioutil.WriteFile, replacing an existing file with the same
// name atomically. This is not atomic on Windows, but still an improvement.
// TODO: Replace with github.com/google/renamio.WriteFile as soon as it is fixed on Windows.
// TODO: This has become a wont-fix. Explore other options.


@ -62,7 +62,7 @@ func (s *Sinkhole) PutMany(shadowDelete bool) (chan<- record.Record, <-chan erro
// start handler
go func() {
for range batch {
// discard everything
// nom, nom, nom
}
errs <- nil
}()


@ -10,7 +10,6 @@ import (
"io"
"github.com/fxamacker/cbor/v2"
"github.com/ghodss/yaml"
"github.com/vmihailenco/msgpack/v5"
"github.com/safing/portbase/formats/varint"
@ -42,12 +41,6 @@ func LoadAsFormat(data []byte, format uint8, t interface{}) (err error) {
return fmt.Errorf("dsd: failed to unpack json: %w, data: %s", err, utils.SafeFirst16Bytes(data))
}
return nil
case YAML:
err = yaml.Unmarshal(data, t)
if err != nil {
return fmt.Errorf("dsd: failed to unpack yaml: %w, data: %s", err, utils.SafeFirst16Bytes(data))
}
return nil
case CBOR:
err = cbor.Unmarshal(data, t)
if err != nil {
@@ -128,11 +121,6 @@ func dumpWithoutIdentifier(t interface{}, format uint8, indent string) ([]byte,
if err != nil {
return nil, err
}
case YAML:
data, err = yaml.Marshal(t)
if err != nil {
return nil, err
}
case CBOR:
data, err = cbor.Marshal(t)
if err != nil {


@@ -19,7 +19,6 @@ const (
GenCode = 71 // G
JSON = 74 // J
MsgPack = 77 // M
YAML = 89 // Y
// Compression types.
GZIP = 90 // Z
@@ -49,8 +48,6 @@ func ValidateSerializationFormat(format uint8) (validatedFormat uint8, ok bool)
return format, true
case JSON:
return format, true
case YAML:
return format, true
case MsgPack:
return format, true
default:


@@ -5,8 +5,9 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"mime"
"net/http"
"strings"
)
// HTTP Related Errors.
@@ -32,13 +33,26 @@ func LoadFromHTTPResponse(resp *http.Response, t interface{}) (format uint8, err
func loadFromHTTP(body io.Reader, mimeType string, t interface{}) (format uint8, err error) {
// Read full body.
data, err := io.ReadAll(body)
data, err := ioutil.ReadAll(body)
if err != nil {
return 0, fmt.Errorf("dsd: failed to read http body: %w", err)
}
// Load depending on mime type.
return MimeLoad(data, mimeType, t)
// Get mime type from header, then check, clean and verify it.
if mimeType == "" {
return 0, ErrMissingContentType
}
mimeType, _, err = mime.ParseMediaType(mimeType)
if err != nil {
return 0, fmt.Errorf("dsd: failed to parse content type: %w", err)
}
format, ok := MimeTypeToFormat[mimeType]
if !ok {
return 0, ErrIncompatibleFormat
}
// Parse data.
return format, LoadAsFormat(data, format, t)
}
// RequestHTTPResponseFormat sets the Accept header to the given format.
@@ -48,6 +62,11 @@ func RequestHTTPResponseFormat(r *http.Request, format uint8) (mimeType string,
if !ok {
return "", ErrIncompatibleFormat
}
// Omit charset.
mimeType, _, err = mime.ParseMediaType(mimeType)
if err != nil {
return "", fmt.Errorf("dsd: failed to parse content type: %w", err)
}
// Request response format.
r.Header.Set("Accept", mimeType)
@@ -58,7 +77,6 @@ func RequestHTTPResponseFormat(r *http.Request, format uint8) (mimeType string,
// DumpToHTTPRequest dumps the given data to the HTTP request using the given
// format. It also sets the Accept header to the same format.
func DumpToHTTPRequest(r *http.Request, t interface{}, format uint8) error {
// Get mime type and set request format.
mimeType, err := RequestHTTPResponseFormat(r, format)
if err != nil {
return err
@@ -70,9 +88,9 @@ func DumpToHTTPRequest(r *http.Request, t interface{}, format uint8) error {
return fmt.Errorf("dsd: failed to serialize: %w", err)
}
// Add data to request.
// Set body.
r.Header.Set("Content-Type", mimeType)
r.Body = io.NopCloser(bytes.NewReader(data))
r.Body = ioutil.NopCloser(bytes.NewReader(data))
return nil
}
@@ -80,8 +98,16 @@ func DumpToHTTPRequest(r *http.Request, t interface{}, format uint8) error {
// DumpToHTTPResponse dumps the given data to the HTTP response, using the
// format defined in the request's Accept header.
func DumpToHTTPResponse(w http.ResponseWriter, r *http.Request, t interface{}) error {
// Serialize data based on accept header.
data, mimeType, _, err := MimeDump(t, r.Header.Get("Accept"))
// Get format from Accept header.
// TODO: Improve parsing of Accept header.
mimeType := r.Header.Get("Accept")
format, ok := MimeTypeToFormat[mimeType]
if !ok {
return ErrIncompatibleFormat
}
// Serialize data.
data, err := dumpWithoutIdentifier(t, format, "")
if err != nil {
return fmt.Errorf("dsd: failed to serialize: %w", err)
}
@@ -95,84 +121,16 @@ func DumpToHTTPResponse(w http.ResponseWriter, r *http.Request, t interface{}) e
return nil
}
// MimeLoad loads the given data into the interface based on the given mime type accept header.
func MimeLoad(data []byte, accept string, t interface{}) (format uint8, err error) {
// Find format.
format = FormatFromAccept(accept)
if format == 0 {
return 0, ErrIncompatibleFormat
}
// Load data.
err = LoadAsFormat(data, format, t)
return format, err
}
// MimeDump dumps the given interface based on the given mime type accept header.
func MimeDump(t any, accept string) (data []byte, mimeType string, format uint8, err error) {
// Find format.
format = FormatFromAccept(accept)
if format == AUTO {
return nil, "", 0, ErrIncompatibleFormat
}
// Serialize and return.
data, err = dumpWithoutIdentifier(t, format, "")
return data, mimeType, format, err
}
// FormatFromAccept returns the format for the given accept definition.
// The accept parameter matches the format of the HTTP Accept header.
// Special cases, in this order:
// - If accept is an empty string: returns default serialization format.
// - If accept contains no supported format, but a wildcard: returns default serialization format.
// - If accept contains no supported format, and no wildcard: returns AUTO format.
func FormatFromAccept(accept string) (format uint8) {
if accept == "" {
return DefaultSerializationFormat
}
var foundWildcard bool
for _, mimeType := range strings.Split(accept, ",") {
// Clean mime type.
mimeType = strings.TrimSpace(mimeType)
mimeType, _, _ = strings.Cut(mimeType, ";")
if strings.Contains(mimeType, "/") {
_, mimeType, _ = strings.Cut(mimeType, "/")
}
mimeType = strings.ToLower(mimeType)
// Check if mime type is supported.
format, ok := MimeTypeToFormat[mimeType]
if ok {
return format
}
// Return default mime type as fallback if any mimetype is okay.
if mimeType == "*" {
foundWildcard = true
}
}
if foundWildcard {
return DefaultSerializationFormat
}
return AUTO
}
// Format and MimeType mappings.
var (
FormatToMimeType = map[uint8]string{
JSON: "application/json; charset=utf-8",
CBOR: "application/cbor",
JSON: "application/json",
MsgPack: "application/msgpack",
YAML: "application/yaml",
}
MimeTypeToFormat = map[string]uint8{
"cbor": CBOR,
"json": JSON,
"msgpack": MsgPack,
"yaml": YAML,
"yml": YAML,
"application/json": JSON,
"application/cbor": CBOR,
"application/msgpack": MsgPack,
}
)


@@ -1,45 +0,0 @@
package dsd
import (
"mime"
"testing"
"github.com/stretchr/testify/assert"
)
func TestMimeTypes(t *testing.T) {
t.Parallel()
// Test static maps.
for _, mimeType := range FormatToMimeType {
cleaned, _, err := mime.ParseMediaType(mimeType)
assert.NoError(t, err, "mime type must be parse-able")
assert.Equal(t, mimeType, cleaned, "mime type should be clean in map already")
}
for mimeType := range MimeTypeToFormat {
cleaned, _, err := mime.ParseMediaType(mimeType)
assert.NoError(t, err, "mime type must be parse-able")
assert.Equal(t, mimeType, cleaned, "mime type should be clean in map already")
}
// Test assumptions.
for accept, format := range map[string]uint8{
"application/json, image/webp": JSON,
"image/webp, application/json": JSON,
"application/json;q=0.9, image/webp": JSON,
"*": DefaultSerializationFormat,
"*/*": DefaultSerializationFormat,
"text/yAMl": YAML,
" * , yaml ": YAML,
"yaml;charset ,*": YAML,
"xml,*": DefaultSerializationFormat,
"text/xml, text/other": AUTO,
"text/*": DefaultSerializationFormat,
"yaml ;charset": AUTO, // Invalid mimetype format.
"": DefaultSerializationFormat,
"x": AUTO,
} {
derivedFormat := FormatFromAccept(accept)
assert.Equal(t, format, derivedFormat, "assumption for %q should hold", accept)
}
}

go.mod

@@ -1,73 +1,40 @@
module github.com/safing/portbase
go 1.21.1
toolchain go1.21.2
go 1.15
require (
github.com/VictoriaMetrics/metrics v1.29.0
github.com/VictoriaMetrics/metrics v1.22.2
github.com/aead/serpent v0.0.0-20160714141033-fba169763ea6
github.com/armon/go-radix v1.0.0
github.com/bluele/gcache v0.0.2
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/davecgh/go-spew v1.1.1
github.com/dgraph-io/badger v1.6.2
github.com/fxamacker/cbor/v2 v2.5.0
github.com/ghodss/yaml v1.0.0
github.com/gofrs/uuid v4.4.0+incompatible
github.com/gorilla/mux v1.8.1
github.com/gorilla/websocket v1.5.1
github.com/dgraph-io/ristretto v0.1.0 // indirect
github.com/fxamacker/cbor/v2 v2.4.0
github.com/gofrs/uuid v4.2.0+incompatible
github.com/golang/glog v1.0.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/gorilla/mux v1.8.0
github.com/gorilla/websocket v1.5.0
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1
github.com/hashicorp/go-version v1.6.0
github.com/mitchellh/copystructure v1.2.0
github.com/safing/jess v0.3.3
github.com/safing/portmaster-android/go v0.0.0-20230830120134-3226ceac3bec
github.com/seehuhn/fortuna v1.0.1
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/stretchr/testify v1.8.4
github.com/stretchr/testify v1.8.0
github.com/tevino/abool v1.2.0
github.com/tidwall/gjson v1.17.0
github.com/tidwall/gjson v1.14.3
github.com/tidwall/sjson v1.2.5
github.com/vmihailenco/msgpack/v5 v5.4.1
go.etcd.io/bbolt v1.3.8
golang.org/x/exp v0.0.0-20231219180239-dc181d75b848
golang.org/x/sync v0.5.0
golang.org/x/sys v0.15.0
)
require (
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 // indirect
github.com/aead/ecdh v0.2.0 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/dgraph-io/ristretto v0.1.1 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/fxamacker/cbor v1.5.1 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/golang/glog v1.2.0 // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.6 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/satori/go.uuid v1.2.0 // indirect
github.com/seehuhn/sha256d v1.0.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.1 // indirect
github.com/tklauser/go-sysconf v0.3.13 // indirect
github.com/tklauser/numcpus v0.7.0 // indirect
github.com/valyala/fastrand v1.1.0 // indirect
github.com/valyala/histogram v1.2.0 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect
github.com/zeebo/blake3 v0.2.3 // indirect
golang.org/x/crypto v0.17.0 // indirect
golang.org/x/net v0.19.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/protobuf v1.32.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
gvisor.dev/gvisor v0.0.0-20231222013827-149350e5c428 // indirect
github.com/tklauser/go-sysconf v0.3.9 // indirect
github.com/tklauser/numcpus v0.4.0 // indirect
github.com/vmihailenco/msgpack/v5 v5.3.5
github.com/yusufpapurcu/wmi v1.2.2 // indirect
go.etcd.io/bbolt v1.3.6
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/sys v0.0.0-20220209214540-3681064d5158
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/protobuf v1.27.1 // indirect
)

go.sum

@@ -2,10 +2,8 @@ github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 h1:cTp8I5+VIo
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/VictoriaMetrics/metrics v1.29.0 h1:3qC+jcvymGJaQKt6wsXIlJieVFQwD/par9J1Bxul+Mc=
github.com/VictoriaMetrics/metrics v1.29.0/go.mod h1:r7hveu6xMdUACXvB8TYdAj8WEsKzWB0EkpJN+RDtOf8=
github.com/aead/ecdh v0.2.0 h1:pYop54xVaq/CEREFEcukHRZfTdjiWvYIsZDXXrBapQQ=
github.com/aead/ecdh v0.2.0/go.mod h1:a9HHtXuSo8J1Js1MwLQx2mBhkXMT6YwUmVVEY4tTB8U=
github.com/VictoriaMetrics/metrics v1.22.2 h1:A6LsNidYwkAHetxsvNFaUWjtzu5ltdgNEoS6i7Bn+6I=
github.com/VictoriaMetrics/metrics v1.22.2/go.mod h1:rAr/llLpEnAdTehiNlUxKgnjcOuROSzpw0GvjpEbvFc=
github.com/aead/serpent v0.0.0-20160714141033-fba169763ea6 h1:5L8Mj9Co9sJVgW3TpYk2gxGJnDjsYuboNTcRmbtGKGs=
github.com/aead/serpent v0.0.0-20160714141033-fba169763ea6/go.mod h1:3HgLJ9d18kXMLQlJvIY3+FszZYMxCz8WfE2MQ7hDY0w=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
@@ -13,10 +11,11 @@ github.com/armon/go-radix v1.0.0 h1:F4z6KzEeeQIMeLFa97iZU6vupzoecKdU5TX24SNppXI=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
@@ -27,41 +26,32 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/dgraph-io/badger v1.6.2 h1:mNw0qs90GVgGGWylh0umH5iag1j6n/PeJtNvL6KY/x8=
github.com/dgraph-io/badger v1.6.2/go.mod h1:JW2yswe3V058sS0kZ2h/AXeDSqFjxnZcRrVH//y2UQE=
github.com/dgraph-io/ristretto v0.0.2/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=
github.com/dgraph-io/ristretto v0.1.1 h1:6CWw5tJNgpegArSHpNHJKldNeq03FQCwYvfMVWajOK8=
github.com/dgraph-io/ristretto v0.1.1/go.mod h1:S1GPSBCYCIhmVNfcth17y2zZtQT6wzkzgwUve0VDWWA=
github.com/dgraph-io/ristretto v0.1.0 h1:Jv3CGQHp9OjuMBSne1485aDpUkTKEcUqF+jm/LuerPI=
github.com/dgraph-io/ristretto v0.1.0/go.mod h1:fux0lOrBhrVCJd3lcTHsIJhq1T2rokOu6v9Vcb3Q9ug=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2 h1:tdlZCpZ/P9DhczCTSixgIKmwPv6+wP5DGjqLYw5SUiA=
github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fxamacker/cbor v1.5.1 h1:XjQWBgdmQyqimslUh5r4tUGmoqzHmBFQOImkWGi2awg=
github.com/fxamacker/cbor v1.5.1/go.mod h1:3aPGItF174ni7dDzd6JZ206H8cmr4GDNBGpPa971zsU=
github.com/fxamacker/cbor/v2 v2.5.0 h1:oHsG0V/Q6E/wqTS2O1Cozzsy69nqCiguo5Q1a1ADivE=
github.com/fxamacker/cbor/v2 v2.5.0/go.mod h1:TA1xS00nchWmaBnEIxPSE5oHLuJBAVvqrtAnWBwBCVo=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/fxamacker/cbor/v2 v2.4.0 h1:ri0ArlOR+5XunOP8CRUowT0pSJOwhW098ZCUyskZD88=
github.com/fxamacker/cbor/v2 v2.4.0/go.mod h1:TA1xS00nchWmaBnEIxPSE5oHLuJBAVvqrtAnWBwBCVo=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/gofrs/uuid v4.4.0+incompatible h1:3qXRTX8/NbyulANqlc0lchS1gqAVxRgsuW1YrTJupqA=
github.com/gofrs/uuid v4.4.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gofrs/uuid v4.2.0+incompatible h1:yyYWMnhkhrKwwr8gAOcOCYxOOscHgDS9yZgBrnJfGa0=
github.com/gofrs/uuid v4.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.2.0 h1:uCdmnmatrKCgMBlM4rMuJZWOkPDqdbZPnrMXDY4gI68=
github.com/golang/glog v1.2.0/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
github.com/golang/glog v1.0.0 h1:nfP3RFugxnNRyKgeWd4oI1nYvXpxrx8ck8ZrcizshdQ=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v1.1.2 h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU=
github.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.1 h1:gmztn0JnHVt9JZquRuzLw3g4wouNVzKL15iLr/zn/QY=
github.com/gorilla/websocket v1.5.1/go.mod h1:x3kM2JMyaluk02fnUJpQuwD2dCS5NDG2ZHL0uE0tcaY=
github.com/gorilla/mux v1.8.0 h1:i40aqfkR1h2SlN9hojwV5ZA91wcXFOvkdNIeFDP5koI=
github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
@@ -71,9 +61,6 @@ github.com/hashicorp/go-version v1.6.0 h1:feTTfFNnjP967rlCxM/I9g701jU+RN74YKx2mO
github.com/hashicorp/go-version v1.6.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/klauspost/cpuid/v2 v2.0.12/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
github.com/klauspost/cpuid/v2 v2.2.6 h1:ndNyv040zDGIDh8thGkXYjnFtiN02M1PVVF+JE/48xc=
github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/kr/pretty v0.2.0 h1:s5hAObm+yFO5uHYt5dYjxi2rXrsnmRpJx4OYvIWUaQs=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
@@ -86,8 +73,6 @@ github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrk
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/reflectwalk v1.0.2 h1:G2LzWKi524PWgd3mLHV8Y5k7s6XUvT0Gef6zxSIeXaQ=
github.com/mitchellh/reflectwalk v1.0.2/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
@@ -95,12 +80,6 @@ github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/safing/jess v0.3.3 h1:0U0bWdO0sFCgox+nMOqISFrnJpVmi+VFOW1xdX6q3qw=
github.com/safing/jess v0.3.3/go.mod h1:t63qHB+4xd1HIv9MKN/qI2rc7ytvx7d6l4hbX7zxer0=
github.com/safing/portmaster-android/go v0.0.0-20230830120134-3226ceac3bec h1:oSJY1seobofPwpMoJRkCgXnTwfiQWNfGMCPDfqgAEfg=
github.com/safing/portmaster-android/go v0.0.0-20230830120134-3226ceac3bec/go.mod h1:abwyAQrZGemWbSh/aCD9nnkp0SvFFf/mGWkAbOwPnFE=
github.com/satori/go.uuid v1.2.0 h1:0uYX9dsZ2yD7q2RtLRtPSdGDWzjeM3TbMJP9utgA0ww=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/seehuhn/fortuna v1.0.1 h1:lu9+CHsmR0bZnx5Ay646XvCSRJ8PJTi5UYJwDBX68H0=
github.com/seehuhn/fortuna v1.0.1/go.mod h1:LX8ubejCnUoT/hX+1aKUtbKls2H6DRkqzkc7TdR3iis=
github.com/seehuhn/sha256d v1.0.0 h1:TXTsAuEWr02QjRm153Fnvvb6fXXDo7Bmy1FizxarGYw=
@@ -116,86 +95,79 @@ github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb6
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/tevino/abool v1.2.0 h1:heAkClL8H6w+mK5md9dzsuohKeXHUpY7Vw0ZCKW+huA=
github.com/tevino/abool v1.2.0/go.mod h1:qc66Pna1RiIsPa7O4Egxxs9OqkuxDX55zznh9K07Tzg=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.17.0 h1:/Jocvlh98kcTfpN2+JzGQWQcqrPQwDrVEMApx/M5ZwM=
github.com/tidwall/gjson v1.17.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=
github.com/tidwall/gjson v1.14.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
github.com/tklauser/go-sysconf v0.3.13 h1:GBUpcahXSpR2xN01jhkNAbTLRk2Yzgggk8IM08lq3r4=
github.com/tklauser/go-sysconf v0.3.13/go.mod h1:zwleP4Q4OehZHGn4CYZDipCgg9usW5IJePewFCGVEa0=
github.com/tklauser/numcpus v0.7.0 h1:yjuerZP127QG9m5Zh/mSO4wqurYil27tHrqwRoRjpr4=
github.com/tklauser/numcpus v0.7.0/go.mod h1:bb6dMVcj8A42tSE7i32fsIUCbQNllK5iDguyOZRUzAY=
github.com/tklauser/go-sysconf v0.3.9 h1:JeUVdAOWhhxVcU6Eqr/ATFHgXk/mmiItdKeJPev3vTo=
github.com/tklauser/go-sysconf v0.3.9/go.mod h1:11DU/5sG7UexIrp/O6g35hrWzu0JxlwQ3LSFUzyeuhs=
github.com/tklauser/numcpus v0.3.0/go.mod h1:yFGUr7TUHQRAhyqBcEg0Ge34zDBAsIvJJcyE6boqnA8=
github.com/tklauser/numcpus v0.4.0 h1:E53Dm1HjH1/R2/aoCtXtPgzmElmn51aOkhCFSuZq//o=
github.com/tklauser/numcpus v0.4.0/go.mod h1:1+UI3pD8NW14VMwdgJNJ1ESk2UnwhAnz5hMwiKKqXCQ=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/valyala/fastrand v1.1.0 h1:f+5HkLW4rsgzdNoleUOB69hyT9IlD2ZQh9GyDMfb5G8=
github.com/valyala/fastrand v1.1.0/go.mod h1:HWqCzkrkg6QXT8V2EXWvXCoow7vLwOFN002oeRzjapQ=
github.com/valyala/histogram v1.2.0 h1:wyYGAZZt3CpwUiIb9AU/Zbllg1llXyrtApRS815OLoQ=
github.com/valyala/histogram v1.2.0/go.mod h1:Hb4kBwb4UxsaNbbbh+RRz8ZR6pdodR57tzWUS3BUzXY=
github.com/vmihailenco/msgpack/v5 v5.4.1 h1:cQriyiUvjTwOHg8QZaPihLWeRAAVoCpE00IUPn0Bjt8=
github.com/vmihailenco/msgpack/v5 v5.4.1/go.mod h1:GaZTsDaehaPpQVyxrf5mtQlH+pc21PIudVV/E3rRQok=
github.com/vmihailenco/msgpack/v5 v5.3.5 h1:5gO0H1iULLWGhs2H5tbAHIZTV8/cYafcFOr9znI5mJU=
github.com/vmihailenco/msgpack/v5 v5.3.5/go.mod h1:7xyJ9e+0+9SaZT0Wt1RGleJXzli6Q/V5KbhBonMG9jc=
github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g=
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yusufpapurcu/wmi v1.2.3 h1:E1ctvB7uKFMOJw3fdOW32DwGE9I7t++CRUEMKvFoFiw=
github.com/yusufpapurcu/wmi v1.2.3/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
github.com/zeebo/assert v1.1.0 h1:hU1L1vLTHsnO8x8c9KAR5GmM5QscxHg5RNU5z5qbUWY=
github.com/zeebo/assert v1.1.0/go.mod h1:Pq9JiuJQpG8JLJdtkwrJESF0Foym2/D9XMU5ciN/wJ0=
github.com/zeebo/blake3 v0.2.3 h1:TFoLXsjeXqRNFxSbk35Dk4YtszE/MQQGK10BH4ptoTg=
github.com/zeebo/blake3 v0.2.3/go.mod h1:mjJjZpnsyIVtVgTOSpJ9vmRE4wgDeyt2HU3qXvvKCaQ=
github.com/zeebo/pcg v1.0.1 h1:lyqfGeWiv4ahac6ttHs+I5hwtH/+1mrhlCtVNQM2kHo=
github.com/zeebo/pcg v1.0.1/go.mod h1:09F0S9iiKrwn9rlI5yjLkmrug154/YRW6KnnXVDM/l4=
go.etcd.io/bbolt v1.3.8 h1:xs88BrvEv273UsB79e0hcVrlUWmS0a8upikMFhSyAtA=
go.etcd.io/bbolt v1.3.8/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
github.com/yusufpapurcu/wmi v1.2.2 h1:KBNDSne4vP5mbSWnJbO+51IMOXJB67QiYCSBrubbPRg=
github.com/yusufpapurcu/wmi v1.2.2/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=
golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/exp v0.0.0-20231219180239-dc181d75b848 h1:+iq7lrkxmFNBM7xx+Rae2W6uyPfhPeDWD+n+JgppptE=
golang.org/x/exp v0.0.0-20231219180239-dc181d75b848/go.mod h1:iRJReGqOEeBhDZGkGbynYwcHlctCvnjTYIamk7uXpHI=
golang.org/x/mod v0.14.0 h1:dGoOF9QVLYng8IHTm7BAyWqCqSheQ5pYWGhzW00YJr0=
golang.org/x/mod v0.14.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/sync v0.5.0 h1:60k92dhOjHxJkrqnwsfl8KuaHbn/5dl0lUPUklKo3qE=
golang.org/x/sync v0.5.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd h1:O7DYs+zxREGLKzKoMQrtrEacpb0ZVXA5rIwylE2Xchk=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20221010170243-090e33056c14/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210816074244-15123e1e1f71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158 h1:rm+CHSpPEEW2IsXUib1ThaHIjuBVZjxNgSKmBLFfD4c=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.32.0 h1:pPC6BG5ex8PDFnkbrGU3EixyhKcQ2aDuBS36lqK/C7I=
google.golang.org/protobuf v1.32.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gvisor.dev/gvisor v0.0.0-20231222013827-149350e5c428 h1:UvBO2UZXf0d1zWJsfD8Robnxa2lyGm8Vnb+Nou5b1no=
gvisor.dev/gvisor v0.0.0-20231222013827-149350e5c428/go.mod h1:10sU+Uh5KKNv1+2x2A0Gvzt8FjD3ASIhorV3YsauXhk=


@@ -5,150 +5,82 @@ import (
"fmt"
"os"
"runtime"
"runtime/debug"
"strings"
"sync"
)
var (
name string
license string
name = "[NAME]"
version = "[version unknown]"
commit = "[commit unknown]"
license = "[license unknown]"
buildOptions = "[options unknown]"
buildUser = "[user unknown]"
buildHost = "[host unknown]"
buildDate = "[date unknown]"
buildSource = "[source unknown]"
version = "dev build"
versionNumber = "0.0.0"
buildSource = "unknown"
buildTime = "unknown"
info *Info
loadInfo sync.Once
compareVersion bool
)
func init() {
// Replace space placeholders.
buildSource = strings.ReplaceAll(buildSource, "_", " ")
buildTime = strings.ReplaceAll(buildTime, "_", " ")
// Convert version string from git tag to expected format.
version = strings.TrimSpace(strings.ReplaceAll(strings.TrimPrefix(version, "v"), "_", " "))
versionNumber = strings.TrimSpace(strings.TrimSuffix(version, "dev build"))
if versionNumber == "" {
versionNumber = "0.0.0"
}
// Get build info.
buildInfo, _ := debug.ReadBuildInfo()
buildSettings := make(map[string]string)
for _, setting := range buildInfo.Settings {
buildSettings[setting.Key] = setting.Value
}
// Add "dev build" to version if repo is dirty.
if buildSettings["vcs.modified"] == "true" &&
!strings.HasSuffix(version, "dev build") {
version += " dev build"
}
}
// Info holds the programs meta information.
type Info struct { //nolint:maligned
Name string
Version string
VersionNumber string
License string
Source string
BuildTime string
CGO bool
Commit string
CommitTime string
Dirty bool
debug.BuildInfo
type Info struct {
Name string
Version string
License string
Commit string
BuildOptions string
BuildUser string
BuildHost string
BuildDate string
BuildSource string
}
// Set sets meta information via the main routine. This should be the first thing your program calls.
func Set(setName string, setVersion string, setLicenseName string) {
func Set(setName string, setVersion string, setLicenseName string, compareVersionToTag bool) {
name = setName
version = setVersion
license = setLicenseName
if setVersion != "" {
version = setVersion
}
compareVersion = compareVersionToTag
}
// GetInfo returns all the meta information about the program.
func GetInfo() *Info {
loadInfo.Do(func() {
buildInfo, _ := debug.ReadBuildInfo()
buildSettings := make(map[string]string)
for _, setting := range buildInfo.Settings {
buildSettings[setting.Key] = setting.Value
}
info = &Info{
Name: name,
Version: version,
VersionNumber: versionNumber,
License: license,
Source: buildSource,
BuildTime: buildTime,
CGO: buildSettings["CGO_ENABLED"] == "1",
Commit: buildSettings["vcs.revision"],
CommitTime: buildSettings["vcs.time"],
Dirty: buildSettings["vcs.modified"] == "true",
BuildInfo: *buildInfo,
}
if info.Commit == "" {
info.Commit = "unknown"
}
if info.CommitTime == "" {
info.CommitTime = "unknown"
}
})
return info
return &Info{
Name: name,
Version: version,
Commit: commit,
License: license,
BuildOptions: buildOptions,
BuildUser: buildUser,
BuildHost: buildHost,
BuildDate: buildDate,
BuildSource: buildSource,
}
}
// Version returns the annotated version.
// Version returns the short version string.
func Version() string {
return version
}
// VersionNumber returns the version number only.
func VersionNumber() string {
return versionNumber
if !compareVersion || strings.HasPrefix(commit, fmt.Sprintf("tags/v%s-0-", version)) {
return version
}
return version + "*"
}
// FullVersion returns the full and detailed version string.
func FullVersion() string {
info := GetInfo()
builder := new(strings.Builder)
// Name and version.
builder.WriteString(fmt.Sprintf("%s %s\n", info.Name, version))
// Build info.
cgoInfo := "-cgo"
if info.CGO {
cgoInfo = "+cgo"
s := ""
if !compareVersion || strings.HasPrefix(commit, fmt.Sprintf("tags/v%s-0-", version)) {
s += fmt.Sprintf("%s\nversion %s\n", name, version)
} else {
s += fmt.Sprintf("%s\ndevelopment build, built on top version %s\n", name, version)
}
builder.WriteString(fmt.Sprintf("\nbuilt with %s (%s %s) for %s/%s\n", runtime.Version(), runtime.Compiler, cgoInfo, runtime.GOOS, runtime.GOARCH))
builder.WriteString(fmt.Sprintf(" at %s\n", info.BuildTime))
// Commit info.
dirtyInfo := "clean"
if info.Dirty {
dirtyInfo = "dirty"
}
builder.WriteString(fmt.Sprintf("\ncommit %s (%s)\n", info.Commit, dirtyInfo))
builder.WriteString(fmt.Sprintf(" at %s\n", info.CommitTime))
builder.WriteString(fmt.Sprintf(" from %s\n", info.Source))
builder.WriteString(fmt.Sprintf("\nLicensed under the %s license.", license))
return builder.String()
s += fmt.Sprintf("\ncommit %s\n", commit)
s += fmt.Sprintf("built with %s (%s) %s/%s\n", runtime.Version(), runtime.Compiler, runtime.GOOS, runtime.GOARCH)
s += fmt.Sprintf(" using options %s\n", strings.ReplaceAll(buildOptions, "§", " "))
s += fmt.Sprintf(" by %s@%s\n", buildUser, buildHost)
s += fmt.Sprintf(" on %s\n", buildDate)
s += fmt.Sprintf("\nLicensed under the %s license.\nThe source code is available here: %s", license, buildSource)
return s
}
// CheckVersion checks if the metadata is ok.
@@ -160,9 +92,19 @@ func CheckVersion() error {
return nil // testing on windows
default:
// check version information
if name == "" || license == "" {
if name == "[NAME]" {
return errors.New("must call SetInfo() before calling CheckVersion()")
}
if version == "[version unknown]" ||
commit == "[commit unknown]" ||
license == "[license unknown]" ||
buildOptions == "[options unknown]" ||
buildUser == "[user unknown]" ||
buildHost == "[host unknown]" ||
buildDate == "[date unknown]" ||
buildSource == "[source unknown]" {
return errors.New("please build using the supplied build script.\n$ ./build {main.go|...}")
}
}
return nil


@@ -93,10 +93,7 @@ func log(level Severity, msg string, tracer *ContextTracer) {
// wake up writer if necessary
if logsWaitingFlag.SetToIf(false, true) {
select {
case logsWaiting <- struct{}{}:
default:
}
logsWaiting <- struct{}{}
}
}
@@ -184,7 +181,7 @@ func Errorf(format string, things ...interface{}) {
}
}
// Critical is used to log events that completely break the system. Operation cannot continue. User/Admin must be informed.
// Critical is used to log events that completely break the system. Operation connot continue. User/Admin must be informed.
func Critical(msg string) {
atomic.AddUint64(critLogLines, 1)
if fastcheck(CriticalLevel) {
@@ -192,7 +189,7 @@ func Critical(msg string) {
}
}
// Criticalf is used to log events that completely break the system. Operation cannot continue. User/Admin must be informed.
// Criticalf is used to log events that completely break the system. Operation connot continue. User/Admin must be informed.
func Criticalf(format string, things ...interface{}) {
atomic.AddUint64(critLogLines, 1)
if fastcheck(CriticalLevel) {


@@ -109,7 +109,7 @@ var (
pkgLevels = make(map[string]Severity)
pkgLevelsLock sync.Mutex
logsWaiting = make(chan struct{}, 1)
logsWaiting = make(chan struct{}, 4)
logsWaitingFlag = abool.NewBool(false)
shutdownFlag = abool.NewBool(false)


@@ -17,10 +17,10 @@ type (
// AdapterFunc is a convenience type for implementing
// Adapter.
AdapterFunc func(msg Message, duplicates uint64)
AdapterFunc func(msg Message, duplciates uint64)
// FormatFunc formats msg into a string.
FormatFunc func(msg Message, duplicates uint64) string
FormatFunc func(msg Message, duplciates uint64) string
// SimpleFileAdapter implements Adapter and writes all
// messages to File.


@@ -3,8 +3,10 @@ package metrics
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"time"
@@ -16,41 +18,20 @@ import (
func registerAPI() error {
api.RegisterHandler("/metrics", &metricsAPI{})
if err := api.RegisterEndpoint(api.Endpoint{
return api.RegisterEndpoint(api.Endpoint{
Path: "metrics/list",
Read: api.PermitAnyone,
MimeType: api.MimeTypeJSON,
BelongsTo: module,
DataFunc: func(*api.Request) ([]byte, error) {
registryLock.RLock()
defer registryLock.RUnlock()
return json.Marshal(registry)
},
Name: "Export Registered Metrics",
Description: "List all registered metrics with their metadata.",
Path: "metrics/list",
Read: api.Dynamic,
BelongsTo: module,
StructFunc: func(ar *api.Request) (any, error) {
return ExportMetrics(ar.AuthToken.Read), nil
},
}); err != nil {
return err
}
if err := api.RegisterEndpoint(api.Endpoint{
Name: "Export Metric Values",
Description: "List all exportable metric values.",
Path: "metrics/values",
Read: api.Dynamic,
Parameters: []api.Parameter{{
Method: http.MethodGet,
Field: "internal-only",
Description: "Specify to only return metrics with an alternative internal ID.",
}},
BelongsTo: module,
StructFunc: func(ar *api.Request) (any, error) {
return ExportValues(
ar.AuthToken.Read,
ar.Request.URL.Query().Has("internal-only"),
), nil
},
}); err != nil {
return err
}
return nil
})
}
type metricsAPI struct{}
@@ -130,7 +111,7 @@ func writeMetricsTo(ctx context.Context, url string) error {
}
// Get and return error.
body, _ := io.ReadAll(resp.Body)
body, _ := ioutil.ReadAll(resp.Body)
return fmt.Errorf(
"got %s while writing metrics to %s: %s",
resp.Status,
@@ -141,14 +122,14 @@ func writeMetricsTo(ctx context.Context, url string) error {
func metricsWriter(ctx context.Context) error {
pushURL := pushOption()
ticker := module.NewSleepyTicker(1*time.Minute, 0)
ticker := time.NewTicker(1 * time.Minute)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.Wait():
case <-ticker.C:
err := writeMetricsTo(ctx, pushURL)
if err != nil {
return err


@@ -43,10 +43,6 @@ type Options struct {
// Name defines an optional human readable name for the metric.
Name string
// InternalID specifies an alternative internal ID that will be used when
// exposing the metric via the API in a structured format.
InternalID string
// AlertLimit defines an upper limit that triggers an alert.
AlertLimit float64


@@ -42,8 +42,3 @@ func NewCounter(id string, labels map[string]string, opts *Options) (*Counter, e
return m, nil
}
// CurrentValue returns the current counter value.
func (c *Counter) CurrentValue() uint64 {
return c.Get()
}


@@ -50,11 +50,6 @@ func NewFetchingCounter(id string, labels map[string]string, fn func() uint64, o
return m, nil
}
// CurrentValue returns the current counter value.
func (fc *FetchingCounter) CurrentValue() uint64 {
return fc.fetchCnt()
}
// WritePrometheus writes the metric in the prometheus format to the given writer.
func (fc *FetchingCounter) WritePrometheus(w io.Writer) {
fc.counter.Set(fc.fetchCnt())


@@ -1,89 +0,0 @@
package metrics
import (
"github.com/safing/portbase/api"
)
// UIntMetric is an interface for special functions of uint metrics.
type UIntMetric interface {
CurrentValue() uint64
}
// FloatMetric is an interface for special functions of float metrics.
type FloatMetric interface {
CurrentValue() float64
}
// MetricExport is used to export a metric and its current value.
type MetricExport struct {
Metric
CurrentValue any
}
// ExportMetrics exports all registered metrics.
func ExportMetrics(requestPermission api.Permission) []*MetricExport {
registryLock.RLock()
defer registryLock.RUnlock()
export := make([]*MetricExport, 0, len(registry))
for _, metric := range registry {
// Check permission.
if requestPermission < metric.Opts().Permission {
continue
}
// Add metric with current value.
export = append(export, &MetricExport{
Metric: metric,
CurrentValue: getCurrentValue(metric),
})
}
return export
}
// ExportValues exports the values of all supported metrics.
func ExportValues(requestPermission api.Permission, internalOnly bool) map[string]any {
registryLock.RLock()
defer registryLock.RUnlock()
export := make(map[string]any, len(registry))
for _, metric := range registry {
// Check permission.
if requestPermission < metric.Opts().Permission {
continue
}
// Get Value.
v := getCurrentValue(metric)
if v == nil {
continue
}
// Get ID.
var id string
switch {
case metric.Opts().InternalID != "":
id = metric.Opts().InternalID
case internalOnly:
continue
default:
id = metric.LabeledID()
}
// Add to export
export[id] = v
}
return export
}
func getCurrentValue(metric Metric) any {
if m, ok := metric.(UIntMetric); ok {
return m.CurrentValue()
}
if m, ok := metric.(FloatMetric); ok {
return m.CurrentValue()
}
return nil
}


@@ -39,8 +39,3 @@ func NewGauge(id string, labels map[string]string, fn func() float64, opts *Opti
return m, nil
}
// CurrentValue returns the current gauge value.
func (g *Gauge) CurrentValue() float64 {
return g.Get()
}


@@ -16,7 +16,7 @@ import (
const hostStatTTL = 1 * time.Second
func registerHostMetrics() (err error) {
func registeHostMetrics() (err error) {
// Register load average metrics.
_, err = NewGauge("host/load/avg/1", nil, getFloat64HostStat(LoadAvg1), &Options{Name: "Host Load Avg 1min", Permission: api.PermitUser})
if err != nil {
@@ -127,7 +127,7 @@ func LoadAvg5() (loadAvg float64, ok bool) {
return 0, false
}
// LoadAvg15 returns the 15-minute average system load.
// LoadAvg15 returns the 5-minute average system load.
func LoadAvg15() (loadAvg float64, ok bool) {
if stat := getLoadAvg(); stat != nil {
return stat.Load15 / float64(runtime.NumCPU()), true


@@ -3,33 +3,29 @@ package metrics
import (
"runtime"
"strings"
"sync/atomic"
"github.com/safing/portbase/info"
)
var reportedStart atomic.Bool
func registerInfoMetric() error {
meta := info.GetInfo()
_, err := NewGauge(
"info",
map[string]string{
"version": checkUnknown(meta.Version),
"commit": checkUnknown(meta.Commit),
"build_date": checkUnknown(meta.BuildTime),
"build_source": checkUnknown(meta.Source),
"go_os": runtime.GOOS,
"go_arch": runtime.GOARCH,
"go_version": runtime.Version(),
"go_compiler": runtime.Compiler,
"comment": commentOption(),
"version": checkUnknown(meta.Version),
"commit": checkUnknown(meta.Commit),
"build_options": checkUnknown(meta.BuildOptions),
"build_user": checkUnknown(meta.BuildUser),
"build_host": checkUnknown(meta.BuildHost),
"build_date": checkUnknown(meta.BuildDate),
"build_source": checkUnknown(meta.BuildSource),
"go_os": runtime.GOOS,
"go_arch": runtime.GOARCH,
"go_version": runtime.Version(),
"go_compiler": runtime.Compiler,
"comment": commentOption(),
},
func() float64 {
// Report as 0 the first time in order to detect (re)starts.
if reportedStart.CompareAndSwap(false, true) {
return 0
}
return 1
},
nil,


@@ -5,7 +5,7 @@ import (
"github.com/safing/portbase/log"
)
func registerLogMetrics() (err error) {
func registeLogMetrics() (err error) {
_, err = NewFetchingCounter(
"logs/warning/total",
nil,


@@ -58,11 +58,11 @@ func start() error {
return err
}
if err := registerHostMetrics(); err != nil {
if err := registeHostMetrics(); err != nil {
return err
}
if err := registerLogMetrics(); err != nil {
if err := registeLogMetrics(); err != nil {
return err
}
@@ -78,17 +78,6 @@ }
}
func stop() error {
// Wait until the metrics pusher is done, as it may have started reporting
// and may report a higher number than we store to disk. For persistent
// metrics it can then happen that the first report is lower than the
// previous report, making prometheus think that all that happened since the
// last report, due to the automatic restart detection.
// The registry is read locked when writing metrics.
// Write lock the registry to make sure all writes are finished.
registryLock.Lock()
registryLock.Unlock() //nolint:staticcheck
storePersistentMetrics()
return nil
@@ -103,10 +92,6 @@ func register(m Metric) error {
if m.LabeledID() == registeredMetric.LabeledID() {
return ErrAlreadyRegistered
}
if m.Opts().InternalID != "" &&
m.Opts().InternalID == registeredMetric.Opts().InternalID {
return fmt.Errorf("%w with this internal ID", ErrAlreadyRegistered)
}
}
// Add new metric to registry and sort it.
@@ -116,10 +101,6 @@ func register(m Metric) error {
// Set flag that first metric is now registered.
firstMetricRegistered = true
if module.Status() < modules.StatusStarting {
return fmt.Errorf("registering metric %q too early", m.ID())
}
return nil
}


@@ -25,7 +25,7 @@ var (
})
// ErrAlreadyInitialized is returned when trying to initialize an option
// more than once or if the time window for initializing is over.
// more than once.
ErrAlreadyInitialized = errors.New("already initialized")
)
@@ -55,7 +55,7 @@ func EnableMetricPersistence(key string) error {
// Load metrics from storage.
var err error
storage, err = getMetricsStorage(storageKey)
storage, err = getMetricsStorage(key)
switch {
case err == nil:
// Continue.


@@ -104,7 +104,7 @@ func (m *Module) InjectEvent(sourceEventName, targetModuleName, targetEventName
func (m *Module) runEventHook(hook *eventHook, event string, data interface{}) {
// check if source module is ready for handling
if m.Status() != StatusOnline {
// source module has not yet fully started, wait until start is complete
// target module has not yet fully started, wait until start is complete
select {
case <-m.StartCompleted():
// continue with hook execution


@@ -1,117 +0,0 @@
package modules
import "sync/atomic"
// Status holds an exported status summary of the modules system.
type Status struct {
Modules map[string]*ModuleStatus
Total struct {
Workers int
Tasks int
MicroTasks int
CtrlFuncRunning int
}
Config struct {
MicroTasksThreshhold int
MediumPriorityDelay string
LowPriorityDelay string
}
}
// ModuleStatus holds an exported status summary of one module.
type ModuleStatus struct { //nolint:maligned
Enabled bool
Status string
FailureType string
FailureID string
FailureMsg string
Workers int
Tasks int
MicroTasks int
CtrlFuncRunning bool
}
// GetStatus exports status data from the module system.
func GetStatus() *Status {
// Check if modules have been initialized.
if modulesLocked.IsNotSet() {
return nil
}
// Create new status.
status := &Status{
Modules: make(map[string]*ModuleStatus, len(modules)),
}
// Add config.
status.Config.MicroTasksThreshhold = int(atomic.LoadInt32(microTasksThreshhold))
status.Config.MediumPriorityDelay = defaultMediumPriorityMaxDelay.String()
status.Config.LowPriorityDelay = defaultLowPriorityMaxDelay.String()
// Gather status data.
for name, module := range modules {
moduleStatus := &ModuleStatus{
Enabled: module.Enabled(),
Status: getStatusName(module.Status()),
Workers: int(atomic.LoadInt32(module.workerCnt)),
Tasks: int(atomic.LoadInt32(module.taskCnt)),
MicroTasks: int(atomic.LoadInt32(module.microTaskCnt)),
CtrlFuncRunning: module.ctrlFuncRunning.IsSet(),
}
// Add failure status.
failureStatus, failureID, failureMsg := module.FailureStatus()
moduleStatus.FailureType = getFailureStatusName(failureStatus)
moduleStatus.FailureID = failureID
moduleStatus.FailureMsg = failureMsg
// Add to total counts.
status.Total.Workers += moduleStatus.Workers
status.Total.Tasks += moduleStatus.Tasks
status.Total.MicroTasks += moduleStatus.MicroTasks
if moduleStatus.CtrlFuncRunning {
status.Total.CtrlFuncRunning++
}
// Add to export.
status.Modules[name] = moduleStatus
}
return status
}
func getStatusName(status uint8) string {
switch status {
case StatusDead:
return "dead"
case StatusPreparing:
return "preparing"
case StatusOffline:
return "offline"
case StatusStopping:
return "stopping"
case StatusStarting:
return "starting"
case StatusOnline:
return "online"
default:
return "unknown"
}
}
func getFailureStatusName(status uint8) string {
switch status {
case FailureNone:
return ""
case FailureHint:
return "hint"
case FailureWarning:
return "warning"
case FailureError:
return "error"
default:
return "unknown"
}
}


@@ -18,9 +18,7 @@ func init() {
func parseFlags() error {
// parse flags
if !flag.Parsed() {
flag.Parse()
}
flag.Parse()
if HelpFlag {
flag.Usage()


@@ -55,13 +55,14 @@ func (m *Module) EnabledAsDependency() bool {
//
// Example:
//
// EnableModuleManagement(func(m *modules.Module) {
// // some module has changed ...
// // do what ever you like
// EnableModuleManagement(func(m *modules.Module) {
// // some module has changed ...
// // do what ever you like
//
// // Run the built-in module management
// modules.ManageModules()
// })
//
// // Run the built-in module management
// modules.ManageModules()
// })
func EnableModuleManagement(changeNotifyFn func(*Module)) bool {
if moduleMgmtEnabled.SetToIf(false, true) {
modulesChangeNotifyFn = changeNotifyFn


@@ -30,8 +30,7 @@ const (
func init() {
var microTasksVal int32
microTasks = &microTasksVal
microTasksThreshholdVal := int32(runtime.GOMAXPROCS(0) * 2)
var microTasksThreshholdVal int32
microTasksThreshhold = &microTasksThreshholdVal
}


@@ -20,8 +20,6 @@ var (
// modulesLocked locks `modules` during starting.
modulesLocked = abool.New()
sleepMode = abool.NewBool(false)
moduleStartTimeout = 2 * time.Minute
moduleStopTimeout = 1 * time.Minute
@@ -30,7 +28,7 @@ )
)
// Module represents a module.
type Module struct { //nolint:maligned
type Module struct {
sync.RWMutex
Name string
@@ -39,8 +37,6 @@ type Module struct { //nolint:maligned
enabled *abool.AtomicBool
enabledAsDependency *abool.AtomicBool
status uint8
sleepMode *abool.AtomicBool
sleepWaitingChannel chan time.Time
// failure status
failureStatus uint8
@@ -57,11 +53,10 @@ type Module struct { //nolint:maligned
// start
startComplete chan struct{}
// stop
Ctx context.Context
cancelCtx func()
stopFlag *abool.AtomicBool
stopCompleted *abool.AtomicBool
stopComplete chan struct{}
Ctx context.Context
cancelCtx func()
stopFlag *abool.AtomicBool
stopComplete chan struct{}
// workers/tasks
ctrlFuncRunning *abool.AtomicBool
@@ -105,43 +100,6 @@ func (m *Module) Dependencies() []*Module {
return m.depModules
}
// Sleep enables or disables sleep mode.
func (m *Module) Sleep(enable bool) {
set := m.sleepMode.SetToIf(!enable, enable)
if !set {
return
}
m.Lock()
defer m.Unlock()
if enable {
m.sleepWaitingChannel = make(chan time.Time)
} else {
// Notify all waiting tasks that we are not sleeping anymore.
close(m.sleepWaitingChannel)
}
}
// IsSleeping returns true if sleep mode is enabled.
func (m *Module) IsSleeping() bool {
return m.sleepMode.IsSet()
}
// WaitIfSleeping returns channel that will signal when it exits sleep mode.
// The channel will always return a zero-value time.Time.
// It uses time.Time to be easier dropped in to replace a time.Ticker.
func (m *Module) WaitIfSleeping() <-chan time.Time {
m.RLock()
defer m.RUnlock()
return m.sleepWaitingChannel
}
// NewSleepyTicker returns new sleepyTicker that will respect the modules sleep mode.
func (m *Module) NewSleepyTicker(normalDuration, sleepDuration time.Duration) *SleepyTicker {
return newSleepyTicker(m, normalDuration, sleepDuration)
}
func (m *Module) prep(reports chan *report) {
// check and set intermediate status
m.Lock()
@@ -256,10 +214,12 @@ func (m *Module) checkIfStopComplete() {
atomic.LoadInt32(m.taskCnt) == 0 &&
atomic.LoadInt32(m.microTaskCnt) == 0 {
if m.stopCompleted.SetToIf(false, true) {
m.Lock()
defer m.Unlock()
m.Lock()
defer m.Unlock()
if m.stopComplete != nil {
close(m.stopComplete)
m.stopComplete = nil
}
}
}
@@ -282,56 +242,60 @@ func (m *Module) stop(reports chan *report) {
// Reset start/stop signal channels.
m.startComplete = make(chan struct{})
m.stopComplete = make(chan struct{})
m.stopCompleted.SetTo(false)
// Set status.
// Make a copy of the stop channel.
stopComplete := m.stopComplete
// Set status and cancel context.
m.status = StatusStopping
go m.stopAllTasks(reports)
}
func (m *Module) stopAllTasks(reports chan *report) {
// Manually set the control function flag in order to stop completion by race
// condition before stop function has even started.
m.ctrlFuncRunning.Set()
// Set stop flag for everyone checking this flag before we activate any stop trigger.
m.stopFlag.Set()
// Cancel the context to notify all workers and tasks.
m.cancelCtx()
// Start stop function.
stopFnError := m.startCtrlFn("stop module", m.stopFn)
go m.stopAllTasks(reports, stopComplete)
}
func (m *Module) stopAllTasks(reports chan *report, stopComplete chan struct{}) {
// start shutdown function
var stopFnError error
stopFuncRunning := abool.New()
if m.stopFn != nil {
stopFuncRunning.Set()
go func() {
stopFnError = m.runCtrlFn("stop module", m.stopFn)
stopFuncRunning.UnSet()
m.checkIfStopComplete()
}()
} else {
m.checkIfStopComplete()
}
// wait for results
select {
case <-m.stopComplete:
// Complete!
case <-stopComplete:
// case <-time.After(moduleStopTimeout):
case <-time.After(moduleStopTimeout):
log.Warningf(
"%s: timed out while waiting for stopfn/workers/tasks to finish: stopFn=%v workers=%d tasks=%d microtasks=%d, continuing shutdown...",
"%s: timed out while waiting for stopfn/workers/tasks to finish: stopFn=%v/%v workers=%d tasks=%d microtasks=%d, continuing shutdown...",
m.Name,
m.ctrlFuncRunning.IsSet(),
stopFuncRunning.IsSet(), m.ctrlFuncRunning.IsSet(),
atomic.LoadInt32(m.workerCnt),
atomic.LoadInt32(m.taskCnt),
atomic.LoadInt32(m.microTaskCnt),
)
}
// Check for stop fn status.
// collect error
var err error
select {
case err = <-stopFnError:
if err != nil {
// Set error as module error.
m.Error(
fmt.Sprintf("%s:stop-failed", m.Name),
fmt.Sprintf("Stopping module %s failed", m.Name),
fmt.Sprintf("Failed to stop module: %s", err.Error()),
)
}
default:
if stopFuncRunning.IsNotSet() && stopFnError != nil {
err = stopFnError
}
// set status
if err != nil {
m.Error(
fmt.Sprintf("%s:stop-failed", m.Name),
fmt.Sprintf("Stopping module %s failed", m.Name),
fmt.Sprintf("Failed to stop module: %s", err.Error()),
)
}
// Always set to offline in order to let other modules shutdown in order.
@@ -340,9 +304,6 @@ func (m *Module) stopAllTasks(reports chan *report) {
m.Unlock()
m.notifyOfChange()
// Resolve any errors still on the module.
m.Resolve("")
// send report
reports <- &report{
module: m,
@@ -379,8 +340,6 @@ func initNewModule(name string, prep, start, stop func() error, dependencies ...
Name: name,
enabled: abool.NewBool(false),
enabledAsDependency: abool.NewBool(false),
sleepMode: abool.NewBool(true), // Change (for init) is triggered below.
sleepWaitingChannel: make(chan time.Time),
prepFn: prep,
startFn: start,
stopFn: stop,
@@ -388,7 +347,6 @@ func initNewModule(name string, prep, start, stop func() error, dependencies ...
Ctx: ctx,
cancelCtx: cancelCtx,
stopFlag: abool.NewBool(false),
stopCompleted: abool.NewBool(true),
ctrlFuncRunning: abool.NewBool(false),
workerCnt: &workerCnt,
taskCnt: &taskCnt,
@@ -397,9 +355,6 @@ func initNewModule(name string, prep, start, stop func() error, dependencies ...
depNames: dependencies,
}
// Sleep mode is disabled by default.
newModule.Sleep(false)
return newModule
}
@@ -422,21 +377,3 @@ func initDependencies() error {
return nil
}
// SetSleepMode enables or disables sleep mode for all the modules.
func SetSleepMode(enabled bool) {
// Update all modules
for _, m := range modules {
m.Sleep(enabled)
}
// Check if differs with the old state.
set := sleepMode.SetToIf(!enabled, enabled)
if set {
// Send signal to the task schedular.
select {
case notifyTaskScheduler <- struct{}{}:
default:
}
}
}


@@ -1,59 +0,0 @@
package modules
import "time"
// SleepyTicker is wrapper over time.Ticker that respects the sleep mode of the module.
type SleepyTicker struct {
ticker time.Ticker
module *Module
normalDuration time.Duration
sleepDuration time.Duration
sleepMode bool
}
// newSleepyTicker returns a new SleepyTicker. This is a wrapper of the standard time.Ticker but it respects modules.Module sleep mode. Check https://pkg.go.dev/time#Ticker.
// If sleepDuration is set to 0 ticker will not tick during sleep.
func newSleepyTicker(module *Module, normalDuration time.Duration, sleepDuration time.Duration) *SleepyTicker {
st := &SleepyTicker{
ticker: *time.NewTicker(normalDuration),
module: module,
normalDuration: normalDuration,
sleepDuration: sleepDuration,
sleepMode: false,
}
return st
}
// Wait waits until the module is not in sleep mode and returns time.Ticker.C channel.
func (st *SleepyTicker) Wait() <-chan time.Time {
sleepModeEnabled := st.module.sleepMode.IsSet()
// Update Sleep mode
if sleepModeEnabled != st.sleepMode {
st.enterSleepMode(sleepModeEnabled)
}
// Wait if until sleep mode exits only if sleepDuration is set to 0.
if sleepModeEnabled && st.sleepDuration == 0 {
return st.module.WaitIfSleeping()
}
return st.ticker.C
}
// Stop turns off a ticker. After Stop, no more ticks will be sent. Stop does not close the channel, to prevent a concurrent goroutine reading from the channel from seeing an erroneous "tick".
func (st *SleepyTicker) Stop() {
st.ticker.Stop()
}
func (st *SleepyTicker) enterSleepMode(enabled bool) {
st.sleepMode = enabled
if enabled {
if st.sleepDuration > 0 {
st.ticker.Reset(st.sleepDuration)
}
} else {
st.ticker.Reset(st.normalDuration)
}
}


@@ -4,6 +4,7 @@ import (
"errors"
"fmt"
"os"
"runtime"
"github.com/tevino/abool"
@@ -24,11 +25,6 @@ func SetGlobalPrepFn(fn func() error) {
}
}
// IsStarting returns whether the initial global start is still in progress.
func IsStarting() bool {
return !initialStartCompleted.IsSet()
}
// Start starts all modules in the correct order. In case of an error, it will automatically shutdown again.
func Start() error {
if !modulesLocked.SetToIf(false, true) {
@@ -40,6 +36,7 @@ func Start() error {
defer mgmtLock.Unlock()
// start microtask scheduler
SetMaxConcurrentMicroTasks(runtime.GOMAXPROCS(0))
go microTaskScheduler()
// inter-link modules

View file

@@ -64,7 +64,7 @@ func prep() error {
}
// We need to listen for configuration changes so we can
// start/stop depended modules in case a subsystem is
// start/stop dependend modules in case a subsystem is
// (de-)activated.
if err := module.RegisterEventHook(
"config",


@@ -125,11 +125,11 @@ func (mng *Manager) Get(keyOrPrefix string) ([]record.Record, error) {
// you. Pass a nil option to force enable.
//
// TODO(ppacher): IMHO the subsystem package is not responsible of registering
// the "toggle option". This would also remove runtime
// dependency to the config package. Users should either pass
// the BoolOptionFunc and the expertise/release level directly
// or just pass the configuration key so those information can
// be looked up by the registry.
// the "toggle option". This would also remove runtime
// dependency to the config package. Users should either pass
// the BoolOptionFunc and the expertise/release level directly
// or just pass the configuration key so those information can
// be looked up by the registry.
func (mng *Manager) Register(id, name, description string, module *modules.Module, configKeySpace string, option *config.Option) error {
mng.l.Lock()
defer mng.l.Unlock()


@@ -1,6 +1,7 @@
package subsystems
import (
"io/ioutil"
"os"
"testing"
"time"
@ -13,7 +14,7 @@ import (
func TestSubsystems(t *testing.T) { //nolint:paralleltest // Too much interference expected.
// tmp dir for data root (db & config)
tmpDir, err := os.MkdirTemp("", "portbase-testing-")
tmpDir, err := ioutil.TempDir("", "portbase-testing-")
// initialize data dir
if err == nil {
err = dataroot.Initialize(tmpDir, 0o0755)


@ -51,9 +51,9 @@ var (
waitForever chan time.Time
queueIsFilled = make(chan struct{}, 1) // kick off queue handler
notifyTaskScheduler = make(chan struct{}, 1)
taskTimeslot = make(chan struct{})
queueIsFilled = make(chan struct{}, 1) // kick off queue handler
recalculateNextScheduledTask = make(chan struct{}, 1)
taskTimeslot = make(chan struct{})
)
const (
@ -410,7 +410,7 @@ func (t *Task) addToSchedule(overtime bool) {
// notify scheduler
defer func() {
select {
case notifyTaskScheduler <- struct{}{}:
case recalculateNextScheduledTask <- struct{}{}:
default:
}
}()
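The `select` with an empty `default` above is a non-blocking send into a 1-buffered channel: if a wake-up is already pending, the new one is coalesced instead of blocking the sender. Isolated as a sketch:

```go
package main

import "fmt"

// notify performs a non-blocking send: if a wake-up is already pending
// in the 1-buffered channel, the new notification is dropped (coalesced).
func notify(ch chan struct{}) bool {
	select {
	case ch <- struct{}{}:
		return true // notification queued
	default:
		return false // one already pending; nothing to do
	}
}

func main() {
	recalc := make(chan struct{}, 1)
	fmt.Println(notify(recalc)) // true: first send succeeds
	fmt.Println(notify(recalc)) // false: second is coalesced
	<-recalc                    // the scheduler consumes one wake-up
}
```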
@ -515,21 +515,10 @@ func taskScheduleHandler() {
}
for {
if sleepMode.IsSet() {
select {
case <-shutdownSignal:
return
case <-notifyTaskScheduler:
continue
}
}
select {
case <-shutdownSignal:
return
case <-notifyTaskScheduler:
continue
case <-recalculateNextScheduledTask:
case <-waitUntilNextScheduledTask():
scheduleLock.Lock()


@ -53,7 +53,6 @@ func (m *Module) RunWorker(name string, fn func(context.Context) error) error {
}
// StartServiceWorker starts a generic worker, which is automatically restarted in case of an error. A call to StartServiceWorker runs the service-worker in a new goroutine and returns immediately. `backoffDuration` specifies how to long to wait before restarts, multiplied by the number of failed attempts. Pass `0` for the default backoff duration. For custom error remediation functionality, build your own error handling procedure using calls to RunWorker.
// Returning nil error or context.Canceled will stop the service worker.
func (m *Module) StartServiceWorker(name string, backoffDuration time.Duration, fn func(context.Context) error) {
if m == nil {
log.Errorf(`modules: cannot start service worker "%s" with nil module`, name)
@ -82,36 +81,34 @@ func (m *Module) runServiceWorker(name string, backoffDuration time.Duration, fn
}
err := m.runWorker(name, fn)
switch {
case err == nil:
// No error means that the worker is finished.
return
case errors.Is(err, context.Canceled):
// A canceled context also means that the worker is finished.
return
case errors.Is(err, ErrRestartNow):
// Worker requested a restart - silently continue with loop.
default:
// Any other errors triggers a restart with backoff.
// Reset fail counter if running without error for some time.
if time.Now().Add(-5 * time.Minute).After(lastFail) {
failCnt = 0
}
// Increase fail counter and set last failed time.
failCnt++
lastFail = time.Now()
// Log error and back off for some time.
sleepFor := time.Duration(failCnt) * backoffDuration
log.Errorf("%s: service-worker %s failed (%d): %s - restarting in %s", m.Name, name, failCnt, err, sleepFor)
select {
case <-time.After(sleepFor):
case <-m.Ctx.Done():
return
if err != nil {
if !errors.Is(err, ErrRestartNow) {
// reset fail counter if running without error for some time
if time.Now().Add(-5 * time.Minute).After(lastFail) {
failCnt = 0
}
// increase fail counter and set last failed time
failCnt++
lastFail = time.Now()
// log error
sleepFor := time.Duration(failCnt) * backoffDuration
if errors.Is(err, context.Canceled) {
log.Debugf("%s: service-worker %s was canceled (%d): %s - restarting in %s", m.Name, name, failCnt, err, sleepFor)
} else {
log.Errorf("%s: service-worker %s failed (%d): %s - restarting in %s", m.Name, name, failCnt, err, sleepFor)
}
select {
case <-time.After(sleepFor):
case <-m.Ctx.Done():
return
}
// loop to restart
} else {
log.Infof("%s: service-worker %s %s - restarting now", m.Name, name, err)
}
} else {
// finish
return
}
}
}
@ -128,14 +125,15 @@ func (m *Module) runWorker(name string, fn func(context.Context) error) (err err
}()
// run
// TODO: get cancel func for worker context and cancel when worker is done.
// This ensure that when the worker passes its context to another (async) function, it will also be shutdown when the worker finished or dies.
err = fn(m.Ctx)
return
}
func (m *Module) runCtrlFnWithTimeout(name string, timeout time.Duration, fn func() error) error {
stopFnError := m.startCtrlFn(name, fn)
stopFnError := make(chan error)
go func() {
stopFnError <- m.runCtrlFn(name, fn)
}()
// wait for results
select {
@ -146,44 +144,26 @@ func (m *Module) runCtrlFnWithTimeout(name string, timeout time.Duration, fn fun
}
}
func (m *Module) startCtrlFn(name string, fn func() error) chan error {
ctrlFnError := make(chan error, 1)
// If no function is given, still act as if it was run.
func (m *Module) runCtrlFn(name string, fn func() error) (err error) {
if fn == nil {
// Signal finish.
m.ctrlFuncRunning.UnSet()
m.checkIfStopComplete()
// Report nil error and return.
ctrlFnError <- nil
return ctrlFnError
return
}
// Signal that a control function is running.
m.ctrlFuncRunning.Set()
if m.ctrlFuncRunning.SetToIf(false, true) {
defer m.ctrlFuncRunning.SetToIf(true, false)
}
// Start control function in goroutine.
go func() {
// Recover from panic and reset control function signal.
defer func() {
// recover from panic
panicVal := recover()
if panicVal != nil {
me := m.NewPanicError(name, "module-control", panicVal)
me.Report()
ctrlFnError <- fmt.Errorf("panic: %s", panicVal)
}
// Signal finish.
m.ctrlFuncRunning.UnSet()
m.checkIfStopComplete()
}()
// Run control function and report error.
err := fn()
ctrlFnError <- err
defer func() {
// recover from panic
panicVal := recover()
if panicVal != nil {
me := m.NewPanicError(name, "module-control", panicVal)
me.Report()
err = me
}
}()
return ctrlFnError
// run
err = fn()
return
}
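Both variants of the control-function runner above convert a panic into a returned error by combining a deferred closure, `recover`, and a named return value. The core of that pattern, stripped of the module bookkeeping:

```go
package main

import "fmt"

// safeCall runs fn and converts a panic into a returned error, using
// the same recover-in-deferred-closure idiom as runCtrlFn above.
func safeCall(fn func()) (err error) {
	defer func() {
		if panicVal := recover(); panicVal != nil {
			// Overwrite the named return value from inside the deferral.
			err = fmt.Errorf("panic: %v", panicVal)
		}
	}()
	fn()
	return nil
}

func main() {
	fmt.Println(safeCall(func() {}))                // <nil>
	fmt.Println(safeCall(func() { panic("boom") })) // panic: boom
}
```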


@ -6,14 +6,14 @@ import (
)
func cleaner(ctx context.Context) error { //nolint:unparam // Conforms to worker interface
ticker := module.NewSleepyTicker(1*time.Second, 0)
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop()
for {
select {
case <-ctx.Done():
return nil
case <-ticker.Wait():
case <-ticker.C:
deleteExpiredNotifs()
}
}


@ -1,7 +1,7 @@
/*
Package notifications provides a notification system.
# Notification Lifecycle
Notification Lifecycle
1. Create Notification with an ID and Message.
2. Set possible actions and save it.
@ -9,18 +9,19 @@ Package notifications provides a notification system.
Example
// create notification
n := notifications.New("update-available", "A new update is available. Restart to upgrade.")
// set actions and save
n.AddAction("later", "Later").AddAction("restart", "Restart now!").Save()
// create notification
n := notifications.New("update-available", "A new update is available. Restart to upgrade.")
// set actions and save
n.AddAction("later", "Later").AddAction("restart", "Restart now!").Save()
// wait for user action
selectedAction := <-n.Response()
switch selectedAction {
case "later":
log.Infof("user wants to upgrade later.")
case "restart":
log.Infof("user wants to restart now.")
}
// wait for user action
selectedAction := <-n.Response()
switch selectedAction {
case "later":
log.Infof("user wants to upgrade later.")
case "restart":
log.Infof("user wants to restart now.")
}
*/
package notifications


@ -393,17 +393,6 @@ func (n *Notification) Update(expires int64) {
// Delete (prematurely) cancels and deletes a notification.
func (n *Notification) Delete() {
// Dismiss notification.
func() {
n.lock.Lock()
defer n.lock.Unlock()
if n.actionTrigger != nil {
close(n.actionTrigger)
n.actionTrigger = nil
}
}()
n.delete(true)
}


@ -10,7 +10,7 @@ import (
func main() {
// Set Info
info.Set("Portbase", "0.0.1", "GPLv3")
info.Set("Portbase", "0.0.1", "GPLv3", false)
// Run
os.Exit(run.Run())


@ -44,7 +44,7 @@ func (f *Feeder) NeedsEntropy() bool {
return f.needsEntropy.IsSet()
}
// SupplyEntropy supplies entropy to the Feeder, it will block until the Feeder has read from it.
// SupplyEntropy supplies entropy to to the Feeder, it will block until the Feeder has read from it.
func (f *Feeder) SupplyEntropy(data []byte, entropy int) {
f.input <- &entropyData{
data: data,
@ -52,7 +52,7 @@ func (f *Feeder) SupplyEntropy(data []byte, entropy int) {
}
}
// SupplyEntropyIfNeeded supplies entropy to the Feeder, but will not block if no entropy is currently needed.
// SupplyEntropyIfNeeded supplies entropy to to the Feeder, but will not block if no entropy is currently needed.
func (f *Feeder) SupplyEntropyIfNeeded(data []byte, entropy int) {
if f.needsEntropy.IsSet() {
return
@ -67,14 +67,14 @@ func (f *Feeder) SupplyEntropyIfNeeded(data []byte, entropy int) {
}
}
// SupplyEntropyAsInt supplies entropy to the Feeder, it will block until the Feeder has read from it.
// SupplyEntropyAsInt supplies entropy to to the Feeder, it will block until the Feeder has read from it.
func (f *Feeder) SupplyEntropyAsInt(n int64, entropy int) {
b := make([]byte, 8)
binary.LittleEndian.PutUint64(b, uint64(n))
f.SupplyEntropy(b, entropy)
}
// SupplyEntropyAsIntIfNeeded supplies entropy to the Feeder, but will not block if no entropy is currently needed.
// SupplyEntropyAsIntIfNeeded supplies entropy to to the Feeder, but will not block if no entropy is currently needed.
func (f *Feeder) SupplyEntropyAsIntIfNeeded(n int64, entropy int) {
if f.needsEntropy.IsSet() { // avoid allocating a slice if possible
b := make([]byte, 8)


@ -15,16 +15,17 @@ type singleRecordReader struct {
//
// Example:
//
// type MyValue struct {
// record.Base
// Value string
// }
// r := new(MyValue)
// pushUpdate, _ := runtime.Register("my/key", ProvideRecord(r))
// r.Lock()
// r.Value = "foobar"
// pushUpdate(r)
// r.Unlock()
// type MyValue struct {
// record.Base
// Value string
// }
// r := new(MyValue)
// pushUpdate, _ := runtime.Register("my/key", ProvideRecord(r))
// r.Lock()
// r.Value = "foobar"
// pushUpdate(r)
// r.Unlock()
//
func ProvideRecord(r record.Record) ValueProvider {
return &singleRecordReader{r}
}


@ -2,6 +2,7 @@ package template
import (
"fmt"
"io/ioutil"
"os"
"testing"
@ -18,7 +19,7 @@ func TestMain(m *testing.M) {
module.Enable()
// tmp dir for data root (db & config)
tmpDir, err := os.MkdirTemp("", "portbase-testing-")
tmpDir, err := ioutil.TempDir("", "portbase-testing-")
if err != nil {
fmt.Fprintf(os.Stderr, "failed to create tmp dir: %s\n", err)
os.Exit(1)


@ -3,9 +3,7 @@ package updater
import (
"bytes"
"context"
"errors"
"fmt"
"hash"
"io"
"net/http"
"net/url"
@ -14,8 +12,6 @@ import (
"path/filepath"
"time"
"github.com/safing/jess/filesig"
"github.com/safing/jess/lhash"
"github.com/safing/portbase/log"
"github.com/safing/portbase/utils/renameio"
)
@ -37,31 +33,6 @@ func (reg *ResourceRegistry) fetchFile(ctx context.Context, client *http.Client,
return fmt.Errorf("could not create updates folder: %s", dirPath)
}
// If verification is enabled, download signature first.
var (
verifiedHash *lhash.LabeledHash
sigFileData []byte
)
if rv.resource.VerificationOptions != nil {
verifiedHash, sigFileData, err = reg.fetchAndVerifySigFile(
ctx, client,
rv.resource.VerificationOptions,
rv.versionedSigPath(), rv.SigningMetadata(),
tries,
)
if err != nil {
switch rv.resource.VerificationOptions.DownloadPolicy {
case SignaturePolicyRequire:
return fmt.Errorf("signature verification failed: %w", err)
case SignaturePolicyWarn:
log.Warningf("%s: failed to verify downloaded signature of %s: %s", reg.Name, rv.versionedPath(), err)
case SignaturePolicyDisable:
log.Debugf("%s: failed to verify downloaded signature of %s: %s", reg.Name, rv.versionedPath(), err)
}
}
}
// open file for writing
atomicFile, err := renameio.TempFile(reg.tmpDir.Path, rv.storagePath())
if err != nil {
@ -78,16 +49,8 @@ func (reg *ResourceRegistry) fetchFile(ctx context.Context, client *http.Client,
_ = resp.Body.Close()
}()
// Write to the hasher at the same time, if needed.
var hasher hash.Hash
var writeDst io.Writer = atomicFile
if verifiedHash != nil {
hasher = verifiedHash.Algorithm().RawHasher()
writeDst = io.MultiWriter(hasher, atomicFile)
}
// Download and write file.
n, err := io.Copy(writeDst, resp.Body)
// download and write file
n, err := io.Copy(atomicFile, resp.Body)
if err != nil {
return fmt.Errorf("failed to download %q: %w", downloadURL, err)
}
@ -95,42 +58,6 @@ func (reg *ResourceRegistry) fetchFile(ctx context.Context, client *http.Client,
return fmt.Errorf("failed to finish download of %q: written %d out of %d bytes", downloadURL, n, resp.ContentLength)
}
// Before file is finalized, check if hash, if available.
if hasher != nil {
downloadDigest := hasher.Sum(nil)
if verifiedHash.EqualRaw(downloadDigest) {
log.Infof("%s: verified signature of %s", reg.Name, downloadURL)
} else {
switch rv.resource.VerificationOptions.DownloadPolicy {
case SignaturePolicyRequire:
return errors.New("file does not match signed checksum")
case SignaturePolicyWarn:
log.Warningf("%s: checksum does not match file from %s", reg.Name, downloadURL)
case SignaturePolicyDisable:
log.Debugf("%s: checksum does not match file from %s", reg.Name, downloadURL)
}
// Reset hasher to signal that the sig should not be written.
hasher = nil
}
}
// Write signature file, if we have one and if verification succeeded.
if len(sigFileData) > 0 && hasher != nil {
sigFilePath := rv.storagePath() + filesig.Extension
err := os.WriteFile(sigFilePath, sigFileData, 0o0644) //nolint:gosec
if err != nil {
switch rv.resource.VerificationOptions.DownloadPolicy {
case SignaturePolicyRequire:
return fmt.Errorf("failed to write signature file %s: %w", sigFilePath, err)
case SignaturePolicyWarn:
log.Warningf("%s: failed to write signature file %s: %s", reg.Name, sigFilePath, err)
case SignaturePolicyDisable:
log.Debugf("%s: failed to write signature file %s: %s", reg.Name, sigFilePath, err)
}
}
}
// finalize file
err = atomicFile.CloseAtomicallyReplace()
if err != nil {
@ -145,144 +72,16 @@ func (reg *ResourceRegistry) fetchFile(ctx context.Context, client *http.Client,
}
}
log.Debugf("%s: fetched %s and stored to %s", reg.Name, downloadURL, rv.storagePath())
log.Infof("%s: fetched %s (stored to %s)", reg.Name, downloadURL, rv.storagePath())
return nil
}
func (reg *ResourceRegistry) fetchMissingSig(ctx context.Context, client *http.Client, rv *ResourceVersion, tries int) error {
func (reg *ResourceRegistry) fetchData(ctx context.Context, client *http.Client, downloadPath string, tries int) ([]byte, error) {
// backoff when retrying
if tries > 0 {
select {
case <-ctx.Done():
return nil // module is shutting down
case <-time.After(time.Duration(tries*tries) * time.Second):
}
}
// Check destination dir.
dirPath := filepath.Dir(rv.storagePath())
err := reg.storageDir.EnsureAbsPath(dirPath)
if err != nil {
return fmt.Errorf("could not create updates folder: %s", dirPath)
}
// Download and verify the missing signature.
verifiedHash, sigFileData, err := reg.fetchAndVerifySigFile(
ctx, client,
rv.resource.VerificationOptions,
rv.versionedSigPath(), rv.SigningMetadata(),
tries,
)
if err != nil {
switch rv.resource.VerificationOptions.DownloadPolicy {
case SignaturePolicyRequire:
return fmt.Errorf("signature verification failed: %w", err)
case SignaturePolicyWarn:
log.Warningf("%s: failed to verify downloaded signature of %s: %s", reg.Name, rv.versionedPath(), err)
case SignaturePolicyDisable:
log.Debugf("%s: failed to verify downloaded signature of %s: %s", reg.Name, rv.versionedPath(), err)
}
return nil
}
// Check if the signature matches the resource file.
ok, err := verifiedHash.MatchesFile(rv.storagePath())
if err != nil {
switch rv.resource.VerificationOptions.DownloadPolicy {
case SignaturePolicyRequire:
return fmt.Errorf("error while verifying resource file: %w", err)
case SignaturePolicyWarn:
log.Warningf("%s: error while verifying resource file %s", reg.Name, rv.storagePath())
case SignaturePolicyDisable:
log.Debugf("%s: error while verifying resource file %s", reg.Name, rv.storagePath())
}
return nil
}
if !ok {
switch rv.resource.VerificationOptions.DownloadPolicy {
case SignaturePolicyRequire:
return errors.New("resource file does not match signed checksum")
case SignaturePolicyWarn:
log.Warningf("%s: checksum does not match resource file from %s", reg.Name, rv.storagePath())
case SignaturePolicyDisable:
log.Debugf("%s: checksum does not match resource file from %s", reg.Name, rv.storagePath())
}
return nil
}
// Write signature file.
err = os.WriteFile(rv.storageSigPath(), sigFileData, 0o0644) //nolint:gosec
if err != nil {
switch rv.resource.VerificationOptions.DownloadPolicy {
case SignaturePolicyRequire:
return fmt.Errorf("failed to write signature file %s: %w", rv.storageSigPath(), err)
case SignaturePolicyWarn:
log.Warningf("%s: failed to write signature file %s: %s", reg.Name, rv.storageSigPath(), err)
case SignaturePolicyDisable:
log.Debugf("%s: failed to write signature file %s: %s", reg.Name, rv.storageSigPath(), err)
}
}
log.Debugf("%s: fetched %s and stored to %s", reg.Name, rv.versionedSigPath(), rv.storageSigPath())
return nil
}
func (reg *ResourceRegistry) fetchAndVerifySigFile(ctx context.Context, client *http.Client, verifOpts *VerificationOptions, sigFilePath string, requiredMetadata map[string]string, tries int) (*lhash.LabeledHash, []byte, error) {
// Download signature file.
resp, _, err := reg.makeRequest(ctx, client, sigFilePath, tries)
if err != nil {
return nil, nil, err
}
defer func() {
_ = resp.Body.Close()
}()
sigFileData, err := io.ReadAll(resp.Body)
if err != nil {
return nil, nil, err
}
// Extract all signatures.
sigs, err := filesig.ParseSigFile(sigFileData)
switch {
case len(sigs) == 0 && err != nil:
return nil, nil, fmt.Errorf("failed to parse signature file: %w", err)
case len(sigs) == 0:
return nil, nil, errors.New("no signatures found in signature file")
case err != nil:
return nil, nil, fmt.Errorf("failed to parse signature file: %w", err)
}
// Verify all signatures.
var verifiedHash *lhash.LabeledHash
for _, sig := range sigs {
fd, err := filesig.VerifyFileData(
sig,
requiredMetadata,
verifOpts.TrustStore,
)
if err != nil {
return nil, sigFileData, err
}
// Save or check verified hash.
if verifiedHash == nil {
verifiedHash = fd.FileHash()
} else if !fd.FileHash().Equal(verifiedHash) {
// Return an error if two valid hashes mismatch.
// For simplicity, all hash algorithms must be the same for now.
return nil, sigFileData, errors.New("file hashes from different signatures do not match")
}
}
return verifiedHash, sigFileData, nil
}
func (reg *ResourceRegistry) fetchData(ctx context.Context, client *http.Client, downloadPath string, tries int) (fileData []byte, downloadedFrom string, err error) {
// backoff when retrying
if tries > 0 {
select {
case <-ctx.Done():
return nil, "", nil // module is shutting down
return nil, nil // module is shutting down
case <-time.After(time.Duration(tries*tries) * time.Second):
}
}
@ -290,7 +89,7 @@ func (reg *ResourceRegistry) fetchData(ctx context.Context, client *http.Client,
// start file download
resp, downloadURL, err := reg.makeRequest(ctx, client, downloadPath, tries)
if err != nil {
return nil, downloadURL, err
return nil, err
}
defer func() {
_ = resp.Body.Close()
@ -300,13 +99,13 @@ func (reg *ResourceRegistry) fetchData(ctx context.Context, client *http.Client,
buf := bytes.NewBuffer(make([]byte, 0, resp.ContentLength))
n, err := io.Copy(buf, resp.Body)
if err != nil {
return nil, downloadURL, fmt.Errorf("failed to download %q: %w", downloadURL, err)
return nil, fmt.Errorf("failed to download %q: %w", downloadURL, err)
}
if resp.ContentLength != n {
return nil, downloadURL, fmt.Errorf("failed to finish download of %q: written %d out of %d bytes", downloadURL, n, resp.ContentLength)
return nil, fmt.Errorf("failed to finish download of %q: written %d out of %d bytes", downloadURL, n, resp.ContentLength)
}
return buf.Bytes(), downloadURL, nil
return buf.Bytes(), nil
}
func (reg *ResourceRegistry) makeRequest(ctx context.Context, client *http.Client, downloadPath string, tries int) (resp *http.Response, downloadURL string, err error) {
@ -322,7 +121,7 @@ func (reg *ResourceRegistry) makeRequest(ctx context.Context, client *http.Clien
downloadURL = u.String()
// create request
req, err := http.NewRequestWithContext(ctx, http.MethodGet, downloadURL, http.NoBody)
req, err := http.NewRequestWithContext(ctx, "GET", downloadURL, http.NoBody)
if err != nil {
return nil, "", fmt.Errorf("failed to create request for %q: %w", downloadURL, err)
}


@ -1,15 +1,12 @@
package updater
import (
"errors"
"io"
"io/fs"
"os"
"strings"
semver "github.com/hashicorp/go-version"
"github.com/safing/jess/filesig"
"github.com/safing/portbase/log"
"github.com/safing/portbase/utils"
)
@ -48,42 +45,6 @@ func (file *File) Path() string {
return file.storagePath
}
// SigningMetadata returns the metadata to be included in signatures.
func (file *File) SigningMetadata() map[string]string {
return map[string]string{
"id": file.Identifier(),
"version": file.Version(),
}
}
// Verify verifies the given file.
func (file *File) Verify() ([]*filesig.FileData, error) {
// Check if verification is configured.
if file.resource.VerificationOptions == nil {
return nil, ErrVerificationNotConfigured
}
// Verify file.
fileData, err := filesig.VerifyFile(
file.storagePath,
file.storagePath+filesig.Extension,
file.SigningMetadata(),
file.resource.VerificationOptions.TrustStore,
)
if err != nil {
switch file.resource.VerificationOptions.DiskLoadPolicy {
case SignaturePolicyRequire:
return nil, err
case SignaturePolicyWarn:
log.Warningf("%s: failed to verify %s: %s", file.resource.registry.Name, file.storagePath, err)
case SignaturePolicyDisable:
log.Debugf("%s: failed to verify %s: %s", file.resource.registry.Name, file.storagePath, err)
}
}
return fileData, nil
}
// Blacklist notifies the update system that this file is somehow broken, and should be ignored from now on, until restarted.
func (file *File) Blacklist() error {
return file.resource.Blacklist(file.version.VersionNumber)
@ -123,7 +84,7 @@ func (file *File) Unpack(suffix string, unpacker Unpacker) (string, error) {
return path, nil
}
if !errors.Is(err, fs.ErrNotExist) {
if !os.IsNotExist(err) {
return "", err
}


@ -11,9 +11,8 @@ import (
// Errors returned by the updater package.
var (
ErrNotFound = errors.New("the requested file could not be found")
ErrNotAvailableLocally = errors.New("the requested file is not available locally")
ErrVerificationNotConfigured = errors.New("verification not configured for this resource")
ErrNotFound = errors.New("the requested file could not be found")
ErrNotAvailableLocally = errors.New("the requested file is not available locally")
)
// GetFile returns the selected (mostly newest) file with the given
@ -30,14 +29,6 @@ func (reg *ResourceRegistry) GetFile(identifier string) (*File, error) {
// check if file is available locally
if file.version.Available {
file.markActiveWithLocking()
// Verify file, if configured.
_, err := file.Verify()
if err != nil && !errors.Is(err, ErrVerificationNotConfigured) {
// TODO: If verification is required, try deleting the resource and downloading it again.
return nil, fmt.Errorf("failed to verify file: %w", err)
}
return file, nil
}
@ -52,10 +43,6 @@ func (reg *ResourceRegistry) GetFile(identifier string) (*File, error) {
return nil, fmt.Errorf("could not prepare tmp directory for download: %w", err)
}
// Start registry operation.
reg.state.StartOperation(StateFetching)
defer reg.state.EndOperation()
// download file
log.Tracef("%s: starting download of %s", reg.Name, file.versionedPath)
client := &http.Client{}
@ -65,27 +52,9 @@ func (reg *ResourceRegistry) GetFile(identifier string) (*File, error) {
log.Tracef("%s: failed to download %s: %s, retrying (%d)", reg.Name, file.versionedPath, err, tries+1)
} else {
file.markActiveWithLocking()
// TODO: We just download the file - should we verify it again?
return file, nil
}
}
log.Warningf("%s: failed to download %s: %s", reg.Name, file.versionedPath, err)
return nil, err
}
// GetVersion returns the selected version of the given identifier.
// The returned resource version may not be modified.
func (reg *ResourceRegistry) GetVersion(identifier string) (*ResourceVersion, error) {
reg.RLock()
res, ok := reg.resources[identifier]
reg.RUnlock()
if !ok {
return nil, ErrNotFound
}
res.Lock()
defer res.Unlock()
return res.SelectedVersion, nil
}


@ -1,109 +1,13 @@
package updater
import (
"encoding/json"
"errors"
"fmt"
"time"
)
const (
baseIndexExtension = ".json"
v2IndexExtension = ".v2.json"
)
// Index describes an index file pulled by the updater.
type Index struct {
// Path is the path to the index file
// on the update server.
Path string
// Channel holds the release channel name of the index.
// It must match the filename without extension.
Channel string
// PreRelease signifies that all versions of this index should be marked as
// pre-releases, no matter if the versions actually have a pre-release tag or
// not.
PreRelease bool
// AutoDownload specifies whether new versions should be automatically downloaded.
AutoDownload bool
// LastRelease holds the time of the last seen release of this index.
LastRelease time.Time
}
// IndexFile represents an index file.
type IndexFile struct {
Channel string
Published time.Time
Releases map[string]string
}
var (
// ErrIndexChecksumMismatch is returned when an index does not match its
// signed checksum.
ErrIndexChecksumMismatch = errors.New("index checksum does not match signature")
// ErrIndexFromFuture is returned when an index is parsed with a
// Published timestamp that lies in the future.
ErrIndexFromFuture = errors.New("index is from the future")
// ErrIndexIsOlder is returned when an index is parsed with an older
// Published timestamp than the current Published timestamp.
ErrIndexIsOlder = errors.New("index is older than the current one")
// ErrIndexChannelMismatch is returned when an index is parsed with a
// different channel that the expected one.
ErrIndexChannelMismatch = errors.New("index does not match the expected channel")
)
// ParseIndexFile parses an index file and checks if it is valid.
func ParseIndexFile(indexData []byte, channel string, lastIndexRelease time.Time) (*IndexFile, error) {
// Load into struct.
indexFile := &IndexFile{}
err := json.Unmarshal(indexData, indexFile)
if err != nil {
return nil, fmt.Errorf("failed to parse signed index data: %w", err)
}
// Fallback to old format if there are no releases and no channel is defined.
// TODO: Remove in v1
if len(indexFile.Releases) == 0 && indexFile.Channel == "" {
return loadOldIndexFormat(indexData, channel)
}
// Check the index metadata.
switch {
case !indexFile.Published.IsZero() && time.Now().Before(indexFile.Published):
return indexFile, ErrIndexFromFuture
case !indexFile.Published.IsZero() &&
!lastIndexRelease.IsZero() &&
lastIndexRelease.After(indexFile.Published):
return indexFile, ErrIndexIsOlder
case channel != "" &&
indexFile.Channel != "" &&
channel != indexFile.Channel:
return indexFile, ErrIndexChannelMismatch
}
return indexFile, nil
}
func loadOldIndexFormat(indexData []byte, channel string) (*IndexFile, error) {
releases := make(map[string]string)
err := json.Unmarshal(indexData, &releases)
if err != nil {
return nil, err
}
return &IndexFile{
Channel: channel,
// Do NOT define `Published`, as this would break the "is newer" check.
Releases: releases,
}, nil
}


@ -1,57 +0,0 @@
package updater
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
var (
oldFormat = `{
"all/ui/modules/assets.zip": "0.3.0",
"all/ui/modules/portmaster.zip": "0.2.4",
"linux_amd64/core/portmaster-core": "0.8.13"
}`
newFormat = `{
"Channel": "stable",
"Published": "2022-01-02T00:00:00Z",
"Releases": {
"all/ui/modules/assets.zip": "0.3.0",
"all/ui/modules/portmaster.zip": "0.2.4",
"linux_amd64/core/portmaster-core": "0.8.13"
}
}`
formatTestChannel = "stable"
formatTestReleases = map[string]string{
"all/ui/modules/assets.zip": "0.3.0",
"all/ui/modules/portmaster.zip": "0.2.4",
"linux_amd64/core/portmaster-core": "0.8.13",
}
)
func TestIndexParsing(t *testing.T) {
t.Parallel()
lastRelease, err := time.Parse(time.RFC3339, "2022-01-01T00:00:00Z")
if err != nil {
t.Fatal(err)
}
oldIndexFile, err := ParseIndexFile([]byte(oldFormat), formatTestChannel, lastRelease)
if err != nil {
t.Fatal(err)
}
newIndexFile, err := ParseIndexFile([]byte(newFormat), formatTestChannel, lastRelease)
if err != nil {
t.Fatal(err)
}
assert.Equal(t, formatTestChannel, oldIndexFile.Channel, "channel should be the same")
assert.Equal(t, formatTestChannel, newIndexFile.Channel, "channel should be the same")
assert.Equal(t, formatTestReleases, oldIndexFile.Releases, "releases should be the same")
assert.Equal(t, formatTestReleases, newIndexFile.Releases, "releases should be the same")
}


@ -1,12 +1,8 @@
package updater
import (
"errors"
"fmt"
"os"
"path/filepath"
"runtime"
"strings"
"sync"
"github.com/safing/portbase/log"
@ -24,8 +20,7 @@ type ResourceRegistry struct {
Name string
storageDir *utils.DirStructure
tmpDir *utils.DirStructure
indexes []*Index
state *RegistryState
indexes []Index
resources map[string]*Resource
UpdateURLs []string
@ -33,27 +28,12 @@ type ResourceRegistry struct {
MandatoryUpdates []string
AutoUnpack []string
// Verification holds a map of VerificationOptions assigned to their
// applicable identifier path prefix.
// Use an empty string to denote the default.
// Use empty options to disable verification for a path prefix.
Verification map[string]*VerificationOptions
// UsePreReleases signifies that pre-releases should be used when selecting a
// version. Even if false, a pre-release version will still be used if it is
// defined as the current version by an index.
UsePreReleases bool
// DevMode specifies if a local 0.0.0 version should be always chosen, when available.
DevMode bool
// Online specifies if resources may be downloaded if not available locally.
Online bool
// StateNotifyFunc may be set to receive any changes to the registry state.
// The specified function may lock the state, but may not block or take a
// lot of time.
StateNotifyFunc func(*RegistryState)
DevMode bool
Online bool
}
// AddIndex adds a new index to the resource registry.
@ -63,24 +43,7 @@ func (reg *ResourceRegistry) AddIndex(idx Index) {
reg.Lock()
defer reg.Unlock()
// Get channel name from path.
idx.Channel = strings.TrimSuffix(
filepath.Base(idx.Path), filepath.Ext(idx.Path),
)
reg.indexes = append(reg.indexes, &idx)
}
// PreInitUpdateState sets the initial update state of the registry before initialization.
func (reg *ResourceRegistry) PreInitUpdateState(s UpdateState) error {
if reg.state != nil {
return errors.New("registry already initialized")
}
reg.state = &RegistryState{
Updates: s,
}
return nil
reg.indexes = append(reg.indexes, idx)
}
// Initialize initializes a raw registry struct and makes it ready for usage.
@ -100,11 +63,6 @@ func (reg *ResourceRegistry) Initialize(storageDir *utils.DirStructure) error {
reg.storageDir = storageDir
reg.tmpDir = storageDir.ChildDir("tmp", 0o0700)
reg.resources = make(map[string]*Resource)
if reg.state == nil {
reg.state = &RegistryState{}
}
reg.state.ID = StateReady
reg.state.reg = reg
// remove tmp dir to delete old entries
err = reg.Cleanup()
@ -118,32 +76,6 @@ func (reg *ResourceRegistry) Initialize(storageDir *utils.DirStructure) error {
log.Warningf("%s: failed to create tmp dir: %s", reg.Name, err)
}
// Check verification options.
if reg.Verification != nil {
for prefix, opts := range reg.Verification {
// Check if verification is disable for this prefix.
if opts == nil {
continue
}
// If enabled, a trust store is required.
if opts.TrustStore == nil {
return fmt.Errorf("verification enabled for prefix %q, but no trust store configured", prefix)
}
// DownloadPolicy must be equal or stricter than DiskLoadPolicy.
if opts.DiskLoadPolicy < opts.DownloadPolicy {
return errors.New("verification download policy must be equal or stricter than the disk load policy")
}
// Warn if all policies are disabled.
if opts.DownloadPolicy == SignaturePolicyDisable &&
opts.DiskLoadPolicy == SignaturePolicyDisable {
log.Warningf("%s: verification enabled for prefix %q, but all policies set to disable", reg.Name, prefix)
}
}
}
return nil
}
@@ -174,34 +106,32 @@ func (reg *ResourceRegistry) SetUsePreReleases(yes bool) {
}
// AddResource adds a resource to the registry. Does _not_ select new version.
func (reg *ResourceRegistry) AddResource(identifier, version string, index *Index, available, currentRelease, preRelease bool) error {
func (reg *ResourceRegistry) AddResource(identifier, version string, available, currentRelease, preRelease bool) error {
reg.Lock()
defer reg.Unlock()
err := reg.addResource(identifier, version, index, available, currentRelease, preRelease)
err := reg.addResource(identifier, version, available, currentRelease, preRelease)
return err
}
func (reg *ResourceRegistry) addResource(identifier, version string, index *Index, available, currentRelease, preRelease bool) error {
func (reg *ResourceRegistry) addResource(identifier, version string, available, currentRelease, preRelease bool) error {
res, ok := reg.resources[identifier]
if !ok {
res = reg.newResource(identifier)
reg.resources[identifier] = res
}
res.Index = index
return res.AddVersion(version, available, currentRelease, preRelease)
}
// AddResources adds resources to the registry. Errors are logged, the last one is returned. Despite errors, non-failing resources are still added. Does _not_ select new versions.
func (reg *ResourceRegistry) AddResources(versions map[string]string, index *Index, available, currentRelease, preRelease bool) error {
func (reg *ResourceRegistry) AddResources(versions map[string]string, available, currentRelease, preRelease bool) error {
reg.Lock()
defer reg.Unlock()
// add versions and their flags to registry
var lastError error
for identifier, version := range versions {
lastError = reg.addResource(identifier, version, index, available, currentRelease, preRelease)
lastError = reg.addResource(identifier, version, available, currentRelease, preRelease)
if lastError != nil {
log.Warningf("%s: failed to add resource %s: %s", reg.Name, identifier, lastError)
}
@@ -260,7 +190,7 @@ func (reg *ResourceRegistry) ResetIndexes() {
reg.Lock()
defer reg.Unlock()
reg.indexes = make([]*Index, 0, len(reg.indexes))
reg.indexes = make([]Index, 0, 5)
}
// Cleanup removes temporary files.


@@ -1,6 +1,7 @@
package updater
import (
"io/ioutil"
"os"
"testing"
@@ -11,7 +12,7 @@ var registry *ResourceRegistry
func TestMain(m *testing.M) {
// setup
tmpDir, err := os.MkdirTemp("", "ci-portmaster-")
tmpDir, err := ioutil.TempDir("", "ci-portmaster-")
if err != nil {
panic(err)
}


@@ -2,7 +2,6 @@ package updater
import (
"errors"
"io/fs"
"os"
"path/filepath"
"sort"
@@ -11,9 +10,7 @@ import (
semver "github.com/hashicorp/go-version"
"github.com/safing/jess/filesig"
"github.com/safing/portbase/log"
"github.com/safing/portbase/utils"
)
var devVersion *semver.Version
@@ -52,13 +49,6 @@ type Resource struct {
// to download the latest version from the updates servers
// specified in the resource registry.
SelectedVersion *ResourceVersion
// VerificationOptions holds the verification options for this resource.
VerificationOptions *VerificationOptions
// Index holds a reference to the index this resource was last defined in.
// Will be nil if resource was only found on disk.
Index *Index
}
// ResourceVersion represents a single version of a resource.
@@ -73,9 +63,6 @@ type ResourceVersion struct {
// Available indicates if this version is available locally.
Available bool
// SigAvailable indicates if the signature of this version is available locally.
SigAvailable bool
// CurrentRelease indicates that this is the current release that should be
// selected, if possible.
CurrentRelease bool
@@ -93,7 +80,7 @@ func (rv *ResourceVersion) String() string {
return rv.VersionNumber
}
// SemVer returns the semantic version of the resource.
// SemVer returns the semantiv version of the resource.
func (rv *ResourceVersion) SemVer() *semver.Version {
return rv.semVer
}
@@ -112,26 +99,7 @@ func (rv *ResourceVersion) EqualsVersion(version string) bool {
// A version is selectable if it's not blacklisted and either already locally
// available or ready to be downloaded.
func (rv *ResourceVersion) isSelectable() bool {
switch {
case rv.Blacklisted:
// Should not be used.
return false
case rv.Available:
// Is available locally, use!
return true
case !rv.resource.registry.Online:
// Cannot download, because registry is set to offline.
return false
case rv.resource.Index == nil:
// Cannot download, because resource is not part of an index.
return false
case !rv.resource.Index.AutoDownload:
// Cannot download, because index may not automatically download.
return false
default:
// Is not available locally, but we are allowed to download it on request!
return true
}
return !rv.Blacklisted && (rv.Available || rv.resource.registry.Online)
}
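The develop-side `isSelectable` switch above can be exercised in isolation. The sketch below is an assumption-laden stand-in: `version` and `selectable` are hypothetical names, with plain booleans standing in for the registry, resource, and index fields the real method reads:

```go
package main

import "fmt"

// version flattens the fields isSelectable consults: a version is usable if
// it isn't blacklisted and is either present on disk or downloadable
// (registry online, resource in an index, and index allows auto-download).
type version struct {
	blacklisted, available      bool
	registryOnline              bool
	hasIndex, indexAutoDownload bool
}

func selectable(v version) bool {
	switch {
	case v.blacklisted:
		return false // Should not be used.
	case v.available:
		return true // Is available locally, use!
	case !v.registryOnline:
		return false // Cannot download: registry is offline.
	case !v.hasIndex:
		return false // Cannot download: not part of an index.
	case !v.indexAutoDownload:
		return false // Cannot download: index forbids auto-download.
	default:
		return true // Not local, but downloadable on request.
	}
}

func main() {
	fmt.Println(selectable(version{available: true}))                      // true
	fmt.Println(selectable(version{registryOnline: true, hasIndex: true})) // false
	fmt.Println(selectable(version{registryOnline: true, hasIndex: true, indexAutoDownload: true})) // true
}
```

This also makes the contrast with the v0.15.2 one-liner visible: the old expression never consulted the index, so offline-only indexes could not opt out of downloads.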
// isBetaVersionNumber checks if rv is marked as a beta version by checking
@@ -164,7 +132,9 @@ func (res *Resource) Export() *Resource {
SelectedVersion: res.SelectedVersion,
}
// Copy Versions slice.
copy(export.Versions, res.Versions)
for i := 0; i < len(res.Versions); i++ {
export.Versions[i] = res.Versions[i]
}
return export
}
@@ -214,10 +184,9 @@ func (res *Resource) AnyVersionAvailable() bool {
func (reg *ResourceRegistry) newResource(identifier string) *Resource {
return &Resource{
registry: reg,
Identifier: identifier,
Versions: make([]*ResourceVersion, 0, 1),
VerificationOptions: reg.GetVerificationOptions(identifier),
registry: reg,
Identifier: identifier,
Versions: make([]*ResourceVersion, 0, 1),
}
}
@@ -261,12 +230,6 @@ func (res *Resource) AddVersion(version string, available, currentRelease, preRe
// set flags
if available {
rv.Available = true
// If available and signatures are enabled for this resource, check if the
// signature is available.
if res.VerificationOptions != nil && utils.PathExists(rv.storageSigPath()) {
rv.SigAvailable = true
}
}
if currentRelease {
rv.CurrentRelease = true
@@ -309,13 +272,8 @@ func (res *Resource) selectVersion() {
sort.Sort(res)
// export after we finish
var fallback bool
defer func() {
if fallback {
log.Tracef("updater: selected version %s (as fallback) for resource %s", res.SelectedVersion, res.Identifier)
} else {
log.Debugf("updater: selected version %s for resource %s", res.SelectedVersion, res.Identifier)
}
log.Tracef("updater: selected version %s for resource %s", res.SelectedVersion, res.Identifier)
if res.inUse() &&
res.SelectedVersion != res.ActiveVersion && // new selected version does not match previously selected version
@@ -380,7 +338,7 @@ func (res *Resource) selectVersion() {
// 5) Default to newest.
res.SelectedVersion = res.Versions[0]
fallback = true
log.Warningf("updater: falling back to version %s for %s because we failed to find a selectable one", res.SelectedVersion, res.Identifier)
}
// Blacklist blacklists the specified version and selects a new version.
@@ -481,32 +439,15 @@ boundarySearch:
// Purge everything beyond the purge boundary.
for _, rv := range res.Versions[purgeBoundary:] {
// Only remove if resource file is actually available.
if !rv.Available {
continue
}
// Remove resource file.
storagePath := rv.storagePath()
// Remove resource file.
err := os.Remove(storagePath)
if err != nil {
if !errors.Is(err, fs.ErrNotExist) {
log.Warningf("%s: failed to purge resource %s v%s: %s", res.registry.Name, rv.resource.Identifier, rv.VersionNumber, err)
}
log.Warningf("%s: failed to purge resource %s v%s: %s", res.registry.Name, rv.resource.Identifier, rv.VersionNumber, err)
} else {
log.Tracef("%s: purged resource %s v%s", res.registry.Name, rv.resource.Identifier, rv.VersionNumber)
}
// Remove resource signature file.
err = os.Remove(rv.storageSigPath())
if err != nil {
if !errors.Is(err, fs.ErrNotExist) {
log.Warningf("%s: failed to purge resource signature %s v%s: %s", res.registry.Name, rv.resource.Identifier, rv.VersionNumber, err)
}
} else {
log.Tracef("%s: purged resource signature %s v%s", res.registry.Name, rv.resource.Identifier, rv.VersionNumber)
}
// Remove unpacked version of resource.
ext := filepath.Ext(storagePath)
if ext == "" {
@@ -517,7 +458,7 @@ boundarySearch:
// Remove if it exists, or an error occurs on access.
_, err = os.Stat(unpackedPath)
if err == nil || !errors.Is(err, fs.ErrNotExist) {
if err == nil || !os.IsNotExist(err) {
err = os.Remove(unpackedPath)
if err != nil {
log.Warningf("%s: failed to purge unpacked resource %s v%s: %s", res.registry.Name, rv.resource.Identifier, rv.VersionNumber, err)
@@ -531,52 +472,10 @@ boundarySearch:
res.Versions = res.Versions[purgeBoundary:]
}
// SigningMetadata returns the metadata to be included in signatures.
func (rv *ResourceVersion) SigningMetadata() map[string]string {
return map[string]string{
"id": rv.resource.Identifier,
"version": rv.VersionNumber,
}
}
// GetFile returns the version as a *File.
// It locks the resource for doing so.
func (rv *ResourceVersion) GetFile() *File {
rv.resource.Lock()
defer rv.resource.Unlock()
// check for notifier
if rv.resource.notifier == nil {
// create new notifier
rv.resource.notifier = newNotifier()
}
// create file
return &File{
resource: rv.resource,
version: rv,
notifier: rv.resource.notifier,
versionedPath: rv.versionedPath(),
storagePath: rv.storagePath(),
}
}
// versionedPath returns the versioned identifier.
func (rv *ResourceVersion) versionedPath() string {
return GetVersionedPath(rv.resource.Identifier, rv.VersionNumber)
}
// versionedSigPath returns the versioned identifier of the file signature.
func (rv *ResourceVersion) versionedSigPath() string {
return GetVersionedPath(rv.resource.Identifier, rv.VersionNumber) + filesig.Extension
}
// storagePath returns the absolute storage path.
func (rv *ResourceVersion) storagePath() string {
return filepath.Join(rv.resource.registry.storageDir.Path, filepath.FromSlash(rv.versionedPath()))
}
// storageSigPath returns the absolute storage path of the file signature.
func (rv *ResourceVersion) storageSigPath() string {
return rv.storagePath() + filesig.Extension
}


@@ -45,8 +45,6 @@ func TestVersionSelection(t *testing.T) {
registry.UsePreReleases = true
registry.DevMode = true
registry.Online = true
res.Index = &Index{AutoDownload: true}
res.selectVersion()
if res.SelectedVersion.VersionNumber != "0.0.0" {
t.Errorf("selected version should be 0.0.0, not %s", res.SelectedVersion.VersionNumber)


@@ -1,49 +0,0 @@
package updater
import (
"strings"
"github.com/safing/jess"
)
// VerificationOptions holds options for verification of files.
type VerificationOptions struct {
TrustStore jess.TrustStore
DownloadPolicy SignaturePolicy
DiskLoadPolicy SignaturePolicy
}
// GetVerificationOptions returns the verification options for the given identifier.
func (reg *ResourceRegistry) GetVerificationOptions(identifier string) *VerificationOptions {
if reg.Verification == nil {
return nil
}
var (
longestPrefix = -1
bestMatch *VerificationOptions
)
for prefix, opts := range reg.Verification {
if len(prefix) > longestPrefix && strings.HasPrefix(identifier, prefix) {
longestPrefix = len(prefix)
bestMatch = opts
}
}
return bestMatch
}
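The longest-prefix lookup in `GetVerificationOptions` can be demonstrated standalone. In this sketch, `lookup` and `Opts` are hypothetical stand-ins for the method and `*VerificationOptions`; the empty-string key acting as a catch-all default is an assumption consistent with the prefix logic, not documented behavior:

```go
package main

import (
	"fmt"
	"strings"
)

// Opts stands in for *VerificationOptions.
type Opts struct{ Name string }

// lookup mirrors GetVerificationOptions: of all configured prefixes that
// match the identifier, the longest one wins.
func lookup(verification map[string]*Opts, identifier string) *Opts {
	longestPrefix := -1
	var bestMatch *Opts
	for prefix, opts := range verification {
		if len(prefix) > longestPrefix && strings.HasPrefix(identifier, prefix) {
			longestPrefix = len(prefix)
			bestMatch = opts
		}
	}
	return bestMatch
}

func main() {
	m := map[string]*Opts{
		"":       {Name: "default"}, // empty prefix matches everything
		"intel/": {Name: "intel"},
	}
	fmt.Println(lookup(m, "intel/geoip.mmdb").Name) // intel
	fmt.Println(lookup(m, "core/portmaster").Name)  // default
}
```

Because the longest match wins regardless of map iteration order, a `nil` value for a specific prefix can override a broader entry to disable verification for that subtree, as the `Initialize` check above anticipates.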
// SignaturePolicy defines behavior in case of errors.
type SignaturePolicy uint8
// Signature Policies.
const (
// SignaturePolicyRequire fails on any error.
SignaturePolicyRequire = iota
// SignaturePolicyWarn only warns on errors.
SignaturePolicyWarn
// SignaturePolicyDisable only downloads signatures, but does not verify them.
SignaturePolicyDisable
)
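Since the policies are iota-ordered from strictest (`Require` = 0) to most lenient (`Disable` = 2), the `Initialize` check that the download policy be equal to or stricter than the disk-load policy reduces to a numeric comparison. A minimal sketch, with `validPolicyPair` as a hypothetical helper name:

```go
package main

import "fmt"

type SignaturePolicy uint8

const (
	SignaturePolicyRequire SignaturePolicy = iota // strictest: fail on any error
	SignaturePolicyWarn                           // only warn on errors
	SignaturePolicyDisable                        // most lenient: no verification
)

// validPolicyPair mirrors the Initialize check: lower values are stricter,
// so the pair is valid only when diskLoad is not stricter than download.
func validPolicyPair(download, diskLoad SignaturePolicy) bool {
	return diskLoad >= download
}

func main() {
	fmt.Println(validPolicyPair(SignaturePolicyRequire, SignaturePolicyWarn)) // true
	fmt.Println(validPolicyPair(SignaturePolicyDisable, SignaturePolicyRequire)) // false
}
```

The asymmetry makes sense: a file already on disk has passed a download check once, so loading may be more lenient than downloading, but never the reverse.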


@@ -1,180 +0,0 @@
package updater
import (
"sort"
"sync"
"time"
"github.com/safing/portbase/utils"
)
// Registry States.
const (
StateReady = "ready" // Default idle state.
StateChecking = "checking" // Downloading indexes.
StateDownloading = "downloading" // Downloading updates.
StateFetching = "fetching" // Fetching a single file.
)
// RegistryState describes the registry state.
type RegistryState struct {
sync.Mutex
reg *ResourceRegistry
// ID holds the ID of the state the registry is currently in.
ID string
// Details holds further information about the current state.
Details any
// Updates holds generic information about the current status of pending
// and recently downloaded updates.
Updates UpdateState
// operationLock locks the operation of any state changing operation.
// This is separate from the registry lock, which locks access to the
// registry struct.
operationLock sync.Mutex
}
// StateDownloadingDetails holds details of the downloading state.
type StateDownloadingDetails struct {
// Resources holds the resource IDs that are being downloaded.
Resources []string
// FinishedUpTo holds the index of Resources that is currently being
// downloaded. Previous resources have finished downloading.
FinishedUpTo int
}
// UpdateState holds generic information about the current status of pending
// and recently downloaded updates.
type UpdateState struct {
// LastCheckAt holds the time of the last update check.
LastCheckAt *time.Time
// LastCheckError holds the error of the last check.
LastCheckError error
// PendingDownload holds the resources that are pending download.
PendingDownload []string
// LastDownloadAt holds the time when resources were downloaded the last time.
LastDownloadAt *time.Time
// LastDownloadError holds the error of the last download.
LastDownloadError error
// LastDownload holds the resources that we downloaded the last time updates
// were downloaded.
LastDownload []string
// LastSuccessAt holds the time of the last successful update (check).
LastSuccessAt *time.Time
}
// GetState returns the current registry state.
// The returned data must not be modified.
func (reg *ResourceRegistry) GetState() RegistryState {
reg.state.Lock()
defer reg.state.Unlock()
return RegistryState{
ID: reg.state.ID,
Details: reg.state.Details,
Updates: reg.state.Updates,
}
}
// StartOperation starts an operation.
func (s *RegistryState) StartOperation(id string) bool {
defer s.notify()
s.operationLock.Lock()
s.Lock()
defer s.Unlock()
s.ID = id
return true
}
// UpdateOperationDetails updates the details of an operation.
// The supplied struct should be a copy and must not be changed after calling
// this function.
func (s *RegistryState) UpdateOperationDetails(details any) {
defer s.notify()
s.Lock()
defer s.Unlock()
s.Details = details
}
// EndOperation ends an operation.
func (s *RegistryState) EndOperation() {
defer s.notify()
defer s.operationLock.Unlock()
s.Lock()
defer s.Unlock()
s.ID = StateReady
s.Details = nil
}
// ReportUpdateCheck reports an update check to the registry state.
func (s *RegistryState) ReportUpdateCheck(pendingDownload []string, failed error) {
defer s.notify()
sort.Strings(pendingDownload)
s.Lock()
defer s.Unlock()
now := time.Now()
s.Updates.LastCheckAt = &now
s.Updates.LastCheckError = failed
s.Updates.PendingDownload = pendingDownload
if failed == nil {
s.Updates.LastSuccessAt = &now
}
}
// ReportDownloads reports downloaded updates to the registry state.
func (s *RegistryState) ReportDownloads(downloaded []string, failed error) {
defer s.notify()
sort.Strings(downloaded)
s.Lock()
defer s.Unlock()
now := time.Now()
s.Updates.LastDownloadAt = &now
s.Updates.LastDownloadError = failed
s.Updates.LastDownload = downloaded
// Remove downloaded resources from the pending list.
if len(s.Updates.PendingDownload) > 0 {
newPendingDownload := make([]string, 0, len(s.Updates.PendingDownload))
for _, pending := range s.Updates.PendingDownload {
if !utils.StringInSlice(downloaded, pending) {
newPendingDownload = append(newPendingDownload, pending)
}
}
s.Updates.PendingDownload = newPendingDownload
}
if failed == nil {
s.Updates.LastSuccessAt = &now
}
}
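The pending-list pruning inside `ReportDownloads` can be sketched on its own. This is a hypothetical `prunePending` helper: it uses a set lookup instead of the `utils.StringInSlice` linear scan the real code calls, but the effect is the same, i.e. every freshly downloaded resource drops out of the pending list:

```go
package main

import "fmt"

// prunePending removes every downloaded resource ID from the pending list,
// preserving the order of the remaining entries.
func prunePending(pending, downloaded []string) []string {
	done := make(map[string]bool, len(downloaded))
	for _, id := range downloaded {
		done[id] = true
	}
	remaining := make([]string, 0, len(pending))
	for _, id := range pending {
		if !done[id] {
			remaining = append(remaining, id)
		}
	}
	return remaining
}

func main() {
	fmt.Println(prunePending([]string{"a", "b", "c"}, []string{"b"})) // [a c]
}
```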
func (s *RegistryState) notify() {
switch {
case s.reg == nil:
return
case s.reg.StateNotifyFunc == nil:
return
}
s.reg.StateNotifyFunc(s)
}


@@ -2,16 +2,15 @@ package updater
import (
"context"
"encoding/json"
"errors"
"fmt"
"io/fs"
"io/ioutil"
"net/http"
"os"
"path/filepath"
"strings"
"github.com/safing/jess/filesig"
"github.com/safing/jess/lhash"
"github.com/safing/portbase/log"
"github.com/safing/portbase/utils"
)
@@ -52,11 +51,6 @@ func (reg *ResourceRegistry) ScanStorage(root string) error {
return nil
}
// Ignore file signatures.
if strings.HasSuffix(path, filesig.Extension) {
return nil
}
// get relative path to storage
relativePath, err := filepath.Rel(reg.storageDir.Path, path)
if err != nil {
@@ -79,7 +73,7 @@ func (reg *ResourceRegistry) ScanStorage(root string) error {
}
// save
err = reg.AddResource(identifier, version, nil, true, false, false)
err = reg.AddResource(identifier, version, true, false, false)
if err != nil {
lastError = fmt.Errorf("%s: could not add resource %s v%s: %w", reg.Name, identifier, version, err)
log.Warning(lastError.Error())
@@ -103,7 +97,7 @@ func (reg *ResourceRegistry) LoadIndexes(ctx context.Context) error {
} else if reg.Online {
// try to download the index file if a local disk version
// does not exist or we don't have permission to read it.
if errors.Is(err, fs.ErrNotExist) || errors.Is(err, fs.ErrPermission) {
if os.IsNotExist(err) || os.IsPermission(err) {
err = reg.downloadIndex(ctx, client, idx)
}
}
@@ -116,118 +110,39 @@ func (reg *ResourceRegistry) LoadIndexes(ctx context.Context) error {
return firstErr
}
// getIndexes returns a copy of the index list.
// The indexes themselves are references.
func (reg *ResourceRegistry) getIndexes() []*Index {
func (reg *ResourceRegistry) getIndexes() []Index {
reg.RLock()
defer reg.RUnlock()
indexes := make([]*Index, len(reg.indexes))
indexes := make([]Index, len(reg.indexes))
copy(indexes, reg.indexes)
return indexes
}
func (reg *ResourceRegistry) loadIndexFile(idx *Index) error {
indexPath := filepath.Join(reg.storageDir.Path, filepath.FromSlash(idx.Path))
indexData, err := os.ReadFile(indexPath)
func (reg *ResourceRegistry) loadIndexFile(idx Index) error {
path := filepath.FromSlash(idx.Path)
data, err := ioutil.ReadFile(filepath.Join(reg.storageDir.Path, path))
if err != nil {
return fmt.Errorf("failed to read index file %s: %w", idx.Path, err)
return err
}
// Verify signature, if enabled.
if verifOpts := reg.GetVerificationOptions(idx.Path); verifOpts != nil {
// Load and check signature.
verifiedHash, _, err := reg.loadAndVerifySigFile(verifOpts, indexPath+filesig.Extension)
if err != nil {
switch verifOpts.DiskLoadPolicy {
case SignaturePolicyRequire:
return fmt.Errorf("failed to verify signature of index %s: %w", idx.Path, err)
case SignaturePolicyWarn:
log.Warningf("%s: failed to verify signature of index %s: %s", reg.Name, idx.Path, err)
case SignaturePolicyDisable:
log.Debugf("%s: failed to verify signature of index %s: %s", reg.Name, idx.Path, err)
}
}
// Check if signature checksum matches the index data.
if err == nil && !verifiedHash.Matches(indexData) {
switch verifOpts.DiskLoadPolicy {
case SignaturePolicyRequire:
return fmt.Errorf("index file %s does not match signature", idx.Path)
case SignaturePolicyWarn:
log.Warningf("%s: index file %s does not match signature", reg.Name, idx.Path)
case SignaturePolicyDisable:
log.Debugf("%s: index file %s does not match signature", reg.Name, idx.Path)
}
}
}
// Parse the index file.
indexFile, err := ParseIndexFile(indexData, idx.Channel, idx.LastRelease)
releases := make(map[string]string)
err = json.Unmarshal(data, &releases)
if err != nil {
return fmt.Errorf("failed to parse index file %s: %w", idx.Path, err)
return err
}
// Update last seen release.
idx.LastRelease = indexFile.Published
// Warn if there aren't any releases in the index.
if len(indexFile.Releases) == 0 {
log.Debugf("%s: index %s has no releases", reg.Name, idx.Path)
if len(releases) == 0 {
log.Debugf("%s: index %s is empty", reg.Name, idx.Path)
return nil
}
// Add index releases to available resources.
err = reg.AddResources(indexFile.Releases, idx, false, true, idx.PreRelease)
err = reg.AddResources(releases, false, true, idx.PreRelease)
if err != nil {
log.Warningf("%s: failed to add resource: %s", reg.Name, err)
}
return nil
}
func (reg *ResourceRegistry) loadAndVerifySigFile(verifOpts *VerificationOptions, sigFilePath string) (*lhash.LabeledHash, []byte, error) {
// Load signature file.
sigFileData, err := os.ReadFile(sigFilePath)
if err != nil {
return nil, nil, fmt.Errorf("failed to read signature file: %w", err)
}
// Extract all signatures.
sigs, err := filesig.ParseSigFile(sigFileData)
switch {
case len(sigs) == 0 && err != nil:
return nil, nil, fmt.Errorf("failed to parse signature file: %w", err)
case len(sigs) == 0:
return nil, nil, errors.New("no signatures found in signature file")
case err != nil:
return nil, nil, fmt.Errorf("failed to parse signature file: %w", err)
}
// Verify all signatures.
var verifiedHash *lhash.LabeledHash
for _, sig := range sigs {
fd, err := filesig.VerifyFileData(
sig,
nil,
verifOpts.TrustStore,
)
if err != nil {
return nil, sigFileData, err
}
// Save or check verified hash.
if verifiedHash == nil {
verifiedHash = fd.FileHash()
} else if !fd.FileHash().Equal(verifiedHash) {
// Return an error if two valid hashes mismatch.
// For simplicity, all hash algorithms must be the same for now.
return nil, sigFileData, errors.New("file hashes from different signatures do not match")
}
}
return verifiedHash, sigFileData, nil
}
// CreateSymlinks creates a directory structure with unversioned symlinks to the given updates list.
func (reg *ResourceRegistry) CreateSymlinks(symlinkRoot *utils.DirStructure) error {
err := os.RemoveAll(symlinkRoot.Path)
