feat(scanner): implement selective folder scanning and file system watcher improvements (#4674)

* feat: Add selective folder scanning capability

Implement targeted scanning of specific library/folder pairs without
full recursion. This enables efficient rescanning of individual folders
when changes are detected, significantly reducing scan time for large
libraries.

Key changes:
- Add ScanTarget struct and ScanFolders API to Scanner interface
- Implement CLI flag --targets for specifying libraryID:folderPath pairs
- Add FolderRepository.GetByPaths() for batch folder info retrieval
- Create loadSpecificFolders() for non-recursive directory loading
- Scope GC operations to affected libraries only (with a TODO for the full implementation)
- Add comprehensive tests for selective scanning behavior

The selective scan:
- Only processes specified folders (no subdirectory recursion)
- Maintains library isolation
- Runs full maintenance pipeline scoped to affected libraries
- Supports both full and quick scan modes

Examples:
  navidrome scan --targets "1:Music/Rock,1:Music/Jazz"
  navidrome scan --full --targets "2:Classical"
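
The `libraryID:folderPath` format shown above can be parsed with a small routine like the one below. This is a hypothetical sketch, not the actual `model.ParseTargets` implementation; the `ScanTarget` field names and validation details are assumptions based on the commit message.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ScanTarget mirrors the shape suggested by the commit: a library ID plus a
// folder path inside that library. (Hypothetical reconstruction; the real
// model.ScanTarget may differ.)
type ScanTarget struct {
	LibraryID int
	Path      string
}

// String renders the target back into the CLI "libraryID:folderPath" form.
func (t ScanTarget) String() string {
	return fmt.Sprintf("%d:%s", t.LibraryID, t.Path)
}

// parseTargets parses the comma-separated CLI format "libID:path,libID:path".
// Paths may contain spaces; commas and colons inside paths are not supported
// in this simple sketch.
func parseTargets(s string) ([]ScanTarget, error) {
	var targets []ScanTarget
	for _, part := range strings.Split(s, ",") {
		id, folder, ok := strings.Cut(strings.TrimSpace(part), ":")
		if !ok {
			return nil, fmt.Errorf("invalid target %q: expected libraryID:folderPath", part)
		}
		libID, err := strconv.Atoi(id)
		if err != nil || libID < 0 {
			// The commit explicitly validates against negative library IDs.
			return nil, fmt.Errorf("invalid library ID %q", id)
		}
		targets = append(targets, ScanTarget{LibraryID: libID, Path: folder})
	}
	return targets, nil
}

func main() {
	targets, err := parseTargets("1:Music/Rock,1:Music/Jazz")
	if err != nil {
		panic(err)
	}
	for _, t := range targets {
		fmt.Println(t)
	}
}
```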

* feat(folder): replace GetByPaths with GetFolderUpdateInfo for improved folder update retrieval

Signed-off-by: Deluan <deluan@navidrome.org>

* test: update parseTargets test to handle folder names with spaces

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(folder): remove unused LibraryPath struct and update GC logging message

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(folder): enhance external scanner to support target-specific scanning

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): simplify scanner methods

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(watcher): implement folder scanning notifications with deduplication

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(watcher): add resolveFolderPath function for testability

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(watcher): implement path ignoring based on .ndignore patterns

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): implement IgnoreChecker for managing .ndignore patterns

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(ignore_checker): rename scanner to lineScanner for clarity

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): enhance ScanTarget struct with String method for better target representation

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): validate library ID to prevent negative values

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): simplify GC method by removing library ID parameter

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scanner): update folder scanning to include all descendants of specified folders

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(subsonic): allow selective scan in the /startScan endpoint

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): update CallScan to handle specific library/folder pairs

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): streamline scanning logic by removing scanAll method

Signed-off-by: Deluan <deluan@navidrome.org>

* test: enhance mockScanner for thread safety and improve test reliability

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): move scanner.ScanTarget to model.ScanTarget

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor: move scanner types to model, implement MockScanner

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): update scanner interface and implementations to use model.Scanner

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(folder_repository): normalize target path handling by using filepath.Clean

Signed-off-by: Deluan <deluan@navidrome.org>

* test(folder_repository): add comprehensive tests for folder retrieval and child exclusion

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): simplify selective scan logic using slice.Filter

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): streamline phase folder and album creation by removing unnecessary library parameter

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): move initialization logic from phase_1 to the scanner itself

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(tests): rename selective scan test file to scanner_selective_test.go

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(configuration): add DevSelectiveWatcher configuration option

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(watcher): enhance .ndignore handling for folder deletions and file changes

Signed-off-by: Deluan <deluan@navidrome.org>

* docs(scanner): comments

Signed-off-by: Deluan <deluan@navidrome.org>

* refactor(scanner): enhance walkDirTree to support target folder scanning

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner, watcher): handle errors when pushing ignore patterns for folders

Signed-off-by: Deluan <deluan@navidrome.org>

* Update scanner/phase_1_folders.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* refactor(scanner): replace parseTargets function with direct call to scanner.ParseTargets

Signed-off-by: Deluan <deluan@navidrome.org>

* test(scanner): add tests for ScanBegin and ScanEnd functionality

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(library): update PRAGMA optimize to check table sizes without ANALYZE

Signed-off-by: Deluan <deluan@navidrome.org>

* test(scanner): refactor tests

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add selective scan options and update translations

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add quick and full scan options for individual libraries

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(ui): add Scan buttons to the LibraryList

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scan): update scanning parameters from 'path' to 'target' for selective scans

* refactor(scan): move ParseTargets function to model package

* test(scan): suppress unused return value from SetUserLibraries in tests

* feat(gc): enhance garbage collection to support selective library purging

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(scanner): prevent race condition when scanning deleted folders

When the watcher detects changes in a folder that gets deleted before
the scanner runs (due to the 10-second delay), the scanner was
prematurely removing these folders from the tracking map, preventing
them from being marked as missing.

The issue occurred because `newFolderEntry` was calling `popLastUpdate`
before verifying the folder actually exists on the filesystem.

Changes:
- Move fs.Stat check before newFolderEntry creation in loadDir to
  ensure deleted folders remain in lastUpdates for finalize() to handle
- Add early existence check in walkDirTree to skip non-existent target
  folders with a warning log
- Add unit test verifying non-existent folders aren't removed from
  lastUpdates prematurely
- Add integration test for deleted folder scenario with ScanFolders

Fixes the issue where deleting entire folders (e.g., /music/AC_DC)
wouldn't mark tracks as missing when using selective folder scanning.

* refactor(scan): streamline folder entry creation and update handling

Signed-off-by: Deluan <deluan@navidrome.org>

* feat(scan): add '@Recycle' (QNAP) to ignored directories list

Signed-off-by: Deluan <deluan@navidrome.org>

* fix(log): improve thread safety in logging level management

* test(scan): move unit tests for ParseTargets function

Signed-off-by: Deluan <deluan@navidrome.org>

* review

Signed-off-by: Deluan <deluan@navidrome.org>

---------

Signed-off-by: Deluan <deluan@navidrome.org>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: deluan <deluan.quintao@mechanical-orchard.com>
Deluan Quintão 2025-11-14 22:15:43 -05:00 committed by GitHub
parent bca76069c3
commit 28d5299ffc
52 changed files with 3221 additions and 374 deletions

View file

@@ -26,24 +26,8 @@ var (
ErrAlreadyScanning = errors.New("already scanning")
)
type Scanner interface {
// ScanAll starts a full scan of the music library. This is a blocking operation.
ScanAll(ctx context.Context, fullScan bool) (warnings []string, err error)
Status(context.Context) (*StatusInfo, error)
}
type StatusInfo struct {
Scanning bool
LastScan time.Time
Count uint32
FolderCount uint32
LastError string
ScanType string
ElapsedTime time.Duration
}
func New(rootCtx context.Context, ds model.DataStore, cw artwork.CacheWarmer, broker events.Broker,
pls core.Playlists, m metrics.Metrics) Scanner {
pls core.Playlists, m metrics.Metrics) model.Scanner {
c := &controller{
rootCtx: rootCtx,
ds: ds,
@@ -65,9 +49,10 @@ func (s *controller) getScanner() scanner {
return &scannerImpl{ds: s.ds, cw: s.cw, pls: s.pls}
}
// CallScan starts an in-process scan of the music library.
// CallScan starts an in-process scan of specific library/folder pairs.
// If targets is empty, it scans all libraries.
// This is meant to be called from the command line (see cmd/scan.go).
func CallScan(ctx context.Context, ds model.DataStore, pls core.Playlists, fullScan bool) (<-chan *ProgressInfo, error) {
func CallScan(ctx context.Context, ds model.DataStore, pls core.Playlists, fullScan bool, targets []model.ScanTarget) (<-chan *ProgressInfo, error) {
release, err := lockScan(ctx)
if err != nil {
return nil, err
@@ -79,7 +64,7 @@ func CallScan(ctx context.Context, ds model.DataStore, pls core.Playlists, fullS
go func() {
defer close(progress)
scanner := &scannerImpl{ds: ds, cw: artwork.NoopCacheWarmer(), pls: pls}
scanner.scanAll(ctx, fullScan, progress)
scanner.scanFolders(ctx, fullScan, targets, progress)
}()
return progress, nil
}
@@ -99,8 +84,11 @@ type ProgressInfo struct {
ForceUpdate bool
}
// scanner defines the interface for different scanner implementations.
// This allows for swapping between in-process and external scanners.
type scanner interface {
scanAll(ctx context.Context, fullScan bool, progress chan<- *ProgressInfo)
// scanFolders performs the actual scanning of folders. If targets is nil, it scans all libraries.
scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo)
}
type controller struct {
@@ -158,7 +146,7 @@ func (s *controller) getScanInfo(ctx context.Context) (scanType string, elapsed
return scanType, elapsed, lastErr
}
func (s *controller) Status(ctx context.Context) (*StatusInfo, error) {
func (s *controller) Status(ctx context.Context) (*model.ScannerStatus, error) {
lastScanTime, err := s.getLastScanTime(ctx)
if err != nil {
return nil, fmt.Errorf("getting last scan time: %w", err)
@@ -167,7 +155,7 @@ func (s *controller) Status(ctx context.Context) (*StatusInfo, error) {
scanType, elapsed, lastErr := s.getScanInfo(ctx)
if running.Load() {
status := &StatusInfo{
status := &model.ScannerStatus{
Scanning: true,
LastScan: lastScanTime,
Count: s.count.Load(),
@@ -183,7 +171,7 @@ func (s *controller) Status(ctx context.Context) (*StatusInfo, error) {
if err != nil {
return nil, fmt.Errorf("getting library stats: %w", err)
}
return &StatusInfo{
return &model.ScannerStatus{
Scanning: false,
LastScan: lastScanTime,
Count: uint32(count),
@@ -208,6 +196,10 @@ func (s *controller) getCounters(ctx context.Context) (int64, int64, error) {
}
func (s *controller) ScanAll(requestCtx context.Context, fullScan bool) ([]string, error) {
return s.ScanFolders(requestCtx, fullScan, nil)
}
func (s *controller) ScanFolders(requestCtx context.Context, fullScan bool, targets []model.ScanTarget) ([]string, error) {
release, err := lockScan(requestCtx)
if err != nil {
return nil, err
@@ -224,7 +216,7 @@ func (s *controller) ScanAll(requestCtx context.Context, fullScan bool) ([]strin
go func() {
defer close(progress)
scanner := s.getScanner()
scanner.scanAll(ctx, fullScan, progress)
scanner.scanFolders(ctx, fullScan, targets, progress)
}()
// Wait for the scan to finish, sending progress events to all connected clients

View file

@@ -9,6 +9,7 @@ import (
"github.com/navidrome/navidrome/core/artwork"
"github.com/navidrome/navidrome/core/metrics"
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/persistence"
"github.com/navidrome/navidrome/scanner"
"github.com/navidrome/navidrome/server/events"
@@ -20,7 +21,7 @@ import (
var _ = Describe("Controller", func() {
var ctx context.Context
var ds *tests.MockDataStore
var ctrl scanner.Scanner
var ctrl model.Scanner
Describe("Status", func() {
BeforeEach(func() {

View file

@@ -8,10 +8,12 @@ import (
"io"
"os"
"os/exec"
"strings"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/log"
. "github.com/navidrome/navidrome/utils/gg"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/slice"
)
// scannerExternal is a scanner that runs an external process to do the scanning. It is used to avoid
@@ -23,19 +25,41 @@ import (
// process will forward them to the caller.
type scannerExternal struct{}
func (s *scannerExternal) scanAll(ctx context.Context, fullScan bool, progress chan<- *ProgressInfo) {
func (s *scannerExternal) scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
s.scan(ctx, fullScan, targets, progress)
}
func (s *scannerExternal) scan(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
exe, err := os.Executable()
if err != nil {
progress <- &ProgressInfo{Error: fmt.Sprintf("failed to get executable path: %s", err)}
return
}
log.Debug(ctx, "Spawning external scanner process", "fullScan", fullScan, "path", exe)
cmd := exec.CommandContext(ctx, exe, "scan",
// Build command arguments
args := []string{
"scan",
"--nobanner", "--subprocess",
"--configfile", conf.Server.ConfigFile,
"--datafolder", conf.Server.DataFolder,
"--cachefolder", conf.Server.CacheFolder,
If(fullScan, "--full", ""))
}
// Add targets if provided
if len(targets) > 0 {
targetsStr := strings.Join(slice.Map(targets, func(t model.ScanTarget) string { return t.String() }), ",")
args = append(args, "--targets", targetsStr)
log.Debug(ctx, "Spawning external scanner process with targets", "fullScan", fullScan, "path", exe, "targets", targetsStr)
} else {
log.Debug(ctx, "Spawning external scanner process", "fullScan", fullScan, "path", exe)
}
// Add full scan flag if needed
if fullScan {
args = append(args, "--full")
}
cmd := exec.CommandContext(ctx, exe, args...)
in, out := io.Pipe()
defer in.Close()

View file

@@ -15,9 +15,7 @@ import (
"github.com/navidrome/navidrome/utils/chrono"
)
func newFolderEntry(job *scanJob, path string) *folderEntry {
id := model.FolderID(job.lib, path)
info := job.popLastUpdate(id)
func newFolderEntry(job *scanJob, id, path string, updTime time.Time, hash string) *folderEntry {
f := &folderEntry{
id: id,
job: job,
@@ -25,8 +23,8 @@ func newFolderEntry(job *scanJob, path string) *folderEntry {
audioFiles: make(map[string]fs.DirEntry),
imageFiles: make(map[string]fs.DirEntry),
albumIDMap: make(map[string]string),
updTime: info.UpdatedAt,
prevHash: info.Hash,
updTime: updTime,
prevHash: hash,
}
return f
}

View file

@@ -40,9 +40,8 @@ var _ = Describe("folder_entry", func() {
UpdatedAt: time.Now().Add(-30 * time.Minute),
Hash: "previous-hash",
}
job.lastUpdates[folderID] = updateInfo
entry := newFolderEntry(job, path)
entry := newFolderEntry(job, folderID, path, updateInfo.UpdatedAt, updateInfo.Hash)
Expect(entry.id).To(Equal(folderID))
Expect(entry.job).To(Equal(job))
@@ -53,15 +52,10 @@ var _ = Describe("folder_entry", func() {
Expect(entry.updTime).To(Equal(updateInfo.UpdatedAt))
Expect(entry.prevHash).To(Equal(updateInfo.Hash))
})
})
It("creates a new folder entry with zero time when no previous update exists", func() {
entry := newFolderEntry(job, path)
Expect(entry.updTime).To(BeZero())
Expect(entry.prevHash).To(BeEmpty())
})
It("removes the lastUpdate from the job after popping", func() {
Describe("createFolderEntry", func() {
It("removes the lastUpdate from the job after creation", func() {
folderID := model.FolderID(lib, path)
updateInfo := model.FolderUpdateInfo{
UpdatedAt: time.Now().Add(-30 * time.Minute),
@@ -69,8 +63,10 @@ var _ = Describe("folder_entry", func() {
}
job.lastUpdates[folderID] = updateInfo
newFolderEntry(job, path)
entry := job.createFolderEntry(path)
Expect(entry.updTime).To(Equal(updateInfo.UpdatedAt))
Expect(entry.prevHash).To(Equal(updateInfo.Hash))
Expect(job.lastUpdates).ToNot(HaveKey(folderID))
})
})
@@ -79,7 +75,8 @@ var _ = Describe("folder_entry", func() {
var entry *folderEntry
BeforeEach(func() {
entry = newFolderEntry(job, path)
folderID := model.FolderID(lib, path)
entry = newFolderEntry(job, folderID, path, time.Time{}, "")
})
Describe("hasNoFiles", func() {
@@ -458,7 +455,9 @@ var _ = Describe("folder_entry", func() {
Describe("integration scenarios", func() {
It("handles complete folder lifecycle", func() {
// Create new folder entry
entry := newFolderEntry(job, "music/rock/album")
folderPath := "music/rock/album"
folderID := model.FolderID(lib, folderPath)
entry := newFolderEntry(job, folderID, folderPath, time.Time{}, "")
// Initially new and has no files
Expect(entry.isNew()).To(BeTrue())

scanner/ignore_checker.go Normal file
View file

@@ -0,0 +1,163 @@
package scanner
import (
"bufio"
"context"
"io/fs"
"path"
"strings"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/log"
ignore "github.com/sabhiram/go-gitignore"
)
// IgnoreChecker manages .ndignore patterns using a stack-based approach.
// Use Push() to add patterns when entering a folder, Pop() when leaving,
// and ShouldIgnore() to check if a path should be ignored.
type IgnoreChecker struct {
fsys fs.FS
patternStack [][]string // Stack of patterns for each folder level
currentPatterns []string // Flattened current patterns
matcher *ignore.GitIgnore // Compiled matcher for current patterns
}
// newIgnoreChecker creates a new IgnoreChecker for the given filesystem.
func newIgnoreChecker(fsys fs.FS) *IgnoreChecker {
return &IgnoreChecker{
fsys: fsys,
patternStack: make([][]string, 0),
}
}
// Push loads .ndignore patterns from the specified folder and adds them to the pattern stack.
// Use this when entering a folder during directory tree traversal.
func (ic *IgnoreChecker) Push(ctx context.Context, folder string) error {
patterns := ic.loadPatternsFromFolder(ctx, folder)
ic.patternStack = append(ic.patternStack, patterns)
ic.rebuildCurrentPatterns()
return nil
}
// Pop removes the most recent patterns from the stack.
// Use this when leaving a folder during directory tree traversal.
func (ic *IgnoreChecker) Pop() {
if len(ic.patternStack) > 0 {
ic.patternStack = ic.patternStack[:len(ic.patternStack)-1]
ic.rebuildCurrentPatterns()
}
}
// PushAllParents pushes patterns from root down to the target path.
// This is a convenience method for when you need to check a specific path
// without recursively walking the tree. It handles the common pattern of
// pushing all parent directories from root to the target.
// This method is optimized to compile patterns only once at the end.
func (ic *IgnoreChecker) PushAllParents(ctx context.Context, targetPath string) error {
if targetPath == "." || targetPath == "" {
// Simple case: just push root
return ic.Push(ctx, ".")
}
// Load patterns for root
patterns := ic.loadPatternsFromFolder(ctx, ".")
ic.patternStack = append(ic.patternStack, patterns)
// Load patterns for each parent directory
currentPath := "."
parts := strings.Split(path.Clean(targetPath), "/")
for _, part := range parts {
if part == "." || part == "" {
continue
}
currentPath = path.Join(currentPath, part)
patterns = ic.loadPatternsFromFolder(ctx, currentPath)
ic.patternStack = append(ic.patternStack, patterns)
}
// Rebuild and compile patterns only once at the end
ic.rebuildCurrentPatterns()
return nil
}
// ShouldIgnore checks if the given path should be ignored based on the current patterns.
// Returns true if the path matches any ignore pattern, false otherwise.
func (ic *IgnoreChecker) ShouldIgnore(ctx context.Context, relPath string) bool {
// Handle root/empty path - never ignore
if relPath == "" || relPath == "." {
return false
}
// If no patterns loaded, nothing to ignore
if ic.matcher == nil {
return false
}
matches := ic.matcher.MatchesPath(relPath)
if matches {
log.Trace(ctx, "Scanner: Ignoring entry matching .ndignore", "path", relPath)
}
return matches
}
// loadPatternsFromFolder reads the .ndignore file in the specified folder and returns the patterns.
// If the file doesn't exist, returns an empty slice.
// If the file exists but is empty, returns a pattern to ignore everything ("**/*").
func (ic *IgnoreChecker) loadPatternsFromFolder(ctx context.Context, folder string) []string {
ignoreFilePath := path.Join(folder, consts.ScanIgnoreFile)
var patterns []string
// Check if .ndignore file exists
if _, err := fs.Stat(ic.fsys, ignoreFilePath); err != nil {
// No .ndignore file in this folder
return patterns
}
// Read and parse the .ndignore file
ignoreFile, err := ic.fsys.Open(ignoreFilePath)
if err != nil {
log.Warn(ctx, "Scanner: Error opening .ndignore file", "path", ignoreFilePath, err)
return patterns
}
defer ignoreFile.Close()
lineScanner := bufio.NewScanner(ignoreFile)
for lineScanner.Scan() {
line := strings.TrimSpace(lineScanner.Text())
if line == "" || strings.HasPrefix(line, "#") {
continue // Skip empty lines, whitespace-only lines, and comments
}
patterns = append(patterns, line)
}
if err := lineScanner.Err(); err != nil {
log.Warn(ctx, "Scanner: Error reading .ndignore file", "path", ignoreFilePath, err)
return patterns
}
// If the .ndignore file is empty, ignore everything
if len(patterns) == 0 {
log.Trace(ctx, "Scanner: .ndignore file is empty, ignoring everything", "path", folder)
patterns = []string{"**/*"}
}
return patterns
}
// rebuildCurrentPatterns flattens the pattern stack into currentPatterns and recompiles the matcher.
func (ic *IgnoreChecker) rebuildCurrentPatterns() {
ic.currentPatterns = make([]string, 0)
for _, patterns := range ic.patternStack {
ic.currentPatterns = append(ic.currentPatterns, patterns...)
}
ic.compilePatterns()
}
// compilePatterns compiles the current patterns into a GitIgnore matcher.
func (ic *IgnoreChecker) compilePatterns() {
if len(ic.currentPatterns) == 0 {
ic.matcher = nil
return
}
ic.matcher = ignore.CompileIgnoreLines(ic.currentPatterns...)
}
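
The stack discipline the IgnoreChecker documents above (Push on entering a folder, Pop on leaving, ShouldIgnore to query) can be illustrated with a stripped-down, self-contained sketch. Note this substitutes the stdlib's `path.Match` for the compiled go-gitignore matcher, so the pattern semantics are deliberately simplified.

```go
package main

import (
	"fmt"
	"path"
)

// ignoreStack is a simplified stand-in for IgnoreChecker: it keeps one
// pattern slice per directory level and matches with path.Match instead of
// full gitignore semantics.
type ignoreStack struct {
	stack [][]string
}

// Push adds a folder's patterns when entering it during traversal.
func (s *ignoreStack) Push(patterns []string) {
	s.stack = append(s.stack, patterns)
}

// Pop removes the most recent folder's patterns when leaving it.
func (s *ignoreStack) Pop() {
	if len(s.stack) > 0 {
		s.stack = s.stack[:len(s.stack)-1]
	}
}

// ShouldIgnore reports whether any pattern at any active level matches.
func (s *ignoreStack) ShouldIgnore(name string) bool {
	for _, level := range s.stack {
		for _, p := range level {
			if ok, _ := path.Match(p, name); ok {
				return true
			}
		}
	}
	return false
}

func main() {
	var s ignoreStack
	s.Push([]string{"*.txt"}) // patterns from the root .ndignore
	s.Push([]string{"*.mp3"}) // entering a subfolder with its own .ndignore
	fmt.Println(s.ShouldIgnore("notes.txt"), s.ShouldIgnore("song.mp3")) // true true
	s.Pop()                                 // leaving the subfolder
	fmt.Println(s.ShouldIgnore("song.mp3")) // false: subfolder patterns gone
}
```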

View file

@@ -0,0 +1,313 @@
package scanner
import (
"context"
"testing/fstest"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("IgnoreChecker", func() {
Describe("loadPatternsFromFolder", func() {
var ic *IgnoreChecker
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
})
Context("when .ndignore file does not exist", func() {
It("should return empty patterns", func() {
fsys := fstest.MapFS{}
ic = newIgnoreChecker(fsys)
patterns := ic.loadPatternsFromFolder(ctx, ".")
Expect(patterns).To(BeEmpty())
})
})
Context("when .ndignore file is empty", func() {
It("should return wildcard to ignore everything", func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("")},
}
ic = newIgnoreChecker(fsys)
patterns := ic.loadPatternsFromFolder(ctx, ".")
Expect(patterns).To(Equal([]string{"**/*"}))
})
})
DescribeTable("parsing .ndignore content",
func(content string, expectedPatterns []string) {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte(content)},
}
ic = newIgnoreChecker(fsys)
patterns := ic.loadPatternsFromFolder(ctx, ".")
Expect(patterns).To(Equal(expectedPatterns))
},
Entry("single pattern", "*.txt", []string{"*.txt"}),
Entry("multiple patterns", "*.txt\n*.log", []string{"*.txt", "*.log"}),
Entry("with comments", "# comment\n*.txt\n# another\n*.log", []string{"*.txt", "*.log"}),
Entry("with empty lines", "*.txt\n\n*.log\n\n", []string{"*.txt", "*.log"}),
Entry("mixed content", "# header\n\n*.txt\n# middle\n*.log\n\n", []string{"*.txt", "*.log"}),
Entry("only comments and empty lines", "# comment\n\n# another\n", []string{"**/*"}),
Entry("trailing newline", "*.txt\n*.log\n", []string{"*.txt", "*.log"}),
Entry("directory pattern", "temp/", []string{"temp/"}),
Entry("wildcard pattern", "**/*.mp3", []string{"**/*.mp3"}),
Entry("multiple wildcards", "**/*.mp3\n**/*.flac\n*.log", []string{"**/*.mp3", "**/*.flac", "*.log"}),
Entry("negation pattern", "!important.txt", []string{"!important.txt"}),
Entry("comment with hash not at start is pattern", "not#comment", []string{"not#comment"}),
Entry("whitespace-only lines skipped", "*.txt\n \n*.log\n\t\n", []string{"*.txt", "*.log"}),
Entry("patterns with whitespace trimmed", " *.txt \n\t*.log\t", []string{"*.txt", "*.log"}),
)
})
Describe("Push and Pop", func() {
var ic *IgnoreChecker
var fsys fstest.MapFS
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
fsys = fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("*.txt")},
"folder1/.ndignore": &fstest.MapFile{Data: []byte("*.mp3")},
"folder2/.ndignore": &fstest.MapFile{Data: []byte("*.flac")},
}
ic = newIgnoreChecker(fsys)
})
Context("Push", func() {
It("should add patterns to stack", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(1))
Expect(ic.currentPatterns).To(ContainElement("*.txt"))
})
It("should compile matcher after push", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.matcher).ToNot(BeNil())
})
It("should accumulate patterns from multiple levels", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(2))
Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.mp3"))
})
It("should handle push when no .ndignore exists", func() {
err := ic.Push(ctx, "nonexistent")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(1))
Expect(ic.currentPatterns).To(BeEmpty())
})
})
Context("Pop", func() {
It("should remove most recent patterns", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
ic.Pop()
Expect(len(ic.patternStack)).To(Equal(1))
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
})
It("should handle Pop on empty stack gracefully", func() {
Expect(func() { ic.Pop() }).ToNot(Panic())
Expect(ic.patternStack).To(BeEmpty())
})
It("should set matcher to nil when all patterns popped", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.matcher).ToNot(BeNil())
ic.Pop()
Expect(ic.matcher).To(BeNil())
})
It("should update matcher after pop", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
matcher1 := ic.matcher
ic.Pop()
matcher2 := ic.matcher
Expect(matcher1).ToNot(Equal(matcher2))
})
})
Context("multiple Push/Pop cycles", func() {
It("should maintain correct state through cycles", func() {
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
err = ic.Push(ctx, "folder1")
Expect(err).ToNot(HaveOccurred())
Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.mp3"))
ic.Pop()
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
err = ic.Push(ctx, "folder2")
Expect(err).ToNot(HaveOccurred())
Expect(ic.currentPatterns).To(ConsistOf("*.txt", "*.flac"))
ic.Pop()
Expect(ic.currentPatterns).To(Equal([]string{"*.txt"}))
ic.Pop()
Expect(ic.currentPatterns).To(BeEmpty())
})
})
})
Describe("PushAllParents", func() {
var ic *IgnoreChecker
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("root.txt")},
"folder1/.ndignore": &fstest.MapFile{Data: []byte("level1.txt")},
"folder1/folder2/.ndignore": &fstest.MapFile{Data: []byte("level2.txt")},
"folder1/folder2/folder3/.ndignore": &fstest.MapFile{Data: []byte("level3.txt")},
}
ic = newIgnoreChecker(fsys)
})
DescribeTable("loading parent patterns",
func(targetPath string, expectedStackDepth int, expectedPatterns []string) {
err := ic.PushAllParents(ctx, targetPath)
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(expectedStackDepth))
Expect(ic.currentPatterns).To(ConsistOf(expectedPatterns))
},
Entry("root path", ".", 1, []string{"root.txt"}),
Entry("empty path", "", 1, []string{"root.txt"}),
Entry("single level", "folder1", 2, []string{"root.txt", "level1.txt"}),
Entry("two levels", "folder1/folder2", 3, []string{"root.txt", "level1.txt", "level2.txt"}),
Entry("three levels", "folder1/folder2/folder3", 4, []string{"root.txt", "level1.txt", "level2.txt", "level3.txt"}),
)
It("should only compile patterns once at the end", func() {
// This is more of a behavioral test - we verify the matcher is not nil after PushAllParents
err := ic.PushAllParents(ctx, "folder1/folder2")
Expect(err).ToNot(HaveOccurred())
Expect(ic.matcher).ToNot(BeNil())
})
It("should handle paths with dot", func() {
err := ic.PushAllParents(ctx, "./folder1")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(2))
})
Context("when some parent folders have no .ndignore", func() {
BeforeEach(func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("root.txt")},
"folder1/folder2/.ndignore": &fstest.MapFile{Data: []byte("level2.txt")},
}
ic = newIgnoreChecker(fsys)
})
It("should still push all parent levels", func() {
err := ic.PushAllParents(ctx, "folder1/folder2")
Expect(err).ToNot(HaveOccurred())
Expect(len(ic.patternStack)).To(Equal(3)) // root, folder1 (empty), folder2
Expect(ic.currentPatterns).To(ConsistOf("root.txt", "level2.txt"))
})
})
})
Describe("ShouldIgnore", func() {
var ic *IgnoreChecker
var ctx context.Context
BeforeEach(func() {
ctx = context.Background()
})
Context("with no patterns loaded", func() {
It("should not ignore any path", func() {
fsys := fstest.MapFS{}
ic = newIgnoreChecker(fsys)
Expect(ic.ShouldIgnore(ctx, "anything.txt")).To(BeFalse())
Expect(ic.ShouldIgnore(ctx, "folder/file.mp3")).To(BeFalse())
})
})
Context("special paths", func() {
BeforeEach(func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("**/*")},
}
ic = newIgnoreChecker(fsys)
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
})
It("should never ignore root or empty paths", func() {
Expect(ic.ShouldIgnore(ctx, "")).To(BeFalse())
Expect(ic.ShouldIgnore(ctx, ".")).To(BeFalse())
})
It("should ignore all other paths with wildcard", func() {
Expect(ic.ShouldIgnore(ctx, "file.txt")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "folder/file.mp3")).To(BeTrue())
})
})
DescribeTable("pattern matching",
func(pattern string, path string, shouldMatch bool) {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte(pattern)},
}
ic = newIgnoreChecker(fsys)
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
Expect(ic.ShouldIgnore(ctx, path)).To(Equal(shouldMatch))
},
Entry("glob match", "*.txt", "file.txt", true),
Entry("glob no match", "*.txt", "file.mp3", false),
Entry("directory pattern match", "tmp/", "tmp/file.txt", true),
Entry("directory pattern no match", "tmp/", "temporary/file.txt", false),
Entry("nested glob match", "**/*.log", "deep/nested/file.log", true),
Entry("nested glob no match", "**/*.log", "deep/nested/file.txt", false),
Entry("specific file match", "ignore.me", "ignore.me", true),
Entry("specific file no match", "ignore.me", "keep.me", false),
Entry("wildcard all", "**/*", "any/path/file.txt", true),
Entry("nested specific match", "temp/*", "temp/cache.db", true),
Entry("nested specific no match", "temp/*", "temporary/cache.db", false),
)
Context("with multiple patterns", func() {
BeforeEach(func() {
fsys := fstest.MapFS{
".ndignore": &fstest.MapFile{Data: []byte("*.txt\n*.log\ntemp/")},
}
ic = newIgnoreChecker(fsys)
err := ic.Push(ctx, ".")
Expect(err).ToNot(HaveOccurred())
})
It("should match any of the patterns", func() {
Expect(ic.ShouldIgnore(ctx, "file.txt")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "debug.log")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "temp/cache")).To(BeTrue())
Expect(ic.ShouldIgnore(ctx, "music.mp3")).To(BeFalse())
})
})
})
})
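The push/pop layering these tests exercise can be sketched in isolation. This is a minimal stdlib-only model, assuming simplified glob semantics — the `patternStack` and `matches` names are illustrative, not Navidrome's; the real IgnoreChecker delegates matching to the go-gitignore library:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// patternStack is an illustrative stand-in for the tested IgnoreChecker:
// each directory level pushes its .ndignore patterns as a layer, and Pop
// discards that layer when the walk leaves the directory.
type patternStack struct {
	layers [][]string
}

func (s *patternStack) Push(patterns []string) { s.layers = append(s.layers, patterns) }

func (s *patternStack) Pop() { s.layers = s.layers[:len(s.layers)-1] }

func (s *patternStack) ShouldIgnore(p string) bool {
	if p == "" || p == "." {
		return false // root and empty paths are never ignored
	}
	for _, layer := range s.layers {
		for _, pat := range layer {
			if matches(pat, p) {
				return true
			}
		}
	}
	return false
}

// matches handles only a small subset of gitignore syntax: the "**/*"
// wildcard, "dir/" prefixes, and plain globs via path.Match.
func matches(pat, p string) bool {
	if pat == "**/*" {
		return true
	}
	if strings.HasSuffix(pat, "/") {
		return strings.HasPrefix(p, pat) || p == strings.TrimSuffix(pat, "/")
	}
	if ok, _ := path.Match(pat, path.Base(p)); ok {
		return true
	}
	ok, _ := path.Match(pat, p)
	return ok
}

func main() {
	var st patternStack
	st.Push([]string{"root.txt"})      // patterns from the root .ndignore
	st.Push([]string{"*.txt", "tmp/"}) // patterns from a child folder
	fmt.Println(st.ShouldIgnore("file.txt"))     // true
	fmt.Println(st.ShouldIgnore("tmp/cache.db")) // true
	fmt.Println(st.ShouldIgnore("music.mp3"))    // false
	st.Pop()                                     // leaving the child folder
	fmt.Println(st.ShouldIgnore("file.txt"))     // false
}
```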

View file

@@ -26,58 +26,46 @@ import (
"github.com/navidrome/navidrome/utils/slice"
)
func createPhaseFolders(ctx context.Context, state *scanState, ds model.DataStore, cw artwork.CacheWarmer, libs []model.Library) *phaseFolders {
func createPhaseFolders(ctx context.Context, state *scanState, ds model.DataStore, cw artwork.CacheWarmer) *phaseFolders {
var jobs []*scanJob
var updatedLibs []model.Library
for _, lib := range libs {
if lib.LastScanStartedAt.IsZero() {
err := ds.Library(ctx).ScanBegin(lib.ID, state.fullScan)
if err != nil {
log.Error(ctx, "Scanner: Error updating last scan started at", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
// Reload library to get updated state
l, err := ds.Library(ctx).Get(lib.ID)
if err != nil {
log.Error(ctx, "Scanner: Error reloading library", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
lib = *l
} else {
log.Debug(ctx, "Scanner: Resuming previous scan", "lib", lib.Name, "lastScanStartedAt", lib.LastScanStartedAt, "fullScan", lib.FullScanInProgress)
// Create scan jobs for all libraries
for _, lib := range state.libraries {
// Get target folders for this library if selective scan
var targetFolders []string
if state.isSelectiveScan() {
targetFolders = state.targets[lib.ID]
}
job, err := newScanJob(ctx, ds, cw, lib, state.fullScan)
job, err := newScanJob(ctx, ds, cw, lib, state.fullScan, targetFolders)
if err != nil {
log.Error(ctx, "Scanner: Error creating scan context", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
jobs = append(jobs, job)
updatedLibs = append(updatedLibs, lib)
}
// Update the state with the libraries that have been processed and have their scan timestamps set
state.libraries = updatedLibs
return &phaseFolders{jobs: jobs, ctx: ctx, ds: ds, state: state}
}
type scanJob struct {
lib model.Library
fs storage.MusicFS
cw artwork.CacheWarmer
lastUpdates map[string]model.FolderUpdateInfo
lock sync.Mutex
numFolders atomic.Int64
lib model.Library
fs storage.MusicFS
cw artwork.CacheWarmer
lastUpdates map[string]model.FolderUpdateInfo // Holds last update info for all (DB) folders in this library
targetFolders []string // Specific folders to scan (including all descendants)
lock sync.Mutex
numFolders atomic.Int64
}
func newScanJob(ctx context.Context, ds model.DataStore, cw artwork.CacheWarmer, lib model.Library, fullScan bool) (*scanJob, error) {
lastUpdates, err := ds.Folder(ctx).GetLastUpdates(lib)
func newScanJob(ctx context.Context, ds model.DataStore, cw artwork.CacheWarmer, lib model.Library, fullScan bool, targetFolders []string) (*scanJob, error) {
// Get folder updates, optionally filtered to specific target folders
lastUpdates, err := ds.Folder(ctx).GetFolderUpdateInfo(lib, targetFolders...)
if err != nil {
return nil, fmt.Errorf("getting last updates: %w", err)
}
fileStore, err := storage.For(lib.Path)
if err != nil {
log.Error(ctx, "Error getting storage for library", "library", lib.Name, "path", lib.Path, err)
@@ -88,15 +76,17 @@ func newScanJob(ctx context.Context, ds model.DataStore, cw artwork.CacheWarmer,
log.Error(ctx, "Error getting fs for library", "library", lib.Name, "path", lib.Path, err)
return nil, fmt.Errorf("getting fs for library: %w", err)
}
lib.FullScanInProgress = lib.FullScanInProgress || fullScan
return &scanJob{
lib: lib,
fs: fsys,
cw: cw,
lastUpdates: lastUpdates,
lib: lib,
fs: fsys,
cw: cw,
lastUpdates: lastUpdates,
targetFolders: targetFolders,
}, nil
}
// popLastUpdate retrieves and removes the last update info for the given folder ID.
// This is used to track which folders have been found during the directory tree walk.
func (j *scanJob) popLastUpdate(folderID string) model.FolderUpdateInfo {
j.lock.Lock()
defer j.lock.Unlock()
@@ -106,6 +96,15 @@ func (j *scanJob) popLastUpdate(folderID string) model.FolderUpdateInfo {
return lastUpdate
}
// createFolderEntry creates a new folderEntry for the given path, using the last update info from the job
// to populate the previous update time and hash. It also removes the folder from the job's lastUpdates map.
// This is used to track which folders have been found during the directory tree walk.
func (j *scanJob) createFolderEntry(path string) *folderEntry {
id := model.FolderID(j.lib, path)
info := j.popLastUpdate(id)
return newFolderEntry(j, id, path, info.UpdatedAt, info.Hash)
}
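The retrieve-and-delete behavior of popLastUpdate can be modeled with a minimal sketch. Assumptions: the `job` type and string-valued map below are illustrative simplifications, not the actual scanJob, which stores model.FolderUpdateInfo. The key property is that entries left in `lastUpdates` after the walk correspond to folders no longer found on disk:

```go
package main

import (
	"fmt"
	"sync"
)

// job is a simplified stand-in for scanJob: a shared map guarded by a
// mutex, consumed one entry at a time as folders are visited.
type job struct {
	mu          sync.Mutex
	lastUpdates map[string]string // folderID -> last known hash (simplified)
}

// popLastUpdate returns the stored value and deletes the key; a missing
// key yields the zero value, so a brand-new folder just gets empty info.
func (j *job) popLastUpdate(id string) string {
	j.mu.Lock()
	defer j.mu.Unlock()
	v := j.lastUpdates[id]
	delete(j.lastUpdates, id)
	return v
}

func main() {
	j := &job{lastUpdates: map[string]string{"f1": "hash1", "f2": "hash2"}}
	fmt.Printf("%q\n", j.popLastUpdate("f1")) // "hash1"
	fmt.Printf("%q\n", j.popLastUpdate("f1")) // "" (already popped)
	fmt.Println(len(j.lastUpdates))           // 1 ("f2" was never visited)
}
```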
// phaseFolders represents the first phase of the scanning process, which is responsible
// for scanning all libraries and importing new or updated files. This phase involves
// traversing the directory tree of each library, identifying new or modified media files,
@@ -144,7 +143,8 @@ func (p *phaseFolders) producer() ppl.Producer[*folderEntry] {
if utils.IsCtxDone(p.ctx) {
break
}
outputChan, err := walkDirTree(p.ctx, job)
outputChan, err := walkDirTree(p.ctx, job, job.targetFolders...)
if err != nil {
log.Warn(p.ctx, "Scanner: Error scanning library", "lib", job.lib.Name, err)
}

View file

@@ -69,9 +69,6 @@ func (p *phaseMissingTracks) produce(put func(tracks *missingTracks)) error {
}
}
for _, lib := range p.state.libraries {
if lib.LastScanStartedAt.IsZero() {
continue
}
log.Debug(p.ctx, "Scanner: Checking missing tracks", "libraryId", lib.ID, "libraryName", lib.Name)
cursor, err := p.ds.MediaFile(p.ctx).GetMissingAndMatching(lib.ID)
if err != nil {

View file

@@ -27,14 +27,13 @@ import (
type phaseRefreshAlbums struct {
ds model.DataStore
ctx context.Context
libs model.Libraries
refreshed atomic.Uint32
skipped atomic.Uint32
state *scanState
}
func createPhaseRefreshAlbums(ctx context.Context, state *scanState, ds model.DataStore, libs model.Libraries) *phaseRefreshAlbums {
return &phaseRefreshAlbums{ctx: ctx, ds: ds, libs: libs, state: state}
func createPhaseRefreshAlbums(ctx context.Context, state *scanState, ds model.DataStore) *phaseRefreshAlbums {
return &phaseRefreshAlbums{ctx: ctx, ds: ds, state: state}
}
func (p *phaseRefreshAlbums) description() string {
@@ -47,7 +46,7 @@ func (p *phaseRefreshAlbums) producer() ppl.Producer[*model.Album] {
func (p *phaseRefreshAlbums) produce(put func(album *model.Album)) error {
count := 0
for _, lib := range p.libs {
for _, lib := range p.state.libraries {
cursor, err := p.ds.Album(p.ctx).GetTouchedAlbums(lib.ID)
if err != nil {
return fmt.Errorf("loading touched albums: %w", err)

View file

@@ -32,8 +32,8 @@ var _ = Describe("phaseRefreshAlbums", func() {
{ID: 1, Name: "Library 1"},
{ID: 2, Name: "Library 2"},
}
state = &scanState{}
phase = createPhaseRefreshAlbums(ctx, state, ds, libs)
state = &scanState{libraries: libs}
phase = createPhaseRefreshAlbums(ctx, state, ds)
})
Describe("description", func() {

View file

@@ -3,6 +3,8 @@ package scanner
import (
"context"
"fmt"
"maps"
"slices"
"sync/atomic"
"time"
@@ -15,6 +17,7 @@ import (
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils/run"
"github.com/navidrome/navidrome/utils/slice"
)
type scannerImpl struct {
@@ -28,7 +31,8 @@ type scanState struct {
progress chan<- *ProgressInfo
fullScan bool
changesDetected atomic.Bool
libraries model.Libraries // Store libraries list for consistency across phases
libraries model.Libraries // Store libraries list for consistency across phases
targets map[int][]string // Optional: map[libraryID][]folderPaths for selective scans
}
func (s *scanState) sendProgress(info *ProgressInfo) {
@@ -37,6 +41,10 @@ func (s *scanState) sendProgress(info *ProgressInfo) {
}
}
func (s *scanState) isSelectiveScan() bool {
return len(s.targets) > 0
}
func (s *scanState) sendWarning(msg string) {
s.sendProgress(&ProgressInfo{Warning: msg})
}
@@ -45,7 +53,7 @@ func (s *scanState) sendError(err error) {
s.sendProgress(&ProgressInfo{Error: err.Error()})
}
func (s *scannerImpl) scanAll(ctx context.Context, fullScan bool, progress chan<- *ProgressInfo) {
func (s *scannerImpl) scanFolders(ctx context.Context, fullScan bool, targets []model.ScanTarget, progress chan<- *ProgressInfo) {
startTime := time.Now()
state := scanState{
@@ -59,38 +67,75 @@ func (s *scannerImpl) scanAll(ctx context.Context, fullScan bool, progress chan<
state.changesDetected.Store(true)
}
libs, err := s.ds.Library(ctx).GetAll()
// Get libraries and optionally filter by targets
allLibs, err := s.ds.Library(ctx).GetAll()
if err != nil {
state.sendWarning(fmt.Sprintf("getting libraries: %s", err))
return
}
state.libraries = libs
log.Info(ctx, "Scanner: Starting scan", "fullScan", state.fullScan, "numLibraries", len(libs))
if len(targets) > 0 {
// Selective scan: filter libraries and build targets map
state.targets = make(map[int][]string)
for _, target := range targets {
folderPath := target.FolderPath
if folderPath == "" {
folderPath = "."
}
state.targets[target.LibraryID] = append(state.targets[target.LibraryID], folderPath)
}
// Filter libraries to only those in targets
state.libraries = slice.Filter(allLibs, func(lib model.Library) bool {
return len(state.targets[lib.ID]) > 0
})
log.Info(ctx, "Scanner: Starting selective scan", "fullScan", state.fullScan, "numLibraries", len(state.libraries), "numTargets", len(targets))
} else {
// Full library scan
state.libraries = allLibs
log.Info(ctx, "Scanner: Starting scan", "fullScan", state.fullScan, "numLibraries", len(state.libraries))
}
// Store scan type and start time
scanType := "quick"
if state.fullScan {
scanType = "full"
}
if state.isSelectiveScan() {
scanType += "-selective"
}
_ = s.ds.Property(ctx).Put(consts.LastScanTypeKey, scanType)
_ = s.ds.Property(ctx).Put(consts.LastScanStartTimeKey, startTime.Format(time.RFC3339))
// if there was a full scan in progress, force a full scan
if !state.fullScan {
for _, lib := range libs {
for _, lib := range state.libraries {
if lib.FullScanInProgress {
log.Info(ctx, "Scanner: Interrupted full scan detected", "lib", lib.Name)
state.fullScan = true
_ = s.ds.Property(ctx).Put(consts.LastScanTypeKey, "full")
if state.isSelectiveScan() {
_ = s.ds.Property(ctx).Put(consts.LastScanTypeKey, "full-selective")
} else {
_ = s.ds.Property(ctx).Put(consts.LastScanTypeKey, "full")
}
break
}
}
}
// Prepare libraries for scanning (initialize LastScanStartedAt if needed)
err = s.prepareLibrariesForScan(ctx, &state)
if err != nil {
log.Error(ctx, "Scanner: Error preparing libraries for scan", err)
state.sendError(err)
return
}
err = run.Sequentially(
// Phase 1: Scan all libraries and import new/updated files
runPhase[*folderEntry](ctx, 1, createPhaseFolders(ctx, &state, s.ds, s.cw, libs)),
runPhase[*folderEntry](ctx, 1, createPhaseFolders(ctx, &state, s.ds, s.cw)),
// Phase 2: Process missing files, checking for moves
runPhase[*missingTracks](ctx, 2, createPhaseMissingTracks(ctx, &state, s.ds)),
@@ -98,7 +143,7 @@ func (s *scannerImpl) scanAll(ctx context.Context, fullScan bool, progress chan<
// Phases 3 and 4 can be run in parallel
run.Parallel(
// Phase 3: Refresh all new/changed albums and update artists
runPhase[*model.Album](ctx, 3, createPhaseRefreshAlbums(ctx, &state, s.ds, libs)),
runPhase[*model.Album](ctx, 3, createPhaseRefreshAlbums(ctx, &state, s.ds)),
// Phase 4: Import/update playlists
runPhase[*model.Folder](ctx, 4, createPhasePlaylists(ctx, &state, s.ds, s.pls, s.cw)),
@@ -131,7 +176,53 @@ func (s *scannerImpl) scanAll(ctx context.Context, fullScan bool, progress chan<
state.sendProgress(&ProgressInfo{ChangesDetected: true})
}
log.Info(ctx, "Scanner: Finished scanning all libraries", "duration", time.Since(startTime))
if state.isSelectiveScan() {
log.Info(ctx, "Scanner: Finished scanning selected folders", "duration", time.Since(startTime), "numTargets", len(targets))
} else {
log.Info(ctx, "Scanner: Finished scanning all libraries", "duration", time.Since(startTime))
}
}
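The target-grouping step in scanFolders above — normalizing an empty FolderPath to "." and bucketing paths by library ID — can be sketched standalone. `ScanTarget` mirrors the struct named in the PR description, but `buildTargets` is an illustrative helper, not code lifted from the diff:

```go
package main

import (
	"fmt"
	"sort"
)

// ScanTarget pairs a library with a folder path inside it, as in the PR's
// Scanner API (field names taken from the diff; this copy is for illustration).
type ScanTarget struct {
	LibraryID  int
	FolderPath string
}

// buildTargets groups targets into map[libraryID][]folderPaths, treating an
// empty FolderPath as the library root (".").
func buildTargets(targets []ScanTarget) map[int][]string {
	m := make(map[int][]string)
	for _, t := range targets {
		p := t.FolderPath
		if p == "" {
			p = "."
		}
		m[t.LibraryID] = append(m[t.LibraryID], p)
	}
	return m
}

func main() {
	m := buildTargets([]ScanTarget{
		{LibraryID: 1, FolderPath: "rock"},
		{LibraryID: 1, FolderPath: "jazz"},
		{LibraryID: 2, FolderPath: ""}, // whole library 2
	})
	ids := make([]int, 0, len(m))
	for id := range m {
		ids = append(ids, id)
	}
	sort.Ints(ids) // deterministic print order
	for _, id := range ids {
		fmt.Println(id, m[id])
	}
}
```

Libraries whose ID has no entry in the resulting map are then filtered out of the scan, which is what `isSelectiveScan` keys off.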
// prepareLibrariesForScan initializes the scan for all libraries in the state.
// It calls ScanBegin for libraries that haven't started scanning yet (LastScanStartedAt is zero),
// reloads them to get the updated state, and filters out any libraries that fail to initialize.
func (s *scannerImpl) prepareLibrariesForScan(ctx context.Context, state *scanState) error {
var successfulLibs []model.Library
for _, lib := range state.libraries {
if lib.LastScanStartedAt.IsZero() {
// This is a new scan - mark it as started
err := s.ds.Library(ctx).ScanBegin(lib.ID, state.fullScan)
if err != nil {
log.Error(ctx, "Scanner: Error marking scan start", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
// Reload library to get updated state (timestamps, etc.)
reloadedLib, err := s.ds.Library(ctx).Get(lib.ID)
if err != nil {
log.Error(ctx, "Scanner: Error reloading library", "lib", lib.Name, err)
state.sendWarning(err.Error())
continue
}
lib = *reloadedLib
} else {
// This is a resumed scan
log.Debug(ctx, "Scanner: Resuming previous scan", "lib", lib.Name,
"lastScanStartedAt", lib.LastScanStartedAt, "fullScan", lib.FullScanInProgress)
}
successfulLibs = append(successfulLibs, lib)
}
if len(successfulLibs) == 0 {
return fmt.Errorf("no libraries available for scanning")
}
// Update state with only successfully initialized libraries
state.libraries = successfulLibs
return nil
}
func (s *scannerImpl) runGC(ctx context.Context, state *scanState) func() error {
@@ -140,7 +231,15 @@ func (s *scannerImpl) runGC(ctx context.Context, state *scanState) func() error
return s.ds.WithTx(func(tx model.DataStore) error {
if state.changesDetected.Load() {
start := time.Now()
err := tx.GC(ctx)
// For selective scans, extract library IDs to scope GC operations
var libraryIDs []int
if state.isSelectiveScan() {
libraryIDs = slices.Collect(maps.Keys(state.targets))
log.Debug(ctx, "Scanner: Running selective GC", "libraryIDs", libraryIDs)
}
err := tx.GC(ctx, libraryIDs...)
if err != nil {
log.Error(ctx, "Scanner: Error running GC", err)
return fmt.Errorf("running GC: %w", err)

View file

@@ -32,7 +32,7 @@ var _ = Describe("Scanner - Multi-Library", Ordered, func() {
var ctx context.Context
var lib1, lib2 model.Library
var ds *tests.MockDataStore
var s scanner.Scanner
var s model.Scanner
createFS := func(path string, files fstest.MapFS) storagetest.FakeFS {
fs := storagetest.FakeFS{}

View file

@@ -0,0 +1,293 @@
package scanner_test
import (
"context"
"path/filepath"
"testing/fstest"
"github.com/Masterminds/squirrel"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/core"
"github.com/navidrome/navidrome/core/artwork"
"github.com/navidrome/navidrome/core/metrics"
"github.com/navidrome/navidrome/core/storage/storagetest"
"github.com/navidrome/navidrome/db"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/model/request"
"github.com/navidrome/navidrome/persistence"
"github.com/navidrome/navidrome/scanner"
"github.com/navidrome/navidrome/server/events"
"github.com/navidrome/navidrome/tests"
"github.com/navidrome/navidrome/utils/slice"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("ScanFolders", Ordered, func() {
var ctx context.Context
var lib model.Library
var ds model.DataStore
var s model.Scanner
var fsys storagetest.FakeFS
BeforeAll(func() {
ctx = request.WithUser(GinkgoT().Context(), model.User{ID: "123", IsAdmin: true})
tmpDir := GinkgoT().TempDir()
conf.Server.DbPath = filepath.Join(tmpDir, "test-selective-scan.db?_journal_mode=WAL")
log.Warn("Using DB at " + conf.Server.DbPath)
db.Db().SetMaxOpenConns(1)
})
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.MusicFolder = "fake:///music"
conf.Server.DevExternalScanner = false
db.Init(ctx)
DeferCleanup(func() {
Expect(tests.ClearDB()).To(Succeed())
})
ds = persistence.New(db.Db())
// Create the admin user in the database to match the context
adminUser := model.User{
ID: "123",
UserName: "admin",
Name: "Admin User",
IsAdmin: true,
NewPassword: "password",
}
Expect(ds.User(ctx).Put(&adminUser)).To(Succeed())
s = scanner.New(ctx, ds, artwork.NoopCacheWarmer(), events.NoopBroker(),
core.NewPlaylists(ds), metrics.NewNoopInstance())
lib = model.Library{ID: 1, Name: "Fake Library", Path: "fake:///music"}
Expect(ds.Library(ctx).Put(&lib)).To(Succeed())
// Initialize fake filesystem
fsys = storagetest.FakeFS{}
storagetest.Register("fake", &fsys)
})
Describe("Adding tracks to the library", func() {
It("scans specified folders recursively including all subdirectories", func() {
rock := template(_t{"albumartist": "Rock Artist", "album": "Rock Album"})
jazz := template(_t{"albumartist": "Jazz Artist", "album": "Jazz Album"})
pop := template(_t{"albumartist": "Pop Artist", "album": "Pop Album"})
createFS(fstest.MapFS{
"rock/track1.mp3": rock(track(1, "Rock Track 1")),
"rock/track2.mp3": rock(track(2, "Rock Track 2")),
"rock/subdir/track3.mp3": rock(track(3, "Rock Track 3")),
"jazz/track4.mp3": jazz(track(1, "Jazz Track 1")),
"jazz/subdir/track5.mp3": jazz(track(2, "Jazz Track 2")),
"pop/track6.mp3": pop(track(1, "Pop Track 1")),
})
// Scan only the "rock" and "jazz" folders (including their subdirectories)
targets := []model.ScanTarget{
{LibraryID: lib.ID, FolderPath: "rock"},
{LibraryID: lib.ID, FolderPath: "jazz"},
}
warnings, err := s.ScanFolders(ctx, false, targets)
Expect(err).ToNot(HaveOccurred())
Expect(warnings).To(BeEmpty())
// Verify all tracks in rock and jazz folders (including subdirectories) were imported
allFiles, err := ds.MediaFile(ctx).GetAll()
Expect(err).ToNot(HaveOccurred())
// Should have 5 tracks (all rock and jazz tracks including subdirectories)
Expect(allFiles).To(HaveLen(5))
// Get the file paths
paths := slice.Map(allFiles, func(mf model.MediaFile) string {
return filepath.ToSlash(mf.Path)
})
// Verify the correct files were scanned (including subdirectories)
Expect(paths).To(ContainElements(
"rock/track1.mp3",
"rock/track2.mp3",
"rock/subdir/track3.mp3",
"jazz/track4.mp3",
"jazz/subdir/track5.mp3",
))
// Verify files in the pop folder were NOT scanned
Expect(paths).ToNot(ContainElement("pop/track6.mp3"))
})
})
Describe("Deleting folders", func() {
Context("when a child folder is deleted", func() {
var (
revolver, help func(...map[string]any) *fstest.MapFile
artistFolderID string
album1FolderID string
album2FolderID string
album1TrackIDs []string
album2TrackIDs []string
)
BeforeEach(func() {
// Setup template functions for creating test files
revolver = storagetest.Template(_t{"albumartist": "The Beatles", "album": "Revolver", "year": 1966})
help = storagetest.Template(_t{"albumartist": "The Beatles", "album": "Help!", "year": 1965})
// Initial filesystem with nested folders
fsys.SetFiles(fstest.MapFS{
"The Beatles/Revolver/01 - Taxman.mp3": revolver(storagetest.Track(1, "Taxman")),
"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(storagetest.Track(2, "Eleanor Rigby")),
"The Beatles/Help!/01 - Help!.mp3": help(storagetest.Track(1, "Help!")),
"The Beatles/Help!/02 - The Night Before.mp3": help(storagetest.Track(2, "The Night Before")),
})
// First scan - import everything
_, err := s.ScanAll(ctx, true)
Expect(err).ToNot(HaveOccurred())
// Verify initial state - all folders exist
folders, err := ds.Folder(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"library_id": lib.ID}})
Expect(err).ToNot(HaveOccurred())
Expect(folders).To(HaveLen(4)) // root, Artist, Album1, Album2
// Store folder IDs for later verification
for _, f := range folders {
switch f.Name {
case "The Beatles":
artistFolderID = f.ID
case "Revolver":
album1FolderID = f.ID
case "Help!":
album2FolderID = f.ID
}
}
// Verify all tracks exist
allTracks, err := ds.MediaFile(ctx).GetAll()
Expect(err).ToNot(HaveOccurred())
Expect(allTracks).To(HaveLen(4))
// Store track IDs for later verification
for _, t := range allTracks {
if t.Album == "Revolver" {
album1TrackIDs = append(album1TrackIDs, t.ID)
} else if t.Album == "Help!" {
album2TrackIDs = append(album2TrackIDs, t.ID)
}
}
// Verify no tracks are missing initially
for _, t := range allTracks {
Expect(t.Missing).To(BeFalse())
}
})
It("should mark child folder and its tracks as missing when parent is scanned", func() {
// Delete the child folder (Help!) from the filesystem
fsys.SetFiles(fstest.MapFS{
"The Beatles/Revolver/01 - Taxman.mp3": revolver(storagetest.Track(1, "Taxman")),
"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(storagetest.Track(2, "Eleanor Rigby")),
// "The Beatles/Help!" folder and its contents are DELETED
})
// Run selective scan on the parent folder (Artist)
// This simulates what the watcher does when a child folder is deleted
_, err := s.ScanFolders(ctx, false, []model.ScanTarget{
{LibraryID: lib.ID, FolderPath: "The Beatles"},
})
Expect(err).ToNot(HaveOccurred())
// Verify the deleted child folder is now marked as missing
deletedFolder, err := ds.Folder(ctx).Get(album2FolderID)
Expect(err).ToNot(HaveOccurred())
Expect(deletedFolder.Missing).To(BeTrue(), "Deleted child folder should be marked as missing")
// Verify the deleted folder's tracks are marked as missing
for _, trackID := range album2TrackIDs {
track, err := ds.MediaFile(ctx).Get(trackID)
Expect(err).ToNot(HaveOccurred())
Expect(track.Missing).To(BeTrue(), "Track in deleted folder should be marked as missing")
}
// Verify the parent folder is still present and not marked as missing
parentFolder, err := ds.Folder(ctx).Get(artistFolderID)
Expect(err).ToNot(HaveOccurred())
Expect(parentFolder.Missing).To(BeFalse(), "Parent folder should not be marked as missing")
// Verify the sibling folder and its tracks are still present and not missing
siblingFolder, err := ds.Folder(ctx).Get(album1FolderID)
Expect(err).ToNot(HaveOccurred())
Expect(siblingFolder.Missing).To(BeFalse(), "Sibling folder should not be marked as missing")
for _, trackID := range album1TrackIDs {
track, err := ds.MediaFile(ctx).Get(trackID)
Expect(err).ToNot(HaveOccurred())
Expect(track.Missing).To(BeFalse(), "Track in sibling folder should not be marked as missing")
}
})
It("should mark deeply nested child folders as missing", func() {
// Add a deeply nested folder structure
fsys.SetFiles(fstest.MapFS{
"The Beatles/Revolver/01 - Taxman.mp3": revolver(storagetest.Track(1, "Taxman")),
"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(storagetest.Track(2, "Eleanor Rigby")),
"The Beatles/Help!/01 - Help!.mp3": help(storagetest.Track(1, "Help!")),
"The Beatles/Help!/02 - The Night Before.mp3": help(storagetest.Track(2, "The Night Before")),
"The Beatles/Help!/Bonus/01 - Bonus Track.mp3": help(storagetest.Track(99, "Bonus Track")),
"The Beatles/Help!/Bonus/Nested/01 - Deep Track.mp3": help(storagetest.Track(100, "Deep Track")),
})
// Rescan to import the new nested structure
_, err := s.ScanAll(ctx, true)
Expect(err).ToNot(HaveOccurred())
// Verify nested folders were created
allFolders, err := ds.Folder(ctx).GetAll(model.QueryOptions{Filters: squirrel.Eq{"library_id": lib.ID}})
Expect(err).ToNot(HaveOccurred())
Expect(len(allFolders)).To(BeNumerically(">", 4), "Should have more folders with nested structure")
// Now delete the entire Help! folder including nested children
fsys.SetFiles(fstest.MapFS{
"The Beatles/Revolver/01 - Taxman.mp3": revolver(storagetest.Track(1, "Taxman")),
"The Beatles/Revolver/02 - Eleanor Rigby.mp3": revolver(storagetest.Track(2, "Eleanor Rigby")),
// All Help! subfolders are deleted
})
// Run selective scan on parent
_, err = s.ScanFolders(ctx, false, []model.ScanTarget{
{LibraryID: lib.ID, FolderPath: "The Beatles"},
})
Expect(err).ToNot(HaveOccurred())
// Verify all Help! folders (including nested ones) are marked as missing
missingFolders, err := ds.Folder(ctx).GetAll(model.QueryOptions{
Filters: squirrel.And{
squirrel.Eq{"library_id": lib.ID},
squirrel.Eq{"missing": true},
},
})
Expect(err).ToNot(HaveOccurred())
Expect(len(missingFolders)).To(BeNumerically(">", 0), "At least one folder should be marked as missing")
// Verify all tracks in deleted folders are marked as missing
allTracks, err := ds.MediaFile(ctx).GetAll()
Expect(err).ToNot(HaveOccurred())
Expect(allTracks).To(HaveLen(6))
for _, track := range allTracks {
if track.Album == "Help!" {
Expect(track.Missing).To(BeTrue(), "All tracks in deleted Help! folder should be marked as missing")
} else if track.Album == "Revolver" {
Expect(track.Missing).To(BeFalse(), "Tracks in Revolver folder should not be marked as missing")
}
}
})
})
})
})

View file

@@ -34,19 +34,19 @@ type _t = map[string]any
var template = storagetest.Template
var track = storagetest.Track
func createFS(files fstest.MapFS) storagetest.FakeFS {
fs := storagetest.FakeFS{}
fs.SetFiles(files)
storagetest.Register("fake", &fs)
return fs
}
var _ = Describe("Scanner", Ordered, func() {
var ctx context.Context
var lib model.Library
var ds *tests.MockDataStore
var mfRepo *mockMediaFileRepo
var s scanner.Scanner
createFS := func(files fstest.MapFS) storagetest.FakeFS {
fs := storagetest.FakeFS{}
fs.SetFiles(files)
storagetest.Register("fake", &fs)
return fs
}
var s model.Scanner
BeforeAll(func() {
ctx = request.WithUser(GinkgoT().Context(), model.User{ID: "123", IsAdmin: true})
@@ -478,6 +478,56 @@ var _ = Describe("Scanner", Ordered, func() {
Expect(mf.Missing).To(BeFalse())
})
It("marks tracks as missing when scanning a deleted folder with ScanFolders", func() {
By("Adding a third track to Revolver to have more test data")
fsys.Add("The Beatles/Revolver/03 - I'm Only Sleeping.mp3", revolver(track(3, "I'm Only Sleeping")))
Expect(runScanner(ctx, false)).To(Succeed())
By("Verifying initial state has 5 tracks")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(5)))
By("Removing the entire Revolver folder from filesystem")
fsys.Remove("The Beatles/Revolver/01 - Taxman.mp3")
fsys.Remove("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
fsys.Remove("The Beatles/Revolver/03 - I'm Only Sleeping.mp3")
By("Scanning the parent folder (simulating watcher behavior)")
targets := []model.ScanTarget{
{LibraryID: lib.ID, FolderPath: "The Beatles"},
}
_, err := s.ScanFolders(ctx, false, targets)
Expect(err).To(Succeed())
By("Checking all Revolver tracks are marked as missing")
mf, err := findByPath("The Beatles/Revolver/01 - Taxman.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())
mf, err = findByPath("The Beatles/Revolver/02 - Eleanor Rigby.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())
mf, err = findByPath("The Beatles/Revolver/03 - I'm Only Sleeping.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeTrue())
By("Checking the Help! tracks are not affected")
mf, err = findByPath("The Beatles/Help!/01 - Help!.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeFalse())
mf, err = findByPath("The Beatles/Help!/02 - The Night Before.mp3")
Expect(err).ToNot(HaveOccurred())
Expect(mf.Missing).To(BeFalse())
By("Verifying only 2 non-missing tracks remain (Help! tracks)")
Expect(ds.MediaFile(ctx).CountAll(model.QueryOptions{
Filters: squirrel.Eq{"missing": false},
})).To(Equal(int64(2)))
})
It("does not override artist fields when importing an undertagged file", func() {
By("Making sure artist in the DB contains MBID and sort name")
aa, err := ds.Artist(ctx).GetAll(model.QueryOptions{

View file

@@ -1,7 +1,6 @@
package scanner
import (
"bufio"
"context"
"io/fs"
"maps"
@@ -11,37 +10,69 @@ import (
"strings"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/consts"
"github.com/navidrome/navidrome/log"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/utils"
ignore "github.com/sabhiram/go-gitignore"
)
func walkDirTree(ctx context.Context, job *scanJob) (<-chan *folderEntry, error) {
// walkDirTree recursively walks the directory tree starting from the given targetFolders.
// If no targetFolders are provided, it starts from the root folder (".").
// It returns a channel of folderEntry pointers representing each folder found.
func walkDirTree(ctx context.Context, job *scanJob, targetFolders ...string) (<-chan *folderEntry, error) {
results := make(chan *folderEntry)
folders := targetFolders
if len(targetFolders) == 0 {
// No specific folders provided, scan the root folder
folders = []string{"."}
}
go func() {
defer close(results)
err := walkFolder(ctx, job, ".", nil, results)
if err != nil {
log.Error(ctx, "Scanner: There were errors reading directories from filesystem", "path", job.lib.Path, err)
return
for _, folderPath := range folders {
if utils.IsCtxDone(ctx) {
return
}
// Check if target folder exists before walking it
// If it doesn't exist (e.g., deleted between watcher detection and scan execution),
// skip it so it remains in job.lastUpdates and gets handled in following steps
_, err := fs.Stat(job.fs, folderPath)
if err != nil {
log.Warn(ctx, "Scanner: Target folder does not exist.", "path", folderPath, err)
continue
}
// Create checker and push patterns from root to this folder
checker := newIgnoreChecker(job.fs)
err = checker.PushAllParents(ctx, folderPath)
if err != nil {
log.Error(ctx, "Scanner: Error pushing ignore patterns for target folder", "path", folderPath, err)
continue
}
// Recursively walk this folder and all its children
err = walkFolder(ctx, job, folderPath, checker, results)
if err != nil {
log.Error(ctx, "Scanner: Error walking target folder", "path", folderPath, err)
continue
}
}
log.Debug(ctx, "Scanner: Finished reading folders", "lib", job.lib.Name, "path", job.lib.Path, "numFolders", job.numFolders.Load())
log.Debug(ctx, "Scanner: Finished reading target folders", "lib", job.lib.Name, "path", job.lib.Path, "numFolders", job.numFolders.Load())
}()
return results, nil
}
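Before walkDirTree descends into a target folder, PushAllParents must load .ndignore patterns for every ancestor from the library root down, so that patterns defined above the target still apply inside it. A sketch of just that ancestor-chain derivation — `parentChain` is an illustrative helper, not part of the diff:

```go
package main

import (
	"fmt"
	"path"
)

// parentChain lists the folders from the library root (".") down to target,
// in the order their .ndignore files would need to be pushed so ancestor
// patterns take effect before the target folder is walked.
func parentChain(target string) []string {
	target = path.Clean(target) // "./folder1" -> "folder1", "" -> "."
	if target == "." {
		return []string{"."}
	}
	chain := []string{target}
	for p := path.Dir(target); ; p = path.Dir(p) {
		chain = append([]string{p}, chain...) // prepend ancestors
		if p == "." {
			break
		}
	}
	return chain
}

func main() {
	fmt.Println(parentChain("folder1/folder2/folder3"))
	fmt.Println(parentChain("./folder1"))
	fmt.Println(parentChain("."))
}
```

The lengths line up with the test table earlier in this diff: a three-level path yields four stack levels (root plus three folders), and "./folder1" yields two.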
func walkFolder(ctx context.Context, job *scanJob, currentFolder string, ignorePatterns []string, results chan<- *folderEntry) error {
ignorePatterns = loadIgnoredPatterns(ctx, job.fs, currentFolder, ignorePatterns)
func walkFolder(ctx context.Context, job *scanJob, currentFolder string, checker *IgnoreChecker, results chan<- *folderEntry) error {
// Push patterns for this folder onto the stack
_ = checker.Push(ctx, currentFolder)
defer checker.Pop() // Pop patterns when leaving this folder
folder, children, err := loadDir(ctx, job, currentFolder, ignorePatterns)
folder, children, err := loadDir(ctx, job, currentFolder, checker)
if err != nil {
log.Warn(ctx, "Scanner: Error loading dir. Skipping", "path", currentFolder, err)
return nil
}
for _, c := range children {
err := walkFolder(ctx, job, c, checker, results)
if err != nil {
return err
}
@ -59,50 +90,17 @@ func walkFolder(ctx context.Context, job *scanJob, currentFolder string, ignoreP
return nil
}
func loadIgnoredPatterns(ctx context.Context, fsys fs.FS, currentFolder string, currentPatterns []string) []string {
ignoreFilePath := path.Join(currentFolder, consts.ScanIgnoreFile)
var newPatterns []string
if _, err := fs.Stat(fsys, ignoreFilePath); err == nil {
// Read and parse the .ndignore file
ignoreFile, err := fsys.Open(ignoreFilePath)
if err != nil {
log.Warn(ctx, "Scanner: Error opening .ndignore file", "path", ignoreFilePath, err)
// Continue with previous patterns
} else {
defer ignoreFile.Close()
scanner := bufio.NewScanner(ignoreFile)
for scanner.Scan() {
line := scanner.Text()
if line == "" || strings.HasPrefix(line, "#") {
continue // Skip empty lines and comments
}
newPatterns = append(newPatterns, line)
}
if err := scanner.Err(); err != nil {
log.Warn(ctx, "Scanner: Error reading .ndignore file", "path", ignoreFilePath, err)
}
}
// If the .ndignore file is empty, mimic the current behavior and ignore everything
if len(newPatterns) == 0 {
log.Trace(ctx, "Scanner: .ndignore file is empty, ignoring everything", "path", currentFolder)
newPatterns = []string{"**/*"}
} else {
log.Trace(ctx, "Scanner: .ndignore file found ", "path", ignoreFilePath, "patterns", newPatterns)
}
}
// Combine the patterns from the .ndignore file with the ones passed as argument
combinedPatterns := append([]string{}, currentPatterns...)
return append(combinedPatterns, newPatterns...)
}
func loadDir(ctx context.Context, job *scanJob, dirPath string, checker *IgnoreChecker) (folder *folderEntry, children []string, err error) {
// Check if directory exists before creating the folder entry
// This is important to avoid removing the folder from lastUpdates if it doesn't exist
dirInfo, err := fs.Stat(job.fs, dirPath)
if err != nil {
log.Warn(ctx, "Scanner: Error stating dir", "path", dirPath, err)
return nil, nil, err
}
// Now that we know the folder exists, create the entry (which removes it from lastUpdates)
folder = job.createFolderEntry(dirPath)
folder.modTime = dirInfo.ModTime()
dir, err := job.fs.Open(dirPath)
@ -117,12 +115,11 @@ func loadDir(ctx context.Context, job *scanJob, dirPath string, ignorePatterns [
return folder, children, err
}
entries := fullReadDir(ctx, dirFile)
children = make([]string, 0, len(entries))
for _, entry := range entries {
entryPath := path.Join(dirPath, entry.Name())
if checker.ShouldIgnore(ctx, entryPath) {
log.Trace(ctx, "Scanner: Ignoring entry", "path", entryPath)
continue
}
@ -234,6 +231,7 @@ func isDirReadable(ctx context.Context, fsys fs.FS, dirPath string) bool {
var ignoredDirs = []string{
"$RECYCLE.BIN",
"#snapshot",
"@Recycle",
"@Recently-Snapshot",
".streams",
"lost+found",
@ -254,11 +252,3 @@ func isDirIgnored(name string) bool {
func isEntryIgnored(name string) bool {
return strings.HasPrefix(name, ".") && !strings.HasPrefix(name, "..")
}
func isScanIgnored(ctx context.Context, matcher *ignore.GitIgnore, entryPath string) bool {
matches := matcher.MatchesPath(entryPath)
if matches {
log.Trace(ctx, "Scanner: Ignoring entry matching .ndignore: ", "path", entryPath)
}
return matches
}
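The removed loadIgnoredPatterns/isScanIgnored pair above rebuilt and recompiled the accumulated pattern list for every directory; the new walkFolder instead pushes each folder's .ndignore patterns onto an IgnoreChecker stack on entry and pops them on exit, so a subtree inherits all ancestor patterns only while it is being walked. A minimal, dependency-free sketch of that push/pop semantics (ignoreStack and its path.Match-based matching are illustrative simplifications; the real checker uses gitignore-style matching):

```go
package main

import (
	"fmt"
	"path"
)

// ignoreStack is a simplified stand-in for the PR's IgnoreChecker: each
// folder's patterns are pushed when the walk enters it and popped when the
// walk leaves, so deeper folders inherit their ancestors' patterns.
// Real .ndignore files use gitignore semantics; path.Match on the base name
// is used here only to keep the sketch dependency-free.
type ignoreStack struct {
	frames [][]string
}

func (s *ignoreStack) Push(patterns []string) { s.frames = append(s.frames, patterns) }

func (s *ignoreStack) Pop() { s.frames = s.frames[:len(s.frames)-1] }

func (s *ignoreStack) ShouldIgnore(entryPath string) bool {
	for _, frame := range s.frames {
		for _, p := range frame {
			if ok, _ := path.Match(p, path.Base(entryPath)); ok {
				return true
			}
		}
	}
	return false
}

func main() {
	var s ignoreStack
	s.Push([]string{"_TEMP"}) // patterns from the root .ndignore
	fmt.Println(s.ShouldIgnore("root/_TEMP")) // true
	s.Push([]string{"*.bak"}) // patterns from root/a/.ndignore
	fmt.Println(s.ShouldIgnore("root/a/x.bak")) // true
	s.Pop() // leaving root/a: its patterns no longer apply
	fmt.Println(s.ShouldIgnore("root/b/x.bak")) // false
}
```

Pairing Push with a deferred Pop, as walkFolder does, keeps the stack balanced even when a subtree walk exits early.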


@ -25,82 +25,196 @@ var _ = Describe("walk_dir_tree", func() {
ctx context.Context
)
Context("full library", func() {
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
ctx = GinkgoT().Context()
fsys = &mockMusicFS{
FS: fstest.MapFS{
"root/a/.ndignore": {Data: []byte("ignored/*")},
"root/a/f1.mp3": {},
"root/a/f2.mp3": {},
"root/a/ignored/bad.mp3": {},
"root/b/cover.jpg": {},
"root/c/f3": {},
"root/d": {},
"root/d/.ndignore": {},
"root/d/f1.mp3": {},
"root/d/f2.mp3": {},
"root/d/f3.mp3": {},
"root/e/original/f1.mp3": {},
"root/e/symlink": {Mode: fs.ModeSymlink, Data: []byte("original")},
},
}
job = &scanJob{
fs: fsys,
lib: model.Library{Path: "/music"},
}
})
// Helper function to call walkDirTree and collect folders from the results channel
getFolders := func() map[string]*folderEntry {
results, err := walkDirTree(ctx, job)
Expect(err).ToNot(HaveOccurred())
folders := map[string]*folderEntry{}
g := errgroup.Group{}
g.Go(func() error {
for folder := range results {
folders[folder.path] = folder
}
return nil
})
_ = g.Wait()
return folders
}
DescribeTable("symlink handling",
func(followSymlinks bool, expectedFolderCount int) {
conf.Server.Scanner.FollowSymlinks = followSymlinks
folders := getFolders()
Expect(folders).To(HaveLen(expectedFolderCount + 2)) // +2 for `.` and `root`
// Basic folder structure checks
Expect(folders["root/a"].audioFiles).To(SatisfyAll(
HaveLen(2),
HaveKey("f1.mp3"),
HaveKey("f2.mp3"),
))
Expect(folders["root/a"].imageFiles).To(BeEmpty())
Expect(folders["root/b"].audioFiles).To(BeEmpty())
Expect(folders["root/b"].imageFiles).To(SatisfyAll(
HaveLen(1),
HaveKey("cover.jpg"),
))
Expect(folders["root/c"].audioFiles).To(BeEmpty())
Expect(folders["root/c"].imageFiles).To(BeEmpty())
Expect(folders).ToNot(HaveKey("root/d"))
// Symlink specific checks
if followSymlinks {
Expect(folders["root/e/symlink"].audioFiles).To(HaveLen(1))
} else {
Expect(folders).ToNot(HaveKey("root/e/symlink"))
}
},
Entry("with symlinks enabled", true, 7),
Entry("with symlinks disabled", false, 6),
)
})
Context("with target folders", func() {
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
ctx = GinkgoT().Context()
fsys = &mockMusicFS{
FS: fstest.MapFS{
"Artist/Album1/track1.mp3": {},
"Artist/Album1/track2.mp3": {},
"Artist/Album2/track1.mp3": {},
"Artist/Album2/track2.mp3": {},
"Artist/Album2/Sub/track3.mp3": {},
"OtherArtist/Album3/track1.mp3": {},
},
}
job = &scanJob{
fs: fsys,
lib: model.Library{Path: "/music"},
}
})
It("should recursively walk all subdirectories of target folders", func() {
results, err := walkDirTree(ctx, job, "Artist")
Expect(err).ToNot(HaveOccurred())
folders := map[string]*folderEntry{}
g := errgroup.Group{}
g.Go(func() error {
for folder := range results {
folders[folder.path] = folder
}
return nil
})
_ = g.Wait()
// Should include the target folder and all its descendants
Expect(folders).To(SatisfyAll(
HaveKey("Artist"),
HaveKey("Artist/Album1"),
HaveKey("Artist/Album2"),
HaveKey("Artist/Album2/Sub"),
))
// Should not include folders outside the target
Expect(folders).ToNot(HaveKey("OtherArtist"))
Expect(folders).ToNot(HaveKey("OtherArtist/Album3"))
// Verify audio files are present
Expect(folders["Artist/Album1"].audioFiles).To(HaveLen(2))
Expect(folders["Artist/Album2"].audioFiles).To(HaveLen(2))
Expect(folders["Artist/Album2/Sub"].audioFiles).To(HaveLen(1))
})
It("should handle multiple target folders", func() {
results, err := walkDirTree(ctx, job, "Artist/Album1", "OtherArtist")
Expect(err).ToNot(HaveOccurred())
folders := map[string]*folderEntry{}
g := errgroup.Group{}
g.Go(func() error {
for folder := range results {
folders[folder.path] = folder
}
return nil
})
_ = g.Wait()
// Should include both target folders and their descendants
Expect(folders).To(SatisfyAll(
HaveKey("Artist/Album1"),
HaveKey("OtherArtist"),
HaveKey("OtherArtist/Album3"),
))
// Should not include other folders
Expect(folders).ToNot(HaveKey("Artist"))
Expect(folders).ToNot(HaveKey("Artist/Album2"))
Expect(folders).ToNot(HaveKey("Artist/Album2/Sub"))
})
It("should skip non-existent target folders and preserve them in lastUpdates", func() {
// Setup job with lastUpdates for both existing and non-existing folders
job.lastUpdates = map[string]model.FolderUpdateInfo{
model.FolderID(job.lib, "Artist/Album1"): {},
model.FolderID(job.lib, "NonExistent/DeletedFolder"): {},
model.FolderID(job.lib, "OtherArtist/Album3"): {},
}
// Try to scan existing folder and non-existing folder
results, err := walkDirTree(ctx, job, "Artist/Album1", "NonExistent/DeletedFolder")
Expect(err).ToNot(HaveOccurred())
// Collect results
folders := map[string]struct{}{}
for folder := range results {
folders[folder.path] = struct{}{}
}
// Should only include the existing folder
Expect(folders).To(HaveKey("Artist/Album1"))
Expect(folders).ToNot(HaveKey("NonExistent/DeletedFolder"))
// The non-existent folder should still be in lastUpdates (not removed by popLastUpdate)
Expect(job.lastUpdates).To(HaveKey(model.FolderID(job.lib, "NonExistent/DeletedFolder")))
// The existing folder should have been removed from lastUpdates
Expect(job.lastUpdates).ToNot(HaveKey(model.FolderID(job.lib, "Artist/Album1")))
// Folders not in targets should remain in lastUpdates
Expect(job.lastUpdates).To(HaveKey(model.FolderID(job.lib, "OtherArtist/Album3")))
})
})
})
Describe("helper functions", func() {


@ -24,9 +24,9 @@ type Watcher interface {
type watcher struct {
mainCtx context.Context
ds model.DataStore
scanner model.Scanner
triggerWait time.Duration
watcherNotify chan scanNotification
libraryWatchers map[int]*libraryWatcherInstance
mu sync.RWMutex
}
@ -36,14 +36,19 @@ type libraryWatcherInstance struct {
cancel context.CancelFunc
}
type scanNotification struct {
Library *model.Library
FolderPath string
}
// GetWatcher returns the watcher singleton
func GetWatcher(ds model.DataStore, s model.Scanner) Watcher {
return singleton.GetInstance(func() *watcher {
return &watcher{
ds: ds,
scanner: s,
triggerWait: conf.Server.Scanner.WatcherWait,
watcherNotify: make(chan scanNotification, 1),
libraryWatchers: make(map[int]*libraryWatcherInstance),
}
})
@ -68,11 +73,11 @@ func (w *watcher) Run(ctx context.Context) error {
// Main scan triggering loop
trigger := time.NewTimer(w.triggerWait)
trigger.Stop()
targets := make(map[model.ScanTarget]struct{})
for {
select {
case <-trigger.C:
log.Info("Watcher: Triggering scan for changed folders", "numTargets", len(targets))
status, err := w.scanner.Status(ctx)
if err != nil {
log.Error(ctx, "Watcher: Error retrieving Scanner status", err)
@ -83,9 +88,23 @@ func (w *watcher) Run(ctx context.Context) error {
trigger.Reset(w.triggerWait * 3)
continue
}
// Convert targets map to slice
targetSlice := make([]model.ScanTarget, 0, len(targets))
for target := range targets {
targetSlice = append(targetSlice, target)
}
// Clear targets for next batch
targets = make(map[model.ScanTarget]struct{})
go func() {
var err error
if conf.Server.DevSelectiveWatcher {
_, err = w.scanner.ScanFolders(ctx, false, targetSlice)
} else {
_, err = w.scanner.ScanAll(ctx, false)
}
if err != nil {
log.Error(ctx, "Watcher: Error scanning", err)
} else {
@ -102,13 +121,20 @@ func (w *watcher) Run(ctx context.Context) error {
w.libraryWatchers = make(map[int]*libraryWatcherInstance)
w.mu.Unlock()
return nil
case notification := <-w.watcherNotify:
lib := notification.Library
folderPath := notification.FolderPath
// If already scheduled for scan, skip
target := model.ScanTarget{LibraryID: lib.ID, FolderPath: folderPath}
if _, exists := targets[target]; exists {
continue
}
targets[target] = struct{}{}
trigger.Reset(w.triggerWait)
log.Debug(ctx, "Watcher: Detected changes. Waiting for more changes before triggering scan",
"libraryID", lib.ID, "name", lib.Name, "path", lib.Path, "folderPath", folderPath)
}
}
}
@ -199,13 +225,18 @@ func (w *watcher) watchLibrary(ctx context.Context, lib *model.Library) error {
log.Info(ctx, "Watcher started for library", "libraryID", lib.ID, "name", lib.Name, "path", lib.Path, "absoluteLibPath", absLibPath)
return w.processLibraryEvents(ctx, lib, fsys, c, absLibPath)
}
// processLibraryEvents processes filesystem events for a library.
func (w *watcher) processLibraryEvents(ctx context.Context, lib *model.Library, fsys storage.MusicFS, events <-chan string, absLibPath string) error {
for {
select {
case <-ctx.Done():
log.Debug(ctx, "Watcher stopped due to context cancellation", "libraryID", lib.ID, "name", lib.Name)
return nil
case path := <-events:
path, err := filepath.Rel(absLibPath, path)
if err != nil {
log.Error(ctx, "Error getting relative path", "libraryID", lib.ID, "absolutePath", absLibPath, "path", path, err)
continue
@ -215,12 +246,27 @@ func (w *watcher) watchLibrary(ctx context.Context, lib *model.Library) error {
log.Trace(ctx, "Ignoring change", "libraryID", lib.ID, "path", path)
continue
}
log.Trace(ctx, "Detected change", "libraryID", lib.ID, "path", path, "absoluteLibPath", absLibPath)
// Check if the original path (before resolution) matches .ndignore patterns
// This is crucial for deleted folders - if a deleted folder matches .ndignore,
// we should ignore it BEFORE resolveFolderPath walks up to the parent
if w.shouldIgnoreFolderPath(ctx, fsys, path) {
log.Debug(ctx, "Ignoring change matching .ndignore pattern", "libraryID", lib.ID, "path", path)
continue
}
// Find the folder to scan - validate path exists as directory, walk up if needed
folderPath := resolveFolderPath(fsys, path)
// Double-check after resolution in case the resolved path is different and also matches patterns
if folderPath != path && w.shouldIgnoreFolderPath(ctx, fsys, folderPath) {
log.Trace(ctx, "Ignoring change in folder matching .ndignore pattern", "libraryID", lib.ID, "folderPath", folderPath)
continue
}
// Notify the main watcher of changes
select {
case w.watcherNotify <- scanNotification{Library: lib, FolderPath: folderPath}:
default:
// Channel is full, notification already pending
}
@ -228,6 +274,47 @@ func (w *watcher) watchLibrary(ctx context.Context, lib *model.Library) error {
}
}
// resolveFolderPath takes a path (which may be a file or directory) and returns
// the folder path to scan. If the path is a file, it walks up to find the parent
// directory. Returns empty string if the path should scan the library root.
func resolveFolderPath(fsys fs.FS, path string) string {
// Handle root paths immediately
if path == "." || path == "" {
return ""
}
folderPath := path
for {
info, err := fs.Stat(fsys, folderPath)
if err == nil && info.IsDir() {
// Found a valid directory
return folderPath
}
if folderPath == "." || folderPath == "" {
// Reached root, scan entire library
return ""
}
// Walk up the tree
dir, _ := filepath.Split(folderPath)
if dir == "" || dir == "." {
return ""
}
// Remove trailing slash
folderPath = filepath.Clean(dir)
}
}
// shouldIgnoreFolderPath checks if the given folderPath should be ignored based on .ndignore patterns
// in the library. It pushes all parent folders onto the IgnoreChecker stack before checking.
func (w *watcher) shouldIgnoreFolderPath(ctx context.Context, fsys storage.MusicFS, folderPath string) bool {
checker := newIgnoreChecker(fsys)
err := checker.PushAllParents(ctx, folderPath)
if err != nil {
log.Warn(ctx, "Watcher: Error pushing ignore patterns for folder", "path", folderPath, err)
}
return checker.ShouldIgnore(ctx, folderPath)
}
func isIgnoredPath(_ context.Context, _ fs.FS, path string) bool {
baseDir, name := filepath.Split(path)
switch {

scanner/watcher_test.go (new file, 491 lines)

@ -0,0 +1,491 @@
package scanner
import (
"context"
"io/fs"
"path/filepath"
"testing/fstest"
"time"
"github.com/navidrome/navidrome/conf"
"github.com/navidrome/navidrome/conf/configtest"
"github.com/navidrome/navidrome/model"
"github.com/navidrome/navidrome/tests"
. "github.com/onsi/ginkgo/v2"
. "github.com/onsi/gomega"
)
var _ = Describe("Watcher", func() {
var ctx context.Context
var cancel context.CancelFunc
var mockScanner *tests.MockScanner
var mockDS *tests.MockDataStore
var w *watcher
var lib *model.Library
BeforeEach(func() {
DeferCleanup(configtest.SetupConfig())
conf.Server.Scanner.WatcherWait = 50 * time.Millisecond // Short wait for tests
ctx, cancel = context.WithCancel(context.Background())
DeferCleanup(cancel)
lib = &model.Library{
ID: 1,
Name: "Test Library",
Path: "/test/library",
}
// Set up mocks
mockScanner = tests.NewMockScanner()
mockDS = &tests.MockDataStore{}
mockLibRepo := &tests.MockLibraryRepo{}
mockLibRepo.SetData(model.Libraries{*lib})
mockDS.MockedLibrary = mockLibRepo
// Create a new watcher instance (not singleton) for testing
w = &watcher{
ds: mockDS,
scanner: mockScanner,
triggerWait: conf.Server.Scanner.WatcherWait,
watcherNotify: make(chan scanNotification, 10),
libraryWatchers: make(map[int]*libraryWatcherInstance),
mainCtx: ctx,
}
})
Describe("Target Collection and Deduplication", func() {
BeforeEach(func() {
// Start watcher in background
go func() {
_ = w.Run(ctx)
}()
// Give watcher time to initialize
time.Sleep(10 * time.Millisecond)
})
It("creates separate targets for different folders", func() {
// Send notifications for different folders
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}
time.Sleep(10 * time.Millisecond)
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist2"}
// Wait for watcher to process and trigger scan
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
// Verify two targets
calls := mockScanner.GetScanFoldersCalls()
Expect(calls).To(HaveLen(1))
Expect(calls[0].Targets).To(HaveLen(2))
// Extract folder paths
folderPaths := make(map[string]bool)
for _, target := range calls[0].Targets {
Expect(target.LibraryID).To(Equal(1))
folderPaths[target.FolderPath] = true
}
Expect(folderPaths).To(HaveKey("artist1"))
Expect(folderPaths).To(HaveKey("artist2"))
})
It("handles different folder paths correctly", func() {
// Send notification for nested folder
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
// Wait for watcher to process and trigger scan
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
// Verify the target
calls := mockScanner.GetScanFoldersCalls()
Expect(calls).To(HaveLen(1))
Expect(calls[0].Targets).To(HaveLen(1))
Expect(calls[0].Targets[0].FolderPath).To(Equal("artist1/album1"))
})
It("deduplicates folder and file within same folder", func() {
// Send notification for a folder
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
time.Sleep(10 * time.Millisecond)
// Send notification for same folder (as if file change was detected there)
// In practice, watchLibrary() would walk up from file path to folder
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
time.Sleep(10 * time.Millisecond)
// Send another for same folder
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1/album1"}
// Wait for watcher to process and trigger scan
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
// Verify only one target despite multiple file/folder changes
calls := mockScanner.GetScanFoldersCalls()
Expect(calls).To(HaveLen(1))
Expect(calls[0].Targets).To(HaveLen(1))
Expect(calls[0].Targets[0].FolderPath).To(Equal("artist1/album1"))
})
})
Describe("Timer Behavior", func() {
BeforeEach(func() {
// Start watcher in background
go func() {
_ = w.Run(ctx)
}()
// Give watcher time to initialize
time.Sleep(10 * time.Millisecond)
})
It("resets timer on each change (debouncing)", func() {
// Send first notification
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}
// Wait a bit less than half the watcher wait time to ensure timer doesn't fire
time.Sleep(20 * time.Millisecond)
// No scan should have been triggered yet
Expect(mockScanner.GetScanFoldersCallCount()).To(Equal(0))
// Send another notification (resets timer)
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}
// Wait a bit less than half the watcher wait time again
time.Sleep(20 * time.Millisecond)
// Still no scan
Expect(mockScanner.GetScanFoldersCallCount()).To(Equal(0))
// Wait for full timer to expire after last notification (plus margin)
time.Sleep(60 * time.Millisecond)
// Now scan should have been triggered
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 100*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
})
It("triggers scan after quiet period", func() {
// Send notification
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}
// No scan immediately
Expect(mockScanner.GetScanFoldersCallCount()).To(Equal(0))
// Wait for quiet period
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
})
})
Describe("Empty and Root Paths", func() {
BeforeEach(func() {
// Start watcher in background
go func() {
_ = w.Run(ctx)
}()
// Give watcher time to initialize
time.Sleep(10 * time.Millisecond)
})
It("handles empty folder path (library root)", func() {
// Send notification with empty folder path
w.watcherNotify <- scanNotification{Library: lib, FolderPath: ""}
// Wait for scan
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
// Should scan the library root
calls := mockScanner.GetScanFoldersCalls()
Expect(calls).To(HaveLen(1))
Expect(calls[0].Targets).To(HaveLen(1))
Expect(calls[0].Targets[0].FolderPath).To(Equal(""))
})
It("deduplicates empty and dot paths", func() {
// Send notifications with empty and dot paths
w.watcherNotify <- scanNotification{Library: lib, FolderPath: ""}
time.Sleep(10 * time.Millisecond)
w.watcherNotify <- scanNotification{Library: lib, FolderPath: ""}
// Wait for scan
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
// Should have only one target
calls := mockScanner.GetScanFoldersCalls()
Expect(calls).To(HaveLen(1))
Expect(calls[0].Targets).To(HaveLen(1))
})
})
Describe("Multiple Libraries", func() {
var lib2 *model.Library
BeforeEach(func() {
// Create second library
lib2 = &model.Library{
ID: 2,
Name: "Test Library 2",
Path: "/test/library2",
}
mockLibRepo := mockDS.MockedLibrary.(*tests.MockLibraryRepo)
mockLibRepo.SetData(model.Libraries{*lib, *lib2})
// Start watcher in background
go func() {
_ = w.Run(ctx)
}()
// Give watcher time to initialize
time.Sleep(10 * time.Millisecond)
})
It("creates separate targets for different libraries", func() {
// Send notifications for both libraries
w.watcherNotify <- scanNotification{Library: lib, FolderPath: "artist1"}
time.Sleep(10 * time.Millisecond)
w.watcherNotify <- scanNotification{Library: lib2, FolderPath: "artist2"}
// Wait for scan
Eventually(func() int {
return mockScanner.GetScanFoldersCallCount()
}, 200*time.Millisecond, 10*time.Millisecond).Should(Equal(1))
// Verify two targets for different libraries
calls := mockScanner.GetScanFoldersCalls()
Expect(calls).To(HaveLen(1))
Expect(calls[0].Targets).To(HaveLen(2))
// Verify library IDs are different
libraryIDs := make(map[int]bool)
for _, target := range calls[0].Targets {
libraryIDs[target.LibraryID] = true
}
Expect(libraryIDs).To(HaveKey(1))
Expect(libraryIDs).To(HaveKey(2))
})
})
Describe(".ndignore handling", func() {
var ctx context.Context
var cancel context.CancelFunc
var w *watcher
var mockFS *mockMusicFS
var lib *model.Library
var eventChan chan string
var absLibPath string
BeforeEach(func() {
ctx, cancel = context.WithCancel(GinkgoT().Context())
DeferCleanup(cancel)
// Set up library
var err error
absLibPath, err = filepath.Abs(".")
Expect(err).NotTo(HaveOccurred())
lib = &model.Library{
ID: 1,
Name: "Test Library",
Path: absLibPath,
}
// Create watcher with notification channel
w = &watcher{
watcherNotify: make(chan scanNotification, 10),
}
eventChan = make(chan string, 10)
})
// Helper to send an event - converts relative path to absolute
sendEvent := func(relativePath string) {
path := filepath.Join(absLibPath, relativePath)
eventChan <- path
}
// Helper to start the real event processing loop
startEventProcessing := func() {
go func() {
defer GinkgoRecover()
// Call the actual processLibraryEvents method - testing the real implementation!
_ = w.processLibraryEvents(ctx, lib, mockFS, eventChan, absLibPath)
}()
}
Context("when a folder matching .ndignore is deleted", func() {
BeforeEach(func() {
// Create filesystem with .ndignore containing _TEMP pattern
// The deleted folder (_TEMP) will NOT exist in the filesystem
mockFS = &mockMusicFS{
FS: fstest.MapFS{
"rock": &fstest.MapFile{Mode: fs.ModeDir},
"rock/.ndignore": &fstest.MapFile{Data: []byte("_TEMP\n")},
"rock/valid_album": &fstest.MapFile{Mode: fs.ModeDir},
"rock/valid_album/track.mp3": &fstest.MapFile{Data: []byte("audio")},
},
}
})
It("should NOT send scan notification when deleted folder matches .ndignore", func() {
startEventProcessing()
// Simulate deletion event for rock/_TEMP
sendEvent("rock/_TEMP")
// Wait a bit to ensure event is processed
time.Sleep(50 * time.Millisecond)
// No notification should have been sent
Consistently(w.watcherNotify, 100*time.Millisecond).Should(BeEmpty())
})
It("should send scan notification for valid folder deletion", func() {
startEventProcessing()
// Simulate deletion event for rock/other_folder (not in .ndignore and doesn't exist)
// Since it doesn't exist in mockFS, resolveFolderPath will walk up to "rock"
sendEvent("rock/other_folder")
// Should receive notification for parent folder
Eventually(w.watcherNotify, 200*time.Millisecond).Should(Receive(Equal(scanNotification{
Library: lib,
FolderPath: "rock",
})))
})
})
Context("with nested folder patterns", func() {
BeforeEach(func() {
mockFS = &mockMusicFS{
FS: fstest.MapFS{
"music": &fstest.MapFile{Mode: fs.ModeDir},
"music/.ndignore": &fstest.MapFile{Data: []byte("**/temp\n**/cache\n")},
"music/rock": &fstest.MapFile{Mode: fs.ModeDir},
"music/rock/artist": &fstest.MapFile{Mode: fs.ModeDir},
},
}
})
It("should NOT send notification when nested ignored folder is deleted", func() {
startEventProcessing()
// Simulate deletion of music/rock/artist/temp (matches **/temp)
sendEvent("music/rock/artist/temp")
// Wait to ensure event is processed
time.Sleep(50 * time.Millisecond)
// No notification should be sent
Expect(w.watcherNotify).To(BeEmpty(), "Expected no scan notification for nested ignored folder")
})
It("should send notification for non-ignored nested folder", func() {
startEventProcessing()
// Simulate change in music/rock/artist (doesn't match any pattern)
sendEvent("music/rock/artist")
// Should receive notification
Eventually(w.watcherNotify, 200*time.Millisecond).Should(Receive(Equal(scanNotification{
Library: lib,
FolderPath: "music/rock/artist",
})))
})
})
Context("with file events in ignored folders", func() {
BeforeEach(func() {
mockFS = &mockMusicFS{
FS: fstest.MapFS{
"rock": &fstest.MapFile{Mode: fs.ModeDir},
"rock/.ndignore": &fstest.MapFile{Data: []byte("_TEMP\n")},
},
}
})
It("should NOT send notification for file changes in ignored folders", func() {
startEventProcessing()
// Simulate file change in rock/_TEMP/file.mp3
sendEvent("rock/_TEMP/file.mp3")
// Wait to ensure event is processed
time.Sleep(50 * time.Millisecond)
// No notification should be sent
Expect(w.watcherNotify).To(BeEmpty(), "Expected no scan notification for file in ignored folder")
})
})
})
})
var _ = Describe("resolveFolderPath", func() {
var mockFS fs.FS
BeforeEach(func() {
// Create a mock filesystem with some directories and files
mockFS = fstest.MapFS{
"artist1": &fstest.MapFile{Mode: fs.ModeDir},
"artist1/album1": &fstest.MapFile{Mode: fs.ModeDir},
"artist1/album1/track1.mp3": &fstest.MapFile{Data: []byte("audio")},
"artist1/album1/track2.mp3": &fstest.MapFile{Data: []byte("audio")},
"artist1/album2": &fstest.MapFile{Mode: fs.ModeDir},
"artist1/album2/song.flac": &fstest.MapFile{Data: []byte("audio")},
"artist2": &fstest.MapFile{Mode: fs.ModeDir},
"artist2/cover.jpg": &fstest.MapFile{Data: []byte("image")},
}
})
It("returns directory path when given a directory", func() {
result := resolveFolderPath(mockFS, "artist1/album1")
Expect(result).To(Equal("artist1/album1"))
})
It("walks up to parent directory when given a file path", func() {
result := resolveFolderPath(mockFS, "artist1/album1/track1.mp3")
Expect(result).To(Equal("artist1/album1"))
})
It("walks up multiple levels if needed", func() {
result := resolveFolderPath(mockFS, "artist1/album1/nonexistent/file.mp3")
Expect(result).To(Equal("artist1/album1"))
})
It("returns empty string for non-existent paths at root", func() {
result := resolveFolderPath(mockFS, "nonexistent/path/file.mp3")
Expect(result).To(Equal(""))
})
It("returns empty string for dot path", func() {
result := resolveFolderPath(mockFS, ".")
Expect(result).To(Equal(""))
})
It("returns empty string for empty path", func() {
result := resolveFolderPath(mockFS, "")
Expect(result).To(Equal(""))
})
It("handles nested file paths correctly", func() {
result := resolveFolderPath(mockFS, "artist1/album2/song.flac")
Expect(result).To(Equal("artist1/album2"))
})
It("resolves to top-level directory", func() {
result := resolveFolderPath(mockFS, "artist2/cover.jpg")
Expect(result).To(Equal("artist2"))
})
})