Some distributions provide a python2 version of the pylint package by
default, but all of them provide pylint for python3 in some package.
Python 2 reaches EOL in a few months, so there's no reason to support
it.
This prevents our scripts from being accidentally marked invalid due
to language changes between python 2 and 3. Also, newer pylint has
nicer output that includes the exact module filename directly in the
warning.
These files assume that WIN32 is defined by the compiler or the MSVC
project file, but in MSYS2 and MinGW environments it is defined in
config.h, so it becomes visible only after dosbox.h is included, which
in turn is included by the associated header (cdrom.h in this case).
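A minimal sketch of the include-order dependency described above; the
file body is illustrative, and only the header names come from this
change:

    // cdrom_image.cpp (illustrative file name)
    #include "cdrom.h" // pulls in dosbox.h -> config.h, where
                       // MSYS2/MinGW builds get WIN32 defined

    // Safe only after the include above; a WIN32 check placed before
    // it would silently be false on MSYS2/MinGW, even though MSVC
    // defines the macro project-wide.
    #if defined(WIN32)
    // Windows-specific code path
    #endif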
Fixes: #42
Includes two small scripts: verify-bash.sh for running shellcheck, and
verify-python.sh for running pylint.
The .pylintrc file is the default configuration file generated by
pylint 2.3.1, with one change (min-similarity-lines changed from 4
to 10).
Use C++11 static_assert to ensure that the code behaves correctly: if
the Chip or Channel structs are changed to a non-standard layout, or
different first fields are introduced, the offset calculation will be
invalidated.
Additionally, add asserts to prevent the possibility of introducing a
bug when an offset stored in one of the tables is 0.
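A compilable sketch of the technique, using hypothetical stand-ins for
the real structs and offset tables:

    #include <cassert>
    #include <cstddef>
    #include <type_traits>

    // Hypothetical stand-ins for the real Chip/Channel structs:
    struct Channel { int first_field; /* ... */ };
    struct Chip    { Channel chan[18]; /* ... */ };

    // The offset arithmetic is only well-defined for standard-layout
    // types, so fail the build if that property is ever lost:
    static_assert(std::is_standard_layout<Channel>::value,
                  "Channel must stay standard-layout");
    static_assert(std::is_standard_layout<Chip>::value,
                  "Chip must stay standard-layout");

    // Runtime guard: a 0 offset in a lookup table would silently
    // point at the struct's first member instead of the intended
    // field.
    void check_offsets(const size_t *table, size_t len) {
        for (size_t i = 0; i < len; ++i)
            assert(table[i] != 0);
    }

    int main() {
        const size_t offsets[] = {4, 8, 12}; // made-up values
        check_offsets(offsets, 3);
    }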
Silence compiler warnings:
enumeration value 'sm2Percussion' not handled in switch [-Wswitch]
enumeration value 'sm3Percussion' not handled in switch [-Wswitch]
Otherwise compilation fails on GCC 5.4 in the Steam Runtime
environment.
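A hypothetical reduction of the warning and its fix; only the sm*
names come from the warning text above:

    enum SynthMode { sm2AM, sm2FM, sm3AM, sm3FM,
                     sm2Percussion, sm3Percussion };

    int handle(SynthMode mode) {
        switch (mode) {
        case sm2AM:
        case sm2FM:
        case sm3AM:
        case sm3FM:
            return 1; // normal synthesis path
        case sm2Percussion: // listing the remaining values explicitly
        case sm3Percussion: // silences -Wswitch (a default: would too)
            return 0;
        }
        return 0; // unreachable, keeps -Wreturn-type quiet
    }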
As of 2019, all up-to-date compilers support C++11, and most of them
use C++14 as the default standard, but C++14 is not fully supported by
the autoconf version in Ubuntu 16.04 LTS.
Full C++11 support was introduced in GCC 4.8.1 and Clang 3.3.
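An illustrative compile-time guard for the new baseline (not
necessarily how the build system enforces it):

    // Fails the build when the compiler is not in at least C++11
    // mode. Pre-C++11 compilers reject the static_assert keyword
    // itself, which also stops the build, just with a less friendly
    // message.
    static_assert(__cplusplus >= 201103L, "C++11 or newer is required");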
GitHub notified me that they are dropping macOS-10.14 completely; all
users are being upgraded to macOS-10.15, and the only valid value in
CI jobs will be macos-latest from now on.
I haven't seen any indication of the same happening for Windows
machines, but the GitHub Actions documentation dropped all references
to windows-2016 and windows-2019; windows-latest seems to be the only
valid value for shared runners from now on.
Ubuntu machines are left as they are (thankfully).
Thanks to @ripsaw8080 for insight into CD-DA channel mapping,
@hail-to-the-ryzen for testing and flagging a position-tracking bug,
and @dreamer_ for guidance and code review.
The CD-DA volume and channel mapping loops were moved to generic mixer
calls and no longer require a pre-processing loop (see the sketch
after this list):
- Application-controlled CD-DA volume adjustment is now applied using
an existing mixer volume scalar that was previously unused by the
CD-DA code.
- Mapping of CD-DA left and right channels is now applied at the tail
end of the mixer's sample ingest sequence.
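A compilable sketch of the two mechanisms; MixerChannel and its method
names are hypothetical stand-ins, not the project's real mixer API:

    #include <cstdio>

    struct MixerChannel {
        float vol_left = 1.0f, vol_right = 1.0f;
        int map_left = 0, map_right = 1; // input index per output side

        void SetVolume(float l, float r) { vol_left = l; vol_right = r; }
        void SetChannelMap(int l, int r) { map_left = l; map_right = r; }

        // Tail end of the sample ingest sequence: remap, then scale.
        void IngestFrame(const float in[2], float out[2]) const {
            out[0] = in[map_left] * vol_left;
            out[1] = in[map_right] * vol_right;
        }
    };

    int main() {
        MixerChannel chan;
        chan.SetVolume(0.5f, 0.5f); // application-set CD-DA volume
        chan.SetChannelMap(1, 0);   // e.g. a swapped stereo mapping
        const float in[2] = {0.25f, -0.75f};
        float out[2];
        chan.IngestFrame(in, out);
        std::printf("L=%f R=%f\n", out[0], out[1]);
    }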
The following have been removed:
- The CD-DA callback chunk-wise circular buffer
- The decode buffers in the Opus and MP3 decoders
- The decode buffer and conversion buffers in SDL_Sound
These removals and API changes allow the image player's buffer to be
passed all the way through to the audio codec, skipping multiple
intermediate buffers.
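An illustrative shape of the resulting data flow; decode_into() is a
hypothetical name standing in for the codec call:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Writes decoded samples straight into the caller's buffer, so no
    // layer in between needs a copy of its own.
    size_t decode_into(int16_t *dst, size_t max_samples) {
        for (size_t i = 0; i < max_samples; ++i)
            dst[i] = 0; // a real codec would decode samples here
        return max_samples;
    }

    int main() {
        // The image player owns the only buffer in the chain:
        std::vector<int16_t> player_buffer(2352 / 2);
        decode_into(player_buffer.data(), player_buffer.size());
    }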
This way there's no need to prefix every line in the build job with
the path to the MSYS2-installed bash, or to deal with problems related
to escaping embedded shell invocations.
Apt does not have a stable CLI interface and therefore should be
avoided in scripts. Using apt-get should be fine.
Split 'apt-get update' into a separate step. It makes it easier to
check which mirrors and repositories are being used by CI machines.
Remove the SPDX identifier - it's missing from other .yml files (I
would consider these configuration files rather than source code, so
not covered by copyright).
Use the same build dependencies for static analysis build as in other
Linux jobs.
Runtime improvements:
- Replaces the existing audio callback routine with an efficient
chunked circular-buffer audio reader (see the sketch after this list)
- Replaces assumptions that all audio tracks are 44.1 kHz & stereo.
The mixer is now fed data at the compressed track's actual rate and
channel count
- Eliminates all SDL locks in the audio code in favour of mixer state
control
- Queries the codec for track length instead of using hundreds of
iterative decimating seeks to determine it (loading a 99-track CUE
now takes 0.1 user-seconds versus 3+ seconds previously)
- Seeks are performed within the already-decoded buffer (for all
codecs) instead of discarding and re-running the decode sequence
- SDL_Sound's buffer-size is now set once and never resized, which
eliminates repeated re-malloc'ing in the library
- Only one seek is performed per playback sequence, followed by
sequential decodes, similar to a physical CDROM (the baseline dosbox
performs a seek for every 2352 bytes of uncompressed audio)
- The DOSBox mixer is now only active during playback sequences and
fully dormant otherwise (baseline dosbox instead performs hundreds of
calls/second with empty data)
- When using Opus audio tracks, and if your dosbox.conf [mixer]
rate=48000, you will (very likely) achieve sample-exact, unadulterated
pass-through along your entire audio chain: from the decoder, to
DOSBox, to your operating system's software mixer, to your sound card
driver, to your sound card, and on to your speakers, digital receiver,
USB speakers/headphones, or HDMI TV/screen. This is because almost all
modern hardware DACs use a native sample rate of 48000 Hz
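A compilable sketch of the chunked circular-buffer reader mentioned in
the first bullet; the class and its names are a hypothetical
reduction, not the project's real types:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Single-producer/single-consumer ring of fixed-size sample
    // chunks: the decoder fills whole chunks, the audio path drains
    // them, and no SDL locks are needed because each side only
    // advances its own index.
    class ChunkedRing {
    public:
        ChunkedRing(size_t num_chunks, size_t samples_per_chunk)
            : data(num_chunks * samples_per_chunk),
              chunks(num_chunks),
              chunk_len(samples_per_chunk) {}

        // Producer side: decode directly into the next free chunk.
        int16_t *write_slot() {
            return full() ? nullptr : &data[(wpos % chunks) * chunk_len];
        }
        void commit_write() { ++wpos; }

        // Consumer side: hand a decoded chunk to the mixer.
        const int16_t *read_slot() const {
            return empty() ? nullptr : &data[(rpos % chunks) * chunk_len];
        }
        void commit_read() { ++rpos; }

    private:
        bool empty() const { return wpos == rpos; }
        bool full() const { return wpos - rpos == chunks; }

        std::vector<int16_t> data;
        size_t chunks;
        size_t chunk_len;
        size_t wpos = 0, rpos = 0;
    };

    int main() {
        ChunkedRing ring(4, 2352 / 2); // 4 chunks of one CD frame each
        if (int16_t *slot = ring.write_slot()) {
            slot[0] = 0; // the decoder would fill the whole chunk here
            ring.commit_write();
        }
        if (ring.read_slot())
            ring.commit_read(); // the mixer consumed one chunk
    }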
Source-level maintenance improvements:
- It strips all pre-processor #ifdef branching for SDL_Sound from
the code
- Fixes all codec compiler warnings (in the modified files); builds
have been tested with GCC 4 to 10, Clang 6 to 10, and MSVC 14
- Tested on Linux, macOS (Xcode), and Windows (MinGW MSYS 1.0)
operating systems
- Tested on i386, x86_64, ARM, and PowerPC (big-endian) architectures