The required AKA and TLDR…
AKA: A world of hell just to get it functional. This process isn’t foolproof, but it documents what I did to make things work.
TLDR: You need to understand nginx, django, python, websockets, and to an extent redis configurations and the associated code almost to a T in some instances. If you lack understanding of any of the above, you’re about to enter a world of hell.
Many commands and bits below are sourced from the official documentation here: https://trunk-player.readthedocs.io/
READ THIS FIRST
If you don’t read this, you’ll soon regret it.
This is assuming you run TrunkPlayer on its own unique virtual machine or dedicated hardware (albeit overkill), completely separate from TrunkRecorder, and split away from the PostgreSQL and Redis servers.
This also assumes you will be running a dual nginx setup, with a front-end nginx as a reverse proxy and a back-end nginx installation.
Everything below assumes you are running as root.
Requirements
TrunkPlayer REQUIRES the following packages/software/tools installed, either on the same machine, or portions split away on their own machines…
- PostgreSQL
- redis-client and redis-server
- nginx (engineX)
- python 3.6
- git (to sync in the repo)
- virtualenv
- supervisor
- pip (for python3.x)
- A normal user on the OS with the username of radio, with a traditional /home/userName/ setup.
Package installation choices…
If you plan to host PostgreSQL on another machine (like I do), you will need to use this apt install command…
apt install python3-dev virtualenv redis-server redis python3-pip libpq-dev git nginx ffmpeg supervisor
If you plan to split “all the things” to their own machines, you’ll be doing this…
For a full-split, with nginx and django on a VM, redis on another, and postgres on another server, your apt install sequences will look like this…
Server One (Django/TrunkPlayer): apt install python3-dev virtualenv python3-pip libpq-dev postgresql-client postgresql-client-common git nginx ffmpeg supervisor
Server Two (Cache): apt install redis-server redis
Server Three (DB): apt install postgresql libpq-dev postgresql-client postgresql-client-common
If you plan to “all-in-one” the PostgreSQL and the django/python TrunkPlayer webapp, then this command will be needed…
apt install python3-dev virtualenv redis-server redis python3-pip postgresql libpq-dev postgresql-client postgresql-client-common git nginx ffmpeg supervisor
Why both redis and redis-server? For some reason a while back, Debian wasn’t accepting redis-server as a valid package name, only redis. As of this writing, that appears to have been fixed (many months after the fact). You would likely be fine removing redis and keeping only redis-server on a current Debian installation.
Now that we’ve installed the needed packages on the TrunkPlayer machine (and redis machine… and sql machine), let’s begin the “fun stuff”.
On the TrunkPlayer machine, you’ll be typing in these commands to get TrunkPlayer sync’d in and installed (ripped from docs site).
cd /home/radio (it’d be wise to su to radio rather than running as root)
git clone https://github.com/ScanOC/trunk-player.git
cd trunk-player
virtualenv -p python3 env --prompt='(Trunk Player)'
source env/bin/activate
pip install -r requirements.txt (you may need to use pip3 in place of pip!)
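If the virtualenv package gives you trouble, python3’s built-in venv module does the same job. A minimal sketch (the env-demo directory name is mine, and --without-pip is only here to keep the sketch dependency-free; virtualenv and a normal venv both bundle pip):

```shell
# Fallback sketch: create a virtual environment with the stdlib venv module
# instead of the virtualenv package. --without-pip avoids needing ensurepip,
# purely so this demo runs anywhere; omit it for real use.
cd /tmp
python3 -m venv --without-pip env-demo --prompt='(Trunk Player)'

# Activating adjusts PATH so `python` resolves inside the environment
. env-demo/bin/activate
command -v python
```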
Back to Trunk Recorder’s virtual machine!
On the Trunk-Recorder machine, we’ll need to go to the trunk-build directory (wherever you pulled the git repo to and compiled it at), likely /root/trunk-build/. You will likely recall a line in config.json named "uploadScript": "encode-sys-0.sh" within the systems section.
NOTE: Don’t have trunk-recorder’s ./recorder running during this process – found out that bad things tend to happen.
encode-sys-0.sh
Here is what you dump into a file named encode-sys-0.sh within /root/trunk-build/ (because I’m in no mood to deal with this disaster again)…
#! /bin/bash
# Script to upload audio from trunk-recorder to
# web server to be displayed by trunk-player
#
# 02/15/2016 - Dylan Reinhold dreinhold@gmail.com
# 01/19/2019 - Brett C, minor edits to script (less debug/trunk-rec spam).
#-----------------------------------------------------
#
REMOTE_USER_NAME="radio"
REMOTE_SERVER="192.168.20.212"
REMOTE_AUDIO_FOLDER="/home/$REMOTE_USER_NAME/trunk-player/audio_files"
REMOTE_IMPORT_SCRIPT="/home/$REMOTE_USER_NAME/trunk-player/utility/trunk-player/encode_load.sh"
#
# echo "Encoding: $1"
filename="$1"
basename="${filename%.*}"
filename_only=$(basename $basename)
json="$basename.json"

# Hack the JSON to add play length
len=$(soxi -D $filename)
head -n-2 $json > $json.new
echo "\"play_length\": $len," >> $json.new
tail -n2 $json >> $json.new
mv $json.new $json
#
# echo "Upload: $filename"
scp -q -C $filename $json $REMOTE_USER_NAME@$REMOTE_SERVER:$REMOTE_AUDIO_FOLDER/
if [ $? -eq 0 ]
then
    ssh -q $REMOTE_USER_NAME@$REMOTE_SERVER "$REMOTE_IMPORT_SCRIPT $filename_only"
    # echo "Removing: $json, $filename_only.mp3"
    rm -f $json $filename
fi
What does this file do? First, it injects the play-time duration of the wav file into the json. Then it invokes scp to move the wav and json files over, and ssh to trigger the import script on the remote side.
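The “hack the JSON” step is the easiest part to get wrong. Here’s a standalone demo of the same head/tail trick on a fabricated json (the file contents and the 1.5 second duration are made up for illustration; the real script gets the duration from soxi):

```shell
# Fabricated sample json; the head -n-2 / tail -n2 trick assumes the final
# field and the closing brace occupy the last two lines, as they do in
# trunk-recorder's output.
cat > /tmp/demo.json <<'EOF'
{
"freq": 851000000,
"emergency": 0
}
EOF

len="1.5"   # stand-in for: len=$(soxi -D $filename)

head -n-2 /tmp/demo.json > /tmp/demo.json.new   # everything but the last 2 lines
echo "\"play_length\": $len," >> /tmp/demo.json.new
tail -n2 /tmp/demo.json >> /tmp/demo.json.new   # re-append the last 2 lines
mv /tmp/demo.json.new /tmp/demo.json
cat /tmp/demo.json
```

The result is the same json with a "play_length" field spliced in just above the final field.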
A note on the scp command… you will need to read up on what scp is before getting the ball rolling. If you’re not used to how scp functions, check google. A couple of articles elsewhere on the net show examples and such:
– https://linuxize.com/post/how-to-use-scp-command-to-securely-transfer-files/
– https://haydenjames.io/linux-securely-copy-files-using-scp/
In my case, my servers all utilize SSH Keys. I stopped using passwords many years ago (a little more than a decade now!). You will need to read up on the basics of SSH keys – it’s worth your time (now and for the future).
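If you’ve never done key-based auth, the flow looks roughly like this. A hedged sketch: the /tmp path is only for the demo (you’d normally let ssh-keygen default to ~/.ssh/), and the radio@192.168.20.212 target comes from this guide’s setup. The ssh-copy-id step is commented out since it needs the remote box reachable:

```shell
# Generate a keypair on the Trunk-Recorder machine. No passphrase (-N "") so
# the upload script can run unattended -- weigh that trade-off yourself.
mkdir -p /tmp/ssh-demo
ssh-keygen -t ed25519 -N "" -q -f /tmp/ssh-demo/id_ed25519

# Then copy the public key to the TrunkPlayer box (run manually, once):
# ssh-copy-id -i /tmp/ssh-demo/id_ed25519.pub radio@192.168.20.212

ls /tmp/ssh-demo
```

After that, scp and ssh from the script run without password prompts.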
You only need to edit the first four variables, or just one of them (the IP), depending on your setup!
- REMOTE_USER_NAME="radio" – Change radio to something else only if your TrunkPlayer machine doesn’t have the radio account made. If you created an account called radio on the TrunkPlayer machine, don’t touch this.
- REMOTE_SERVER="192.168.20.212" – Set the IP address of the server that TrunkPlayer resides on. In my case, TrunkRecorder’s VM runs on 192.168.20.211, and TrunkPlayer’s VM is 192.168.20.212. The 192.168.20.212 IP is used here, since TrunkPlayer’s VM is the remote server from TrunkRecorder’s point of view.
- REMOTE_AUDIO_FOLDER="/home/$REMOTE_USER_NAME/trunk-player/audio_files" – An absolute path, including the $REMOTE_USER_NAME variable, for the audio files. If the directory doesn’t exist for some reason, you’re fine to create it within the trunk-player folder. In this case, it’s called audio_files.
- REMOTE_IMPORT_SCRIPT="/home/$REMOTE_USER_NAME/trunk-player/utility/trunk-player/encode_load.sh" – This file is invoked on the TrunkPlayer machine, but is called by this script, and its output displays on TrunkRecorder’s screen UI if you kick on the debug-like outputs (there are three echo lines to un-hash if desired). We’ll get to this file in a moment, as some edits need to happen to it.
encode_load.sh
This file resides on the TrunkPlayer machine, typically at /home/radio/trunk-player/utility/trunk-player/encode_load.sh
#!/bin/sh
# Wrapper script to import new audio files into trunk-player
#  * Encode the wav files into mp3
#  * If the wav is from an analog TG increase the volume to match the digital groups
#  * Upload the new mp3 to amazon s3
#
# Dylan Reinhold dreinhold@gmail.com
# Brett C - Removed non-local-net lines, removed lame in place of ffmpeg
#--------------------------------------------------------------------------
BASE_DIR="/home/radio/trunk-player"
LOG="$BASE_DIR/logs/encode.log"

# Load python virtual environment
. $BASE_DIR/env/bin/activate

basename="$1"
filename="$basename.wav"
mp3encoded="$basename.mp3"
json="$basename.json"

echo "$basename : `date` encode $basename" >> $LOG

grep '"analog": 0' $BASE_DIR/audio_files/$json >/dev/null 2>&1
if [ $? -eq 0 ]
then
    echo "$basename : `date` digital" >> $LOG
    ffmpeg -i "$BASE_DIR/audio_files/$filename" -codec:a mp3 -b:a 24k -hide_banner -cutoff 18000 -loglevel quiet "$BASE_DIR/audio_files/$mp3encoded" >> $LOG
else
    echo "$basename : `date` analog" >> $LOG
    ffmpeg -i "$BASE_DIR/audio_files/$filename" -codec:a mp3 -b:a 24k -hide_banner -cutoff 18000 -loglevel quiet "$BASE_DIR/audio_files/$mp3encoded" >> $LOG
fi

$BASE_DIR/utility/trunk-player/load_audio_file.sh $basename

rm -f $BASE_DIR/audio_files/$filename
For some reason, Trunk-Recorder does not note in the json files whether the recorded audio is analog or digital. It might be because I did not include a CSV list of all talkgroups on the system, tying the TGs to D, A, or E audio output settings (Digital, Analog, Encrypted). With that said, the script will always fall through to the else branch at the bottom (the analog encode), but that branch uses the same encode settings as the digital one anyway.
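You can see why the else branch always wins: the grep in encode_load.sh only succeeds when the json literally contains "analog": 0. A quick standalone check (both sample json files are fabricated for illustration):

```shell
# Fabricated json WITHOUT an "analog" field, like trunk-recorder produced for me
echo '{"freq": 851000000, "emergency": 0}' > /tmp/no_analog.json
# The same json WITH the field the digital branch looks for
echo '{"freq": 851000000, "analog": 0}' > /tmp/digital.json

# Same test encode_load.sh performs
for f in /tmp/no_analog.json /tmp/digital.json; do
    if grep '"analog": 0' $f >/dev/null 2>&1; then
        echo "$f -> digital branch"
    else
        echo "$f -> analog (fallback) branch"
    fi
done
```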
This shell/bash script expects completed MP3 audio files (and modified json files) to reside in /home/radio/trunk-player/audio_files on the TrunkPlayer instance, and expects .wav and .json files to reside at /root/trunk-build/audio_files/ on the Trunk-Recorder machine. Typically, Trunk-Recorder will create some subfolders there: each system that has been given a proper short name (from config.json on Trunk-Recorder) gets a folder within the aforementioned folder. For me, my system is called JoCoMARRS. After that comes a folder for the year, another for the month, then a final folder for the day of the month; within that, the json’s and wav’s. It will look something like this on Trunk-Recorder’s machine: /root/trunk-build/audio_files/SystemName/2019/5/1/.
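As a concrete sketch of that layout (paths under /tmp, the SystemName, date, and filenames here are placeholders, not real recordings):

```shell
# Mimic trunk-recorder's output tree: base/ShortName/Year/Month/Day/
base=/tmp/audio_files_demo
mkdir -p $base/SystemName/2019/5/1

# Each call lands as a wav + json pair
touch $base/SystemName/2019/5/1/1234-1234567890_8.5512e+08.wav
touch $base/SystemName/2019/5/1/1234-1234567890_8.5512e+08.json

find $base -type f
```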
Why ffmpeg over lame?
Simply put: it’s faster (meaning less CPU usage), from my POV anyways; it has better options for audio input and output; and it didn’t make me rage at the audio normalization. ffmpeg’s plethora of configurations and settings is also an appealing factor.
ffmpeg options, broken down
ffmpeg -i "$BASE_DIR/audio_files/$filename" -codec:a mp3 -b:a 24k -hide_banner -cutoff 18000 -loglevel quiet
So here’s what each of those little options denotes. Best to understand this rather than guessing “oh, it did that” and ending up with a mash of weird audio.
- ffmpeg – calls the package.
- -i – the input file. In this instance, it is "$BASE_DIR/audio_files/$filename": $BASE_DIR comes from BASE_DIR="/home/radio/trunk-player", and $filename is a dynamic variable holding the current filename (e.g. 1234-1234567890_8.5512e+08.wav) being processed.
- -codec:a mp3 – tells ffmpeg to convert the inputted audio to an mp3 file.
- -b:a 24k – tells ffmpeg to cap the output at a 24k bitrate once converted to mp3. For P25 audio, this is more than enough.
- -hide_banner – hides spammy debug/legal/blah stuff.
- -cutoff 18000 – sets the encoder’s frequency cutoff to 18kHz. While P25 audio doesn’t generally exceed 10kHz, I’ve set this to 18000 – I had “fuzzy audio” when setting it to 12k or 8k.
- -loglevel quiet – just makes the encoder be quiet; no spammy output. You can set quiet to verbose if you run into issues with ffmpeg. Debug should output more than enough details if something goes sideways.
TrunkPlayer’s Redis config
You’re probably asking yourself at this point: why are we doing things so weirdly out of order? Simple: set up everything that functions alongside TrunkPlayer first, rather than setting up TrunkPlayer first and fiddling with everything thereafter. Honestly speaking, you have to set up and configure everything else before you even touch TrunkPlayer’s settings.
With Redis, there are two options: 1) host redis on the same VM/machine as TrunkPlayer, or 2) host redis on a remote server, split away from TrunkPlayer (what I do).
I use redis and memcache/d on a lot of my projects, and I don’t run a redis server on each virtual machine, as that would waste space and memory. Instead, I keep the caching packages on a dedicated virtual machine. The amount of speed that you gain by using caching (of any sort) is immense.
So, in my method, I am using a machine dedicated to redis. With redis and TrunkPlayer, I force TrunkPlayer to use one particular redis database, and a separate redis database for the general django/session details. Why split them? Because TrunkPlayer’s traffic gets very spammy.
So, assuming you’ve installed redis on a remote machine and not the same machine as TrunkPlayer, you’ll need to make a couple edits to the /etc/redis/redis.conf
file.
- bind 127.0.0.1 – put a hash before bind (#bind 127.0.0.1); we’re going to listen for anything.
- protected-mode – set to no.
- timeout – set to 0.
The above settings are indeed insecure, so please do not expose your redis server to the internet – bad things can happen. If you must expose it to the net, ensure appropriate firewall rules are in place and an auth system is set up on redis.
Once done editing, restart redis.
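Putting the three edits together, the relevant lines of /etc/redis/redis.conf end up looking like this (the requirepass line is my suggestion, not part of the steps above; pick your own password):

```conf
# Listen on all interfaces instead of loopback only
# bind 127.0.0.1

# Allow connections from hosts other than localhost
protected-mode no

# Never drop idle client connections
timeout 0

# Optional, but strongly suggested if anything beyond your LAN can reach this box:
# requirepass SOME_LONG_RANDOM_PASSWORD
```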
PostgreSQL…
PostgreSQL: many love it, many hate it, many tolerate it. I tolerate it because it’s not MySQL/MariaDB, but it still gets the job done.
TrunkPlayer (and django) utilizes the PSQL database… heavily. All users, all radios, transmissions, talkgroups… everything is slapped to that PSQL database that we’re about to setup.
As of this writing, PostgreSQL’s version is 9.6. Let’s get crackin’…
- apt install postgresql-all
- sudo su - postgres
- Type in psql
- Paste in: CREATE USER trunk_player_user WITH PASSWORD 'CHANGE_ME_PLZ'; where CHANGE_ME_PLZ is a password that you will be using between TrunkPlayer’s django backend and the database connection. Please make your password unique and difficult to guess – no common names.
- Paste in: CREATE DATABASE trunk_player;
- Paste in: GRANT ALL PRIVILEGES ON DATABASE trunk_player TO trunk_player_user;
- Paste in:
ALTER ROLE trunk_player_user SET client_encoding TO 'utf8';
ALTER ROLE trunk_player_user SET default_transaction_isolation TO 'read committed';
ALTER ROLE trunk_player_user SET timezone TO 'UTC';
- Type in \q to exit.
Congrats, postgresql has been configured for your trunkplayer installation.
But wait….. there’s more! PostgreSQL’s autovacuum feature, listeners, and general settings. AutoVacuum NEEDS to be configured, otherwise the trunk_player database and its associated tables get bloated – quick. That generally results in website slowness, slow queries, and you wondering “WHAT IS WRONG WITH THIS THING?!?!?!?” after a month of running it without autovacuum enabled. 🙂
The following settings assume you’ve a SQL database server with at least 32GB to 64GB of RAM available for use. My pSQL server is an actual machine that solely deals with SQL operations from multiple daemons. Many of the settings/variables below are already set, but some are not, or are commented out.
- With your favorite text editor, load up /etc/postgresql/9.6/main/postgresql.conf
- Modify the listen_addresses setting to look like listen_addresses = '*'
- Set shared_buffers to 512MB (or something sensible)
- Set huge_pages to try
- Set temp_buffers to 32MB
- Set work_mem to 32MB
- Set maintenance_work_mem to 64MB
- Set autovacuum_work_mem to -1
- Set dynamic_shared_memory_type to posix
- Set temp_file_limit to -1
Now, in the postgresql.conf file, we scroll all the way down to the # AUTOVACUUM PARAMETERS segment. This is where we modify and kick on the autovacuum feature. (If you’re looking for a quick write-up on autovacuum, check this blog writeup.) Here’s what mine looks like; it works a charm for TrunkPlayer – which does get a bit out of hand sometimes with tuples.
#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------

autovacuum = on                         # Enable autovacuum subprocess? 'on'
                                        # requires track_counts to also be on.
log_autovacuum_min_duration = 0         # -1 disables, 0 logs all actions and
                                        # their durations, > 0 logs only
                                        # actions running at least this number
                                        # of milliseconds.
autovacuum_max_workers = 6              # max number of autovacuum subprocesses
                                        # (change requires restart)
autovacuum_naptime = 10s                # time between autovacuum runs
autovacuum_vacuum_threshold = 10        # min number of row updates before
                                        # vacuum
autovacuum_analyze_threshold = 10       # min number of row updates before
                                        # analyze
autovacuum_vacuum_scale_factor = 0.1    # fraction of table size before vacuum
autovacuum_analyze_scale_factor = 0.1   # fraction of table size before analyze
autovacuum_freeze_max_age = 200000000   # maximum XID age before forced vacuum
                                        # (change requires restart)
autovacuum_multixact_freeze_max_age = 400000000  # maximum multixact age
                                        # before forced vacuum
                                        # (change requires restart)
autovacuum_vacuum_cost_delay = 10ms     # default vacuum cost delay for
                                        # autovacuum, in milliseconds;
                                        # -1 means use vacuum_cost_delay
autovacuum_vacuum_cost_limit = 1000     # default vacuum cost limit for
                                        # autovacuum, -1 means use
                                        # vacuum_cost_limit
Once those changes are made and saved, restart postgresql.
Let’s move on to the next portion… TrunkPlayer.
TrunkPlayer
Here’s hoping the aforementioned configurations of Redis, PostgreSQL, and the other TrunkPlayer prerequisites have been installed and completed… here comes the rollercoaster. Assuming you’re in the TrunkPlayer terminal/SSH window and in the directory you git clone’d to (hint: the trunk-player folder, not the trunk-player/trunk_player/ folder):
cp trunk_player/settings_local.py.sample trunk_player/settings_local.py
- Edit settings_local.py (you can edit the code below as you see fit). Change IP.IP.IP.IP to the needed servers, and set the SECRET_KEY and SITE_* portions.
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

LOCAL_SETTINGS = True
DEBUG = False
ALLOWED_HOSTS = ['*']
#ALLOW_ANONYMOUS = True

# Make this unique, and don't share it with anybody.
# You can use http://www.miniwebtool.com/django-secret-key-generator/
# to create one.
SECRET_KEY = 'CHANGE THIS VIA THE LINK ABOVE'

# Added line to prevent CSRF verification errors with Django
SECURE_PROXY_SSL_HEADER = ()

# Name for site - change these to your own
SITE_TITLE = 'Scanner Site'
SITE_EMAIL = 'admin@somedomain.com'
DEFAULT_FROM_EMAIL = 'SiteName <scanner@somedomain.com>'

# Set this to the location of your audio files
AUDIO_URL_BASE = '/audio_files/'

# Allow TalkGroup access restrictions
ACCESS_TG_RESTRICT = False

# Most Time zones can be found here:
# https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
# Use the "TZ Database Name" column here!
TIME_ZONE = 'America/Chicago'

# some of the options below stolen from this github page:
# https://github.com/ScanOC/trunk-player/issues/64
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [os.environ.get('REDIS_URL', 'redis://IP.IP.IP.IP:6379/1')],
            "channel_capacity": {
                "http.request": 200,
                "http.response!*": 10,
                "websocket.send*": 20,
            },
            "capacity": 100,
        },
        "ROUTING": "radio.routing.channel_routing",
    },
}

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://IP.IP.IP.IP:6379/2",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}

# Postgres database setup
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'trunk_player',                      # Database Name
        'USER': 'trunk_player_user',                 # Database User
        'PASSWORD': 'THE_PASSWORD_YOU_SET_EARLIER',  # Database Password
        'HOST': 'IP.IP.IP.IP',
        'PORT': '',  # You can generally leave this blank.
    }
}
- After editing the IP.IP.IP.IP and other associated items, save that file.
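Rather than trusting a third-party website with your SECRET_KEY, you can generate one locally. This one-liner uses Python’s stdlib secrets module (my alternative, not from the TrunkPlayer docs):

```shell
# Generate a 50+ character random string suitable for Django's SECRET_KEY
python3 -c "import secrets; print(secrets.token_urlsafe(50))"
```

Paste the output into settings_local.py as the SECRET_KEY value.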
- Now, we edit settings.py.
""" Django settings for trunk_player project. Generated by 'django-admin startproject' using Django 1.9.6. For more information on this file, see https://docs.djangoproject.com/en/1.9/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.9/ref/settings/ """ import os # Build paths inside the project like this: os.path.join(BASE_DIR, ...) BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! #SECRET_KEY = 'WHARLGARHLHALSHALSALHSLSLHSLASKFGHSDAKGFSG' (not a real key) # SECURITY WARNING: don't run with debug turned on in production! LOGIN_URL = '/login/' # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.sites', 'local_override', 'radio.apps.RadioConfig', 'allauth', 'allauth.account', 'allauth.socialaccount', 'allauth.socialaccount.providers.google', #'allauth.socialaccount.providers.facebook', #'allauth.socialaccount.providers.instagram', 'rest_framework', 'channels', 'pinax.stripe', 'django_select2', ] MIDDLEWARE_CLASSES = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'radio.custom_middleware.ExtendUserSession', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'trunk_player.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': 
True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'trunk_player.wsgi.application' # Database # https://docs.djangoproject.com/en/1.9/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Password validation # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators AUTHENTICATION_BACKENDS = ( # Needed to login by username in Django admin, regardless of `allauth` 'django.contrib.auth.backends.ModelBackend', # `allauth` specific authentication methods, such as login by e-mail 'allauth.account.auth_backends.AuthenticationBackend', ) AUTH_PASSWORD_VALIDATORS = [ {'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',}, {'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',}, {'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',}, {'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',}, ] LANGUAGE_CODE = 'en-us' TIME_ZONE = 'America/Los_Angeles' USE_I18N = True USE_L10N = True USE_TZ = True # 'X-Forwarded-Proto' header for request.is_secure() SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') STATIC_URL = '/static/' #STATICFILES_DIRS = [ # os.path.join(BASE_DIR, "audio_files"), #] STATIC_ROOT = os.path.join(BASE_DIR, "static") # NOTE: Setting `PAGE_SIZE` value will change how many items are shown per-page. # Higher the setting, more DB i/o and more web bandwidth! 
REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': ('rest_framework.permissions.AllowAny',), 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination', 'PAGE_SIZE': 100 } MEDIA_URL = '/media/' MEDIA_ROOT = os.path.join(BASE_DIR, "audio_files") # How far back an anonymous/guest users can see back in minutes # 0 will disable the limit. Sane limit is between 1 and 30. ANONYMOUS_TIME = 5 # This Agency must exist in radio.Agency RADIO_DEFAULT_UNIT_AGENCY = 0 SITE_ID = 1 SOCIALACCOUNT_PROVIDERS = { 'google': { 'SCOPE': ['profile', 'email'], 'AUTH_PARAMS': { 'access_type': 'online' } } } ACCOUNT_AUTHENTICATION_METHOD="username_email" ACCOUNT_EMAIL_REQUIRED=True LOGIN_REDIRECT_URL="/" AMAZON_ADDS = False AMAZON_AD_TRACKING_ID = 'some-tracking-id-here' AMAZON_AD_LINK_ID = 'some-hash-here' AMAZON_AD_EMPHASIZE_CATEGORIES = 'some,ids,here' AMAZON_AD_FALL_BACK_SEARCH = ['common', 'keywords', 'here',] GOOGLE_ANALYTICS_PROPERTY_ID = 'UA-87256556-1' TWITTER_ACTIVE = False TWITTER_LIST_URL = None #This is set (read:overridden) on settings_local.py SITE_TITLE = 'Trunk-Player' SITE_EMAIL = 'help@example.com' PINAX_STRIPE_SECRET_KEY = '0' PINAX_STRIPE_PUBLIC_KEY = '0' # Set this to the location of your audio files #AUDIO_URL_BASE = '//s3.amazonaws.com/SET-TO-MY-BUCKET/' # Which settings are passed into the javascript object js_config JS_SETTINGS = ['SITE_TITLE', 'AUDIO_URL_BASE'] # Which settings are aviable to the template tag GET_SETTING VISABLE_SETTINGS = ['SITE_TITLE', 'AUDIO_URL_BASE', 'GOOGLE_ANALYTICS_PROPERTY_ID', 'COLOR_CSS', 'SITE_EMAIL', 'PINAX_STRIPE_PUBLIC_KEY', 'SHOW_STRIPE_PLANS', 'OPEN_SITE'] ALLOW_ANONYMOUS = True PINAX_STRIPE_SECRET_KEY = 'sk_test_xxxxxxxxxxxxxxxxxxxx' PINAX_STRIPE_PUBLIC_KEY = 'pk_test_xxxxxxxxxxxxxxxxxxxx' PINAX_STRIPE_INVOICE_FROM_EMAIL = 'help@example.com' ACCESS_TG_RESTRICT = False TALKGROUP_RECENT_LENGTH = 120 # Minutes of history for TG recent_usage ADD_TRANS_AUTH_TOKEN = '7cf5857c61284' # Token to allow adding transmissions 
OPEN_SITE = True # If False new users cannot sign up ALLOW_GOOGLE_SIGNIN = False FIX_AUDIO_NAME = False # Load our local settings try: LOCAL_SETTINGS except NameError: try: from trunk_player.settings_local import * except ImportError: print("Failed to open settings_local.py")
nginx configuration
TODO: Complete this soon. (June 2019)