The plan for loading custom stocks is to have a `custom` directory under [`src/data`](https://git.obscuritus.ca:3000/danwizard208/charred-gold/src/branch/custom-stocks/src/data) and on startup walk it recursively, loading data from any files found, so that a server maintainer/data contributor can arbitrarily organize files, e.g. to keep a 'package' together. I can think of a number of ways to structure this, wondering if you have any insight/opinions:
1. Type by filename: for files matching `skills.json` or `traits.json`, load them as skills or traits respectively; `stock.json` and `resources.json` would be read as stock-specific data of the respective type, keyed within the file, or possibly by the name of the parent directory
2. Type by file extension: have extensions `.skills`, `.traits`, `.lifepaths`, and `.stock`, and try to parse the files as the respective type
3. Type by explicit key within file: read any `.json`, look for a key `type`, then look for a key `skills`, `traits`, `resources`, or `stock` and try to parse the corresponding value as the respective type, e.g. `{ "type": "stock", "stock": { "key": "elf", "name": "Elf", ...} }` (I suppose we could also try to parse the remaining non-`type` keys into the given type, so `{ "type": "stock", "key": "elf", "name": "Elf", ...}` or `{ "type": "lifepaths", "settings": [...] }`).
4. Type by implicit key within file: read any `.json`, look for type keys within the file, and parse each value as the type given by its key (e.g. `{ "stock": { "key": "elf", "name": "Elf", ...} }`). The simplest way of doing this would naturally allow for multiple types within the same file (e.g. `{ "stock": {...}, "traits": {...}, ...}`).
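For concreteness, here's a rough Python sketch of what option 4's loader might look like (the function and the set of type names are my own illustration, not taken from the repo):

```python
import json
from pathlib import Path

# Hypothetical set of recognized type keys; adjust to match the real data model.
KNOWN_TYPES = {"skills", "traits", "lifepaths", "stock", "resources"}

def load_custom_dir(root: str) -> dict[str, list]:
    """Walk `root` recursively, collecting entities keyed by their type key."""
    loaded: dict[str, list] = {t: [] for t in KNOWN_TYPES}
    for path in Path(root).rglob("*.json"):
        data = json.loads(path.read_text())
        for type_key, value in data.items():
            if type_key in KNOWN_TYPES:
                loaded[type_key].append(value)
            else:
                # Unknown keys are the "unexpected behaviour" risk: fail loudly
                # rather than silently dropping a misspelled type.
                raise ValueError(f"{path}: unknown type key {type_key!r}")
    return loaded
```

Failing on unknown keys (rather than ignoring them) is one way to mitigate the surprise factor noted below.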
I don't like option 1 as it feels brittle and restrictive; the remaining options allow for much more freedom in file naming and less risk of collision. Option 2 is nice and clean, but requires 'non-standard' file extensions (which I feel is fine). Option 3 is almost as clean as Option 2, with an extension making it clear the file is JSON and the classification data kept within the data stream itself. Option 4 is super flexible, is more concise than Option 3, and even allows 'all-in-one packages', again keeping classification within the file itself, but I could maybe see it resulting in unexpected behaviour.
Options 3 and 4 could even allow multiple objects within the same file: if the root is an array, parse each entity in the above way, so that `[{"type": "stock", "stock": {...}}, {"type": "stock", "stock": {...}}, ...]` and `[{"stock": {...}}, {"stock": {...}}, ...]` would read multiple distinct stocks.
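A sketch of how that array-root extension could be layered over options 3 and 4 together (again hypothetical names; the explicit-tag branch strips `type` when no nested payload key is present, per the parenthetical in option 3):

```python
import json

def iter_entities(text: str):
    """Yield (type_key, payload) pairs from a JSON document.

    An array root is treated as a sequence of entities; an object root
    as a single entity. Explicit `"type"` tags (option 3) and implicit
    type keys (option 4) are both accepted.
    """
    data = json.loads(text)
    docs = data if isinstance(data, list) else [data]
    for doc in docs:
        if "type" in doc:  # option 3: explicit tag
            type_key = doc["type"]
            if type_key in doc:
                payload = doc[type_key]
            else:  # payload is the remaining non-`type` keys
                payload = {k: v for k, v in doc.items() if k != "type"}
            yield type_key, payload
        else:  # option 4: every top-level key names a type
            for type_key, value in doc.items():
                yield type_key, value
```

Example: `[{"stock": {"key": "elf"}}, {"stock": {"key": "dwarf"}}]` yields two distinct stocks.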
Options 1, 3, and 4 would also allow us to extend to parsing YAML, TOML, XML, etc.
Personally my inclinations are towards 2 or 4. I *like* 4, but it may be overengineered, and 2 is crystal clear, though I don't like keeping metadata outside the data stream.
Thoughts?
I would never do anything other than option 2, because it allows easy reading and is friendly to someone designing data files.