Add readme section about generating
parent 2006df75d0
commit b86568a6d0
2 changed files with 109 additions and 1 deletion

README.md (96 changed lines)

@@ -104,6 +104,7 @@ in the freemarker template as a Map of String-keys to String-values.

`/some/endpoint <- SomeType(foo:String)` is an endpoint declaration. It declares one endpoint that has a request body
data type called `SomeType`, which has a field called `foo` of type `String`.
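
In terms of the AST records documented in the 'Generating' section below, a rough sketch of what this single declaration should parse into looks like the following. The values are inferred from this README rather than taken from the parser, and the assumption that paths are stored segment by segment comes from the `<#items as segment>/${segment}</#items>` loop used further down.

```java
// Hypothetical AST fragments for `/some/endpoint <- SomeType(foo:String)`;
// the record types are the ones documented under 'Generating'.
TypeNode someType = new TypeNode("SomeType", List.of(new FieldNode("foo", "String")));
EndpointNode endpoint = new EndpointNode(
        new PathsNode(List.of("some", "endpoint")),  // path split into segments (assumed)
        "SomeType",                                   // request body type name
        Optional.empty());                            // no response type declared
```
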
### Data types

The DSL uses the Scala convention of writing data types after the field name, separated by a colon. Of course, the DSL parser
does not know anything about Java or Scala types; as far as it is concerned these are two strings, where the first one is
just named field-name and the other string is named field-type.
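
For example, in a hypothetical declaration such as `Example(count:Int, names:Seq[String])` (not taken from this README), the parser would record two fields: one with field-name `count` and field-type `Int`, and one with field-name `names` and field-type `Seq[String]`; it attaches no meaning to `Int` or `Seq[String]` beyond the literal strings.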

@@ -111,6 +112,8 @@ just named field-name and the other string is named field-type.

`Embedded(foo:Bar)` is a `namedTypeDeclaration`, which is parsed the same way as the request type above but isn't tied
to a specific endpoint.

### Automatically named data types

`/some/other/endpoint <- (bar:Seq[Embedded])` is another endpoint declaration. However, this time the request body is
not named in the DSL. Since all data types must have a name, the parser simply names it after the last path segment and
tacks the string 'Request' onto the end. So the AST will contain a data type named `endpointRequest` with a field named

@@ -122,9 +125,100 @@ decide to generate in the templates.

The only 'semantic' validation the parser performs is to check that no two types have the same name.
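
This also applies to the automatically named request and response types: for example, explicitly declaring a data type named `endpointRequest` in the same file as `/some/other/endpoint <- (bar:Seq[Embedded])` would, presumably, be rejected as a duplicate name (this consequence is inferred from the naming rules above, not stated explicitly in the README).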

### Response data types

It is possible to have an optional response data type declared like so:

`/some/other/endpoint <- (bar:Seq[Embedded]) -> ResponseType(foo: Bar)`

The right-pointing arrow `->` denotes a response type. It can be an anonymous data type, in which case the parser will
name it from the last path segment and add 'Response' to the end of the data type name.
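
For example (assuming the anonymous-type syntax works the same way on the response side as it does for request bodies, which is an inference rather than something this README states), a declaration such as `/some/other/endpoint <- (bar:Seq[Embedded]) -> (baz:String)` should yield a response data type named `endpointResponse` with a single field `baz` of type `String`.
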
### DSL config

The only key in the config block that the generator looks at is called `ending`; it is used as the file ending for
the file that results from applying the freemarker template.
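
For instance, a config block that sets `ending` to `scala` should, presumably, give the generated file a `.scala` file ending.
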
## Generating

If the parser is successful, it will hold the following data in the AST:

```java
public record DocumentNode(
        Map<String, String> config,
        List<TypeNode> typeDefinitions,
        List<EndpointNode> endpoints) {
}
```
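
The README does not show the wiring itself, but handing an object like this to FreeMarker as the root data model is ordinarily a one-liner. The following is a minimal sketch under assumed configuration; the template directory, template name and FreeMarker version are placeholders, not taken from this project:

```java
import freemarker.template.Configuration;
import freemarker.template.Template;

import java.io.File;
import java.io.StringWriter;

public class RenderSketch {
    // Applies one freemarker template to the parsed AST and returns the generated text.
    static String render(DocumentNode documentNode) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setDirectoryForTemplateLoading(new File("endpoints-templates")); // assumed location
        Template template = cfg.getTemplate("endpoints-list.ftl");           // the template added in this commit
        StringWriter out = new StringWriter();
        template.process(documentNode, out); // DocumentNode becomes the 'root' data object
        return out.toString();
    }
}
```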

This will be passed to the freemarker engine as the 'root' data object, meaning you have access to the parts in your freemarker template like this:

```injectedfreemarker
<#list typeDefinitions as type>
This is the data type name: ${type.name?cap_first} with the first letter capitalized.
</#list>
```

That is, you can directly reference `typeDefinitions`, `endpoints` or `config`.

### Config

The config object is simply a String-map with the keys and values unfiltered from the input file. Here is an example
that writes the value of a config key called 'package'.

`package ${config.package}`
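
If a key might be missing from the input file, FreeMarker's standard default-value operator avoids a template error; for example `package ${config.package!"com.example"}` falls back to the hypothetical package name `com.example` when no 'package' key was provided.
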
### Data types

These are all the data types the parser has collected, whether from explicit declarations, request payloads or response
bodies.

```java
public record TypeNode(String name, List<FieldNode> fields) { }
public record FieldNode(String name, String type) { }
```

Here is an example template that writes the data types as Scala case classes:

```injectedfreemarker
object Protocol:
<#list typeDefinitions?sort_by("name") as type>
case class ${type.name?cap_first}(
<#list type.fields as field>
${field.name} : ${field.type},
</#list>
)
</#list>
```

### Endpoints

The parser will collect the following data for endpoint declarations:

```java
public record EndpointNode(
        PathsNode paths,
        String inputType,
        Optional<String> outputType) {}

public record PathsNode(List<String> paths) {}
```
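
As a rough sketch (values inferred from the example declarations in this README, with paths again assumed to be stored as individual segments), the declaration `/some/other/endpoint <- (bar:Seq[Embedded]) -> ResponseType(foo: Bar)` should map onto these records like so:

```java
// Hypothetical values, not actual parser output.
EndpointNode other = new EndpointNode(
        new PathsNode(List.of("some", "other", "endpoint")),
        "endpointRequest",              // auto-named request type (see 'Automatically named data types')
        Optional.of("ResponseType"));   // the declared response type
```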

This is an example that will write out the endpoints with the path first, then the Input data type, then the optional
Output data type.

```injectedfreemarker
<#list endpoints as endpoint>
<#list endpoint.paths.paths>
<#items as segment>/${segment}</#items>
Input:
${endpoint.inputType?cap_first}
Output:
<#if endpoint.outputType.isPresent()>
${endpoint.outputType.get()?cap_first}
<#else>
Not specified
</#if>
</#list>

</#list>
```
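
For orientation, with the two example endpoints used throughout this README the template above should produce output roughly like the following; the exact blank lines and spacing depend on FreeMarker's whitespace handling, so treat this as a sketch rather than verbatim output:

```
/some/endpoint
Input:
SomeType
Output:
Not specified

/some/other/endpoint
Input:
EndpointRequest
Output:
ResponseType
```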

endpoints-templates/endpoints-list.ftl (new file, 14 lines)

@@ -0,0 +1,14 @@

```injectedfreemarker
<#list endpoints as endpoint>
<#list endpoint.paths.paths>
<#items as segment>/${segment}</#items>
Input:
${endpoint.inputType?cap_first}
Output:
<#if endpoint.outputType.isPresent()>
${endpoint.outputType.get()?cap_first}
<#else>
Not specified
</#if>
</#list>

</#list>
```