Taxi: A language for documenting data models and the contracts of APIs (taxilang.org)
100 points by mmastrac on May 27, 2022 | hide | past | favorite | 36 comments


Defining basic scalars and giving them good names is useful.

Trying to define models (as Taxi defines them) statically is often a source of hidden technical debt as models change over time and different teams in different parts of an org can't agree on what a model should contain.

I like that the creator of Taxi is already aware of the anti-pattern lurking behind common domain modeling but doesn't take those lessons learned far enough.

Case in point: the field names in models themselves should be the actual Types (again, in the nomenclature of Taxi)

`firstName: FirstName` is redundant. It is equivalent to an alias.

Instead, a model should just be an open set of Types (I would probably call them attributes).

That's what Clojure Spec gets right from the get-go, and it leads to much more flexible and open systems.

Given a model Person with something like this:

    firstName: FirstName
    middleName: MiddleName
    lastName: LastName
    ssn: SSN

If I only have a FirstName and LastName and don't know the ssn yet, I can't pass these around without having to define an extra model `PersonButWithoutSSN`. I can try to mark ssn as optional, but aren't all these fields optional? And in what combination?
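A rough Python sketch of that open-attribute idea (the attribute names, specs, and `conform` function are all illustrative, belonging to neither Taxi nor Clojure Spec): validate whatever attributes are present, instead of requiring a closed Person model.

```python
import re

# Each attribute has its own spec; there is no closed "Person" model.
SPECS = {
    "firstName": lambda v: isinstance(v, str) and len(v) > 0,
    "lastName":  lambda v: isinstance(v, str) and len(v) > 0,
    "ssn":       lambda v: isinstance(v, str)
                 and re.fullmatch(r"\d{3}-\d{2}-\d{4}", v) is not None,
}

def conform(data: dict) -> dict:
    """Check only the keys that are present; unknown keys are rejected."""
    for key, value in data.items():
        spec = SPECS.get(key)
        if spec is None:
            raise KeyError(f"unknown attribute: {key}")
        if not spec(value):
            raise ValueError(f"invalid value for {key}: {value!r}")
    return data

# No SSN yet -- still a valid bundle of attributes, no extra
# PersonButWithoutSSN model needed.
conform({"firstName": "Ada", "lastName": "Lovelace"})
```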

Lastly, I would advise against enforcing business logic in models, which is what the conditional blocks are effectively doing. Even if they're just used for data transformation, you'll run the risk of embedding a full scripting language (and Taxi is already pretty far in that direction).

Stick to data coercion on the Type level. Any other transformation should be out of scope or defined separately to maximize interop.


> `firstName: FirstName` is redundant. It is equivalent to an alias.

FirstName here is its own type inheriting from String, NOT merely being a String, and that is important in order to prevent mixing two different ontological categories represented by the same implementation (String). For example, if we followed C-style type aliases, FirstName and LastName could be silently exchanged, leading to lots of silent errors.
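For illustration, the same idea sketched in Python with `typing.NewType` (the names here are invented): a static checker such as mypy rejects swapping the two types, even though both are plain strings at runtime.

```python
from typing import NewType

# Distinct static types over the same runtime representation (str).
FirstName = NewType("FirstName", str)
LastName = NewType("LastName", str)

def greet(first: FirstName, last: LastName) -> str:
    return f"Hello, {first} {last}"

first = FirstName("Ada")
last = LastName("Lovelace")

greet(first, last)     # OK
# greet(last, first)   # mypy error: arguments in the wrong order
```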

I sort of agree with the rest of what you said; however, more dynamic systems, although flexible, tend to bring "undesired entropy" into the system.


> Trying to define models (as Taxi defines them) statically is often a source of hidden technical debt as models change over time and different teams in different parts of an org can't agree on what a model should contain.

I completely agree.

My view (and the view of Taxi) is that Types should be shared by teams, but never models - for exactly the reasons you've outlined.

Systems need a way of describing their data as a composite of those scalar types - that's what Models provide. Field names are part of that contract, as they tell you structurally where information is. Types describe semantically what that information contains - you need both to semantically document the contract of data.

However, if multiple teams start sharing models, you end up with exactly the `PersonButWithoutSSN` situation you've described, which is a BadThing.

Our vision is to let data consumers semantically describe the data they require, in the format they want it. If that doesn't align with how a producer is emitting it, or requires stitching data from multiple producers, that's OK - tooling can enable that, provided you have rich semantic schemas.


I'm not familiar with Clojure, but: how would that suggestion handle a model like:

`hometown: City extends String, livesIn: City extends String`

Is the counter-argument that these are poorly specified types, and it should be: `hometown: Hometown extends City extends String, livesIn: LiveIn extends City extends String`?

I feel like at some point a balance needs to be struck. It's valuable to name types specifically, but that value can evaporate if they're too specific. It could approach FooFactoryFactoryGeneratorFactory Java type-hell, where the names no longer make sense because they have to be globally unique and maximally describe whatever entity they're typing. Maximal rigor isn't productive.


So... This might be a good opportunity for me to clear up some of the doubts I've had for a long time. I don't really work in "teams", I'm an amateur and all projects I've worked on have been just myself, for myself. So please bear with me.

Most of the things I do aren't really that important, but there's this one project where I need everything to work perfectly, or else I need to know about issues as soon as possible.

The whole thing depends on input I get from scraping, which basically means the data model can change at any time and there's a myriad of errors that can happen. Besides this input data, I also have to check that my "big-picture" assumptions about the model are correct before the actual program runs. In particular, the sample data already determines what values should be possible for most database tables; these tables don't change during program execution. I have an (also immutable) config file that determines what can be queried from the database. This means I can prove that my config is/isn't correct (e.g. the program should never try to query for a nonexistent value) before the program runs. And it should then be possible to turn these values into types to further tighten up the code and my workflow. But creating types from database values just feels all sorts of wrong. Or is it not?

All I really want is something that is DRY while also 100% ensuring the consistency of data before I run the program. I guess my idea is: everything that can be automatically proved about the program before it runs should be, and it should be stopped in its tracks before running if it's provably incorrect. I don't see how Clojure does that.
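As a sketch of that fail-fast idea (table contents, names, and config shape invented for illustration): derive the set of legal values from the immutable tables at startup, turn them into a type, and refuse to run any config that references anything else.

```python
from enum import Enum

def load_categories() -> list[str]:
    # Stand-in for a real query such as: SELECT name FROM categories
    return ["books", "music", "films"]

# Build a type from the data: every category used in code is one of these.
Category = Enum("Category", {name.upper(): name for name in load_categories()})

def check_config(config: dict) -> None:
    """Abort before the program proper runs if the config is provably wrong."""
    valid = {c.value for c in Category}
    for queried in config["query_categories"]:
        if queried not in valid:
            raise SystemExit(f"config error: unknown category {queried!r}")

check_config({"query_categories": ["books", "music"]})  # passes quietly
```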

From everything I've read about Clojure, it sounds like the perfect fit for this kind of thing, but in the end I always conclude I'm too dumb and undisciplined to rely on myself to use it correctly. I make too many mistakes that boil down to assuming something has a different type than it actually has, or that it has a field it actually doesn't, I forget about checking important conditions, etc. And, to be quite honest, I'm too lazy for my own benefit, so I need the system itself to force me to do it the right way, while Clojure assumes you're responsible enough.

So I guess my question is: If I switch to Clojure for a critical project, how can I sleep at night?


Hey folks. Creator of Taxi here. Pretty chuffed to wake up and find Taxi on the front page of HN.

We built Taxi originally as a "Better OpenAPI", being frustrated with the signal-to-noise ratio of OpenAPI / Swagger's format.

However, over the years, we've found it shines when used to describe how data and APIs from different teams & systems can interoperate.

Rather than describing an API endpoint as accepting a "String", it's much more descriptive to say "EmailAddress", or "WorkEmailAddress". By using these same scalar types to enrich models in other systems, you can start building pretty powerful knowledge graphs of how everything in your org hangs together.

Happy to answer any questions.


Is there a distinction between values and references to values? Is there a concept of compound values and their components? Is there anything on referential constraints definition? Is it possible to indicate that an item cannot be removed if it is still used in another item? Or that if an item is removed other items that depend on it, should also be removed?

For some explanation of what I mean, see: https://github.com/FransFaase/DataLang


https://xkcd.com/927/

Edit: Sorry, that was probably not polite. But I've been here before too many times. IO Docs, RAML, Blueprint... I guess the question is - why should I adopt this if I'm already committed to the Open API ecosystem, and why should I consider it if I am still deciding?


We started trying to be a better OpenAPI, however the goals have shifted, and today we're trying to build a cross-team semantic language for describing data and APIs - one which describes how data and APIs from independent teams can interact.

OpenAPI is a great way for someone to publish their API, so that another person can discover it, generate a Client SDK, and start building integrations. Taxi has the same goal, but for machines - so that software can automatically discover another API, and understand semantically how to interact with it.

The two techs can co-exist quite happily, and you get great documentation benefits in embedding Taxi types in your OpenAPI spec [0]

IMO, OpenAPI has a poor signal-to-noise ratio - a documentation tool you can't actually read without tooling. That feels misaligned to me.

More importantly, OpenAPI doesn't do a good enough job in describing the semantics of a contract. What String should I pass to your API? EmailAddress or CustomerId? In this transaction, where do I place the BuyerCustomerId vs SellerCustomerId? That's an important distinction, which other tools aren't trying to address.

With Taxi, we want to document APIs and data in a way that tooling can understand across the organisation, without having to fall back to reading docs, and that doesn't require shared models across the org.

From there, we can build tooling that can understand the semantics of an API, and then automate the integration (that's our other tool, Vyne).

[0] https://docs.taxilang.org/generating-taxi-from-source/#examp...


Unpopular opinion, but I think RDF and shacl solves a lot of this!


I hadn't heard of RDF. I've tried to review its W3C material just now, but the documentation is so abstract that I'm struggling to understand what idea it's trying to convey, or what problem it's trying to solve.

https://www.w3.org/TR/rdf11-concepts/

I think the concepts this document is trying to convey would be a lot clearer with an example of how one might use the concept to build a concrete system, and what problem the RDF technology helps solve.

The overall description is "RDF is a standard model for data interchange on the Web." That falls short of defining services and APIs, and then code-generating client and/or server logic, which is the goal of Taxi and Smithy: https://awslabs.github.io/smithy/

RDF appears to be a data format, and I don't understand what benefit I'd get from using it, except perhaps that automated reasoning tools could potentially interact with my data more easily. But that is rarely a problem I have when building systems: maybe I'd consider this if I was in charge of building a government system that publishes official data to the public (and then also offer a CSV/JSON feed which everyone would probably use in practice), but when working in a private (corporate/organizational) environment, the benefits of RDF seem lackluster. The only immediate benefit I can think of would be that, using unique IRIs to represent business data types, I could perhaps see what systems across a company handle a particular type of data: but this would also be possible when using Smithy or Taxi (if appropriately reusing types and models across many interacting systems).

The data format also appears to be reasonably verbose due to the inclusion of the field name and type for every field. But many of the systems I build are RPC, and the client and service already have a shared model of the data that they will be exchanging. Including RDF's additional information would seem to result in a wire format that's lower performance than something built on a toolkit like Apache Thrift/gRPC/Cap'n Proto, or even hand-coded HTTP APIs (as much as I despise that approach).

What benefits would one get using RDF vs. defining an API using Taxi or Smithy, and then binding it to an efficient binary or text HTTP protocol representation? (E.g., BSON, FlatBuffers, etc.)

If I wanted to serve self-describing data I'd be more inclined to use a technology like Ion, which is very concrete and simple to understand: https://amzn.github.io/ion-docs/

The documents on SHACL are a bit clearer due to the inclusion of concrete examples, but I'm still not sure (1) what engineering problem I would face when building a system that these technologies would help me solve, and (2) whether it's actually usable (cross-platform) and robust, or largely a set of conceptual ideas. There didn't seem to be many actively maintained open source projects focused on RDF.

https://www.w3.org/TR/2017/REC-shacl-20170720/

Please feel free to let me know if I'm underestimating or misunderstanding RDF. From my skim of its docs it seems like a data format, not a way to model APIs (and get down to business of giving them concrete protocols), which seems like a very crucial distinction from the problem that Taxi and Smithy solve.

I suppose since RDF can describe "any" data, one could invent a set of data types that express operations, models, types, etc., but unless there's already a standard set of types for expressing operations, models, types, and bindings to protocol interfaces, you're leaving it to the developers of each service to define their own RDF representations of these things, which means these concepts wouldn't be modeled consistently across services, preventing universal automated reasoning or transformation. (Let me know if there's a spec for using RDF for defining service interfaces ... there are a lot of docs associated with that WG and I can't read them all before writing an HN comment :-)

From all of what I've read, it looks like RDF is trying to define a data format that can be understood "universally" through the definition of well known IRIs; that feels like a different problem than defining a service interface, service stubs, and client libraries, that might be used privately within an organization for building client/server systems.


RDF and similar tech does suffer from documentation that is hard to digest, and hard to see how they combine.

The semantic stack is fundamentally about how you share data, integrate it, etc. Taxi seems to try to differentiate itself by describing data better for interchange (I've not looked at it deeply, just some of the comments here describing it). Also, the restrictions on data, i.e. its shape (must have property x), are what SHACL supports, which is also declarative in RDF.

There are a bunch of tools, extensions and what not out there for all this, most of which isn't grasped by average devs (don't mean that in a bad way).


There is https://www.hydra-cg.com in this space, which I've not even quite understood!

Also https://comunica.dev

I think the goals of these go beyond basic API specification and client generation.


How do Taxi and its goals compare to those of Smithy? https://awslabs.github.io/smithy/

On the surface it looks like they're attempting to solve very similar problems. Smithy also seems to come with code generators that can produce client libraries in various languages for calling the APIs too -- something that would be an important feature to me. There are implementations for a number of languages apparently, at various stages of maturity: https://awslabs.github.io/smithy/implementations.html

Among them are TypeScript, Go, Rust, Kotlin, Swift, and Scala; server-side generators for TypeScript; and model converters for converting Smithy models to OpenAPI or JSON Schema (presumably where compatible).

Cool project. I've definitely hoped that a toolkit would become de facto standard that provides model definition, protocol bindings (abstract operations accepting/returning models aka resources, described as a specific wire protocol like HTTP), and finally code generation for both client and server-sides of the interface for multiple platforms.

It looks like Smithy encourages the same approach of defining unique types to represent model fields. A Smithy example pulled from their page:

    // "pattern" is a trait.
    @pattern("^[A-Za-z0-9 ]+$")
    string CityId

    resource City {
        identifiers: { cityId: CityId },
        read: GetCity,
        list: ListCities,
        resources: [Forecast],
    }

    resource Forecast {
        identifiers: { cityId: CityId },
        read: GetForecast,
    }

    @readonly
    operation GetCity {
        input: GetCityInput,
        output: GetCityOutput,
        errors: [NoSuchResource]
    }

    @input
    structure GetCityInput {
        // "cityId" provides the identifier for the resource
        // and has to be marked as required.
        @required
        cityId: CityId
    }

Where existing RPC toolkits like Apache Thrift, gRPC, and Cap'n Proto define and derive their wire formats from your IDL, and you have no control over them, I like this concept's power because you can:

(1) use it to match and model the interface of an existing API, if you need to

(2) generate client and server stubs for many platforms, saving the hassle of trying to integrate with raw HTTP APIs (frequently mistermed "REST")

(3) ideally the toolkit would also provide bindings or plugins for multiple potential protocols, such as an HTTP resource-oriented representation, or a binary BSON representation, etc., to allow for easily making tradeoffs between APIs that are human-accessible and those that are more highly efficient (although if you can count on HTTP/2 or HTTP/3 at all layers, regular "text" requests should actually be quite efficient). And if you don't need the API to be easily callable from browser JavaScript, all the substantial parameters could be represented by HTTP headers to benefit from those protocols' header compression.

(4) Perhaps a plugin could also provide a special-purpose protocol translator (configuration-driven, though it might require the ability to define state machines); the translator could be implemented in one language as a sort of VM with a sequence of commands describing how the service API, and its operations, models, resources, and types, are converted into a binary wire format.

(5) Toolkits like this, if designed with foresight, could also be useful for modeling (large) data sets -- a completely different use-case than providing APIs. For example, a plugin to convert collections of models/resources into Apache Parquet and back, or other similar efficient bulk data formats. You could use the toolkit to model your system's log file format (where resource collections convert to newline-delimited JSON text files), or perhaps performance data.

Anywhere that data is handled, it ought to have a formal model, in my opinion. Toolkits like these will simplify translating and manipulating it between formats and systems in a type-safe way, given sufficient code generation support. (For example, in languages that support opaque type aliases, CityId, despite being a string, would not be castable to/from string.)

Instead, a transformation can be defined to convert a CityId to a string if necessary -- otherwise, developers are discouraged from conflating the many types that might be aliases for string as interchangeable, permitting errors that a strong type system with opaque aliases would catch. (For example, since a CityId must match a regex, creating one would require calling a function that accepts a string as input, validates the regex, and then returns a CityId as long as the regex matches.) It's interesting to imagine what other semantic validation could be lifted up to the model level to make applications more robust. Perhaps the next stage beyond this one might be relationships between data. For example, a model could include a set of words, and then a text blob that must consist exclusively of space-delimited words from that list; or a list of banned words.
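A hedged Python sketch of that validated-constructor pattern (the factory name and API are illustrative, borrowing the CityId regex from the Smithy example above): the only way to obtain a CityId is through a function that checks the invariant.

```python
import re
from typing import NewType

# Opaque-ish alias: distinct to a static checker, a plain str at runtime.
CityId = NewType("CityId", str)

_CITY_ID_RE = re.compile(r"^[A-Za-z0-9 ]+$")

def make_city_id(raw: str) -> CityId:
    """Validate the regex before granting the CityId type."""
    if not _CITY_ID_RE.match(raw):
        raise ValueError(f"not a valid CityId: {raw!r}")
    return CityId(raw)

city = make_city_id("New York")   # fine
# make_city_id("New/York")        # raises ValueError
```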

Anyway, it looks like Taxi and Smithy might be close enough in goals to potentially benefit from exploring collaboration. On the other hand some competition might be healthy.


Yep, similar goals, similar space.

Smithy looks nice.

Smithy didn't exist (publicly) when we started, so I guess we're the OG? :)


At work I often use Go notation for quickly describing my models to share with other devs, or for collaboratively iterating on ideas; both in text form and on UML diagrams.

The syntax is pretty simple and doesn't have "noise", it supports type aliases, too ("type Email string"), you can emulate optionals, there are useful built-in containers such as slices and maps on the language level (no need to import additional packages), most IDEs support syntax highlighting, etc.

It's an ad hoc solution and there's no tooling for it but Go has a built-in parser for its syntax so technically it's not hard to write a codegen tool based on that.


Not to be confused with Taxi [0], the esolang where programs are a series of directions for a taxi. "Programs are built by giving directions to destinations on a map which is where computation takes place."

[0]: https://esolangs.org/wiki/Taxi


I wish CUElang is going to succeed.

From what I'm seeing, it's already better than Taxi at what Taxi is supposed to be excellent at, and yet does much more, while still not being Turing complete.

I enjoy it more than JSON, YAML, TOML, and any respective schema associated with those. Sum and union types are a killer feature, especially since CUE can define a type and a value using the same syntax.

But it must be hard to implement, because for now, only the reference implementation, in Go, exists.


I also personally think CUE hits a sweet spot. The union (pun intended) between types and values (both sets) is really a game changer!

CUE is indeed not trivial to implement; its intricate semantics in particular are tricky. I work on a CUE alternative, called ReSeT, to really understand the semantics better.

See: https://old.reddit.com/r/ProgrammingLanguages/comments/uukxv...


I've wanted to use cuelang twice already, but having to install the go language on Ubuntu is a major inconvenience. Instead I used NestedText (I tend to do all my parsing with Pydantic anyway).


I’m sad to see that Taxi doesn’t support union/sum types! We make use of them quite heavily at Notion - our “block” model is a union of 60 subtypes - so it’s a make or break feature for any semantic language we’d use internally. For Kotlin, taxi could support this by generating sealed (data) class. GraphQL’s schema language seems similar in some respects, and supports union types but not generics.

I’m also curious about how teams or systems onboard to Taxi. There’s a lot of advice about “models shouldn’t be shared between teams” and such. How would you suggest an engineering org go from a shared set of models defined in some other language today, to model-per-team defined in Taxi? I’m a bit confused about to what degree models should encode some existing types in an implementation, or if models should be idealistic, and the implementation will need to do a lot of data munging to produce a structure that matches the model.

Finally, how should teams think about computed properties / expressions? If models describe the input or output of an API my team owns, wouldn’t our API compute these properties internally? Why expose a possibly-incorrect expression to compute it?


Hey.

There's a few things here, and I'd be keen to chat about them in more detail - I'm marty at vyne dot co, or often on our slack - if you're keen to dive into anything further.

> I’m sad to see that Taxi doesn’t support union/sum types!

These are in progress right now. I'm not sure how well it fits your usecase, so it'd be interesting to chat.

> For Kotlin, taxi could support this by generating sealed (data) class.

That's a nice idea. We have a kotlin generator for taxi, but I hadn't gotten to how we would handle the union types. I feel like you just saved us a few iterations! :)

> I’m also curious about how teams or systems onboard to Taxi. There’s a lot of advice about “models shouldn’t be shared between teams” and such.

What we're advocating against here is a common domain model - where everyone in the org is expected to model a Customer in exactly the same way - this leads to everything being optional.

In terms of onboarding, there's normally a few components:

* A shared set of types (no models), described in a taxi project, which lives in a git repo somewhere

* Teams start describing their APIs using those types - either by publishing a Taxi model directly, or (more commonly), embedding the taxi types in their Swagger / Proto etc.

* Teams then start querying using TaxiQL and a query server (like Vyne) to fetch data from across those APIs. This keeps the consumers decoupled from the producer schemas.

> Finally, how should teams think about computed properties / expressions? [..] wouldn’t our API compute these properties internally?

Yeah, it's uncommon for a producer contract to include a computed property - though we have seen teams do this against things like CSV models.

It's more common for consumers to use these for declaring the computed result they want for a field in a query, and let the query service handle the computation. This allows either ad-hoc computation (defined in a field), or shared common expressions, defined as an expression type.


This feels almost identical to GraphQL

With GraphQL, you'd define types -- say something like:

    type User {
        id: ID
        name: String
        age: Int
        todos: [Todo]
    }

    type Todo {
        id: ID
        user_id: ID
        text: String
        is_completed: Boolean
    }
These can be autogenerated based on, for instance, the tables in your SQL database

Then you can use tools to generate query functions, like:

    query {
        users(where: { age: { _lte: 25 } }) {
            name
            todos(where: { is_completed: { _eq: false } }) {
                text
            }
        }
    }
And finally generate a typed SDK so that operations are strongly-typed for whatever language it is you're using to query the GraphQL API


Looks nice. But it misses this: https://en.wikipedia.org/wiki/Object_Constraint_Language

That little language really helps to express constraints over data.


Are there any benefits of Taxi over Protobuf (the language)?


Protobuf is rather (extremely?) weak type-wise, so a lot of it probably boils down to "it has reasonable / usable types". Like, it doesn't even have semantics for saying "this string is assumed to be a timestamp" in a way you can refer to in multiple locations, the best you can do is name your field "ThingTimestamp" and hope people get the right idea.

This also seems to be describing a querying system, data mapping (e.g. for using CSV data), runtime-calculated data (likely used to drive the CSV-reading stuff), and a number of other things that are less related to the data itself and more to how you get that data, beyond just an RPC API described by types. E.g. the annotations and computed types could describe a fair number of SQL queries in your data model, so you could probably just point it at a database with very minor "glue" code.

Whether that second part is relevant probably depends on how much you want to write in your implementation language vs in taxi's language... but it's substantially different than protobuf in any case.


Taxi is a documentation and query language, but not a runtime wire format. I.e., you can't encode a message into Taxi and send it across the wire, like you can with Protobuf.

Taxi is intended to work with protobuf, rather than replace it.

The idea is teams work on the Taxi type system collaboratively, building a library of terms (FirstName / LastName / etc), which can be applied consistently across multiple team's schemas.

Developers using Protobuf / OpenAPI / RAML / etc. can embed those terms directly into their existing schemas, to improve documentation and describe interoperability between multiple internal systems.

(TeamA's field called "givenName" is the same data as TeamB's field called "FIRST_NAME")
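An illustrative Python sketch of that mapping (the registry format here is invented, not Taxi's): both teams tag their fields with the same semantic type, so tooling can re-key records and recognize the data as equivalent despite different field names.

```python
# Each team's schema maps its own field names to shared semantic types.
TEAM_A_SCHEMA = {"givenName": "FirstName", "surname": "LastName"}
TEAM_B_SCHEMA = {"FIRST_NAME": "FirstName", "LAST_NAME": "LastName"}

def by_semantic_type(schema: dict, record: dict) -> dict:
    """Re-key a record from local field names to shared semantic types."""
    return {schema[field]: value for field, value in record.items()}

a = by_semantic_type(TEAM_A_SCHEMA, {"givenName": "Ada", "surname": "Lovelace"})
b = by_semantic_type(TEAM_B_SCHEMA, {"FIRST_NAME": "Ada", "LAST_NAME": "Lovelace"})
assert a == b  # same data, despite the differing producer schemas
```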

As I write this, I realize our docs are missing how to use Taxi with Protobuf, which is kinda embarrassing. There's examples documented elsewhere[0], but we need to get the Taxi docs updated!

[0] https://docs.vyne.co/reference/message-formats/protobuf/#emb...


Yep, excellent point, I completely missed that it had no encoding specified. Thank you! That's mostly on me for skimming, don't take it as any kind of strong "docs should be more clear on this" signal :)

Honestly I think this makes it more interesting... though I'm curious how it handles its generics in an agnostic way, as protobuf has some, but they're quite crippled. Or do you just have to know the limitations of your encoding and not use those features, and/or build your own translation layer to handle it? E.g. you can get pretty far with a double-encoded blob of bytes.


Taxi has support for generics[0], but it's pretty nascent at the moment.

Internally, the compiler understands the idea of a generic type A<B>, which is used for Array<T> and Stream<T>, as well as describing generic functions.

I definitely think there's more we could do here, but haven't hit the usecase yet.

Hit me up on Slack[1] if you wanna discuss ideas, as I'd love to hear more about what role generics can play.

[0] https://docs.taxilang.org/language-reference/functions/#gene... [1] https://join.slack.com/t/vyne-dev/shared_invite/zt-697laanr-...


Ummm... google.protobuf.Timestamp is a thing. Use it. Don't just throw an int32 at me and _say_ it's a timestamp. /s


It's a struct which contains an int field, which iirc has more encoding, transfer, and decoding costs than an int (nobody cares when it's one, you might care when it's many thousands), and implies nullability because all message types are nullable. And unless you're using a Google-blessed code generator in a Google-blessed language, it's also noticeably more verbose to use[1].

But yes, you do indeed have user-defined structure (message) types in protobuf, and you can bend them quite far. There are also annotations of a sort, though support there seems worse, personally I've only very rarely been able to use them cross-language at all.

[1] yeah, there are alternate code generators, but their support and quality is "mixed" to put it mildly. They do exist though, and many have a plugin system of some kind for making your own less-painful custom types.


From the looks of it: documentation as a first-class part of the schema definition, and enforceable, arbitrarily meaningful subtypes of scalars. There are tons of use cases for those features, which are achievable with Protobuf for struct-y types, but the advantage of using scalar subtypes is better integration into common host systems without gymnastics to wrap/unwrap the values.


Exactly.

Also, the idea that those types can be embedded across multiple schemas is how you can describe cross-system interoperability, without requiring all systems to have the same Model for "Customer" or "Account"


What benefits does this provide over Swagger/OpenAPI?

The ecosystem for OpenAPI is pretty rich - what is the intent here? Does it live with OpenAPI or replace it?


Originally, Taxi was built because the readability / signal-to-noise ratio of OpenAPI is pretty poor. Plus, we wanted to build a documentation spec that could automate integration - describe the specific contracts of what an API does, the data it exposes, and how and where that data can be leveraged in other APIs.

You can't really read a Swagger spec to get a sense of what it can do - you have to rely on tooling and UIs to make it understandable. For a toolkit whose entire purpose is documentation, that felt pretty frustrating.

Also, in OpenAPI, you can't document how data attributes interact - i.e., the specifics about what data an operation needs (beyond a map of Strings and Numbers). We wanted a way of saying "This operation accepts an EmailAddress, and returns an AccountBalance" that was machine-understandable.

However, you're right - the OpenAPI ecosystem is really rich, and we're not going to displace it.

We've built Taxi extensions for OpenAPI[0] which allow you to embed the taxi type metadata directly inside OpenAPI specs. This helps with describing the interoperability (and is how tooling like Vyne[1] can automate orchestration between unrelated APIs)

There's Swagger-to-taxi[2] tooling which will convert a Taxi schema to Swagger and back again - though this is more used internally to transpile OpenAPI on the fly. I doubt anyone will throw out their OpenAPI specs for Taxi - and that's ok.

[0] https://docs.taxilang.org/generating-taxi-from-source/#swagg... [1] https://vyne.co [2] https://gitlab.com/taxi-lang/taxi-lang/-/tree/develop/swagge...


Not to be confused with the abominable Taxii



