# TypeORM

> TypeORM is an ORM that can run in NodeJS, Browser, Cordova, Ionic, React Native, NativeScript, Expo, and Electron platforms and can be used with TypeScript and JavaScript.

- [TypeORM](https://typeorm.io/index.md)

## maintainers

- [Maintainers](https://typeorm.io/maintainers.md): Meet the team behind TypeORM

## docs

### advanced-topics

- [Indices](https://typeorm.io/docs/advanced-topics/indices.md): Column indices
- [Entity Listeners and Subscribers](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md): What is an Entity Listener?
- [Logging](https://typeorm.io/docs/advanced-topics/logging.md): Enabling logging
- [Performance and optimization in TypeORM](https://typeorm.io/docs/advanced-topics/performance-optimizing.md): Introduction to performance optimization
- [Transactions](https://typeorm.io/docs/advanced-topics/transactions.md): Creating and using transactions
- [Using CLI](https://typeorm.io/docs/advanced-topics/using-cli.md): Installing CLI

### data-source

- [DataSource](https://typeorm.io/docs/data-source/data-source.md): What is a DataSource?
- [DataSource API](https://typeorm.io/docs/data-source/data-source-api.md): Options used to create this dataSource
- [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md): What is DataSourceOptions?
- [Multiple data sources, databases, schemas and replication setup](https://typeorm.io/docs/data-source/multiple-data-sources.md): Using multiple data sources
- [Handling null and undefined values in where conditions](https://typeorm.io/docs/data-source/null-and-undefined-handling.md): In `WHERE` conditions the values null and undefined are not strictly valid values in TypeORM

### drivers

- [Google Spanner](https://typeorm.io/docs/drivers/google-spanner.md): Installation
- [Microsoft SQLServer](https://typeorm.io/docs/drivers/microsoft-sqlserver.md): Installation
- [MongoDB](https://typeorm.io/docs/drivers/mongodb.md): MongoDB support
- [MySQL / MariaDB](https://typeorm.io/docs/drivers/mysql.md): MySQL, MariaDB and Amazon Aurora MySQL are supported as TypeORM drivers
- [Oracle](https://typeorm.io/docs/drivers/oracle.md): Installation
- [Postgres / CockroachDB](https://typeorm.io/docs/drivers/postgres.md): PostgreSQL, CockroachDB and Amazon Aurora Postgres are supported as TypeORM drivers
- [SAP HANA](https://typeorm.io/docs/drivers/sap.md): Installation
- [SQLite](https://typeorm.io/docs/drivers/sqlite.md): Installation

### entity

- [Embedded Entities](https://typeorm.io/docs/entity/embedded-entities.md): Reduce duplication in your app (using composition over inheritance) with embedded columns
- [Entities](https://typeorm.io/docs/entity/entities.md): What is an Entity?
- [Entity Inheritance](https://typeorm.io/docs/entity/entity-inheritance.md): Concrete Table Inheritance
- [Separating Entity Definition](https://typeorm.io/docs/entity/separating-entity-definition.md): Defining Schemas
- [Tree Entities](https://typeorm.io/docs/entity/tree-entities.md): TypeORM supports the Adjacency list and Closure table patterns for storing tree structures
- [View Entities](https://typeorm.io/docs/entity/view-entities.md): What is a ViewEntity?

### getting-started

- [Getting Started](https://typeorm.io/docs/getting-started.md): TypeORM is an ORM

### guides

- [Active Record vs Data Mapper](https://typeorm.io/docs/guides/active-record-data-mapper.md): What is the Active Record pattern?
- [Example using TypeORM with Express](https://typeorm.io/docs/guides/example-with-express.md): Initial setup
- [Migration from Sequelize to TypeORM](https://typeorm.io/docs/guides/sequelize-migration.md): Setting up a data source
- [SQL Tag](https://typeorm.io/docs/guides/sql-tag.md): Write SQL queries using template literals with automatic parameter handling based on your database type. This helps prevent SQL injection while making queries more readable. The SQL tag is implemented as a wrapper around the `.query` method, providing an alternative interface with the same underlying functionality
- [Using with JavaScript](https://typeorm.io/docs/guides/usage-with-javascript.md): TypeORM can be used not only with TypeScript, but also with JavaScript
- [Using Validation](https://typeorm.io/docs/guides/validation.md): To use validation use class-validator

### help

- [Decorator reference](https://typeorm.io/docs/help/decorator-reference.md): Entity decorators
- [FAQ](https://typeorm.io/docs/help/faq.md): How do I update a database schema?
- [Support](https://typeorm.io/docs/help/support.md): Found a bug or want to propose a new feature?
- [Supported platforms](https://typeorm.io/docs/help/supported-platforms.md): NodeJS

### migrations

- [Query Runner API](https://typeorm.io/docs/migrations/api.md): Changing a database schema through an API with QueryRunner
- [Creating manually](https://typeorm.io/docs/migrations/creating.md): Create a new migration using the CLI by specifying its name and location
- [Executing and reverting](https://typeorm.io/docs/migrations/executing.md): Run migrations in production using a CLI command
- [Extra options](https://typeorm.io/docs/migrations/extra.md): Timestamp
- [Faking Migrations and Rollbacks](https://typeorm.io/docs/migrations/faking.md): Fake-run a migration using the --fake flag (-f for short)
- [Generating](https://typeorm.io/docs/migrations/generating.md): Automatically generate migration files by comparing entity changes against the existing database schema
- [Reverting](https://typeorm.io/docs/migrations/reverting.md): Revert the changes of a migration
- [Setup](https://typeorm.io/docs/migrations/setup.md): Set up your DataSource options before working with migrations
- [Status](https://typeorm.io/docs/migrations/status.md): Show all migrations and whether they've been run
- [Vite](https://typeorm.io/docs/migrations/vite.md): Using TypeORM migrations in a Vite project
- [How migrations work?](https://typeorm.io/docs/migrations/why.md): Synchronizing model changes into the database once you get into production

### query-builder

- [Caching queries](https://typeorm.io/docs/query-builder/caching.md): Cache results selected by getMany, getOne, getRawMany, getRawOne and getCount
- [Delete using Query Builder](https://typeorm.io/docs/query-builder/delete-query-builder.md): Delete
- [Insert using Query Builder](https://typeorm.io/docs/query-builder/insert-query-builder.md): Create INSERT queries using QueryBuilder
- [Working with Relations](https://typeorm.io/docs/query-builder/relational-query-builder.md): RelationQueryBuilder is a special type of QueryBuilder for working with your relations
- [Select using Query Builder](https://typeorm.io/docs/query-builder/select-query-builder.md): What is a QueryBuilder?
- [Update using Query Builder](https://typeorm.io/docs/query-builder/update-query-builder.md): Create UPDATE queries using QueryBuilder

### query-runner

- [Query Runner](https://typeorm.io/docs/query-runner.md): What is a QueryRunner?

### relations

- [Eager and Lazy Relations](https://typeorm.io/docs/relations/eager-and-lazy-relations.md): Eager relations
- [Many-to-many relations](https://typeorm.io/docs/relations/many-to-many-relations.md): What are many-to-many relations?
- [Many-to-one / one-to-many relations](https://typeorm.io/docs/relations/many-to-one-one-to-many-relations.md): A relation where A contains multiple instances of B, but B contains only one instance of A
- [One-to-one relations](https://typeorm.io/docs/relations/one-to-one-relations.md): A relation where A contains only one instance of B, and B contains only one instance of A
- [Relations](https://typeorm.io/docs/relations/relations.md): What are relations?
- [Relations FAQ](https://typeorm.io/docs/relations/relations-faq.md): How to create a self-referencing relation?

### working-with-entity-manager

- [Custom repositories](https://typeorm.io/docs/working-with-entity-manager/custom-repository.md): Create a custom repository containing methods to work with your database
- [EntityManager API](https://typeorm.io/docs/working-with-entity-manager/entity-manager-api.md): dataSource - the DataSource used by EntityManager
- [Find Options](https://typeorm.io/docs/working-with-entity-manager/find-options.md): Basic options
- [Repository APIs](https://typeorm.io/docs/working-with-entity-manager/repository-api.md): Repository API
- [EntityManager](https://typeorm.io/docs/working-with-entity-manager/working-with-entity-manager.md): Using EntityManager you can manage (insert, update, delete, load, etc.) any entity
- [Repository](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md): Repository is like EntityManager, but its operations are limited to a concrete entity
---

# Full Documentation Content

# Maintainers

Meet the team behind TypeORM:

- [Michael Bromley](https://github.com/michaelbromley): Steering, Technical Liaison
- [David Höck](https://github.com/dlhck): Steering, External Relations
- [Lucian Mocanu](https://github.com/alumni): Technical Lead
- [Naor Peled](https://github.com/naorpeled): Maintainer
- [Giorgio Boa](https://github.com/gioboa): Maintainer
- [Piotr Kuczynski](https://github.com/pkuczynski): Maintainer
- [Mohammed Gomaa](https://github.com/G0maa): Maintainer
- [Julian Pufler](https://github.com/pujux): Maintainer
- [Simon Garner](https://github.com/sgarner): Maintainer
- [Pieter Wigboldus](https://github.com/w3nl): Maintainer
- [Mike Guida](https://github.com/mguida22): Maintainer

---

# Indices

## Column indices

You can create a database index for a specific column by using `@Index` on the column you want indexed.
You can create indices for any columns of your entity. Example:

```
import { Entity, PrimaryGeneratedColumn, Column, Index } from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Index()
    @Column()
    firstName: string

    @Column()
    @Index()
    lastName: string
}
```

You can also specify an index name:

```
import { Entity, PrimaryGeneratedColumn, Column, Index } from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Index("name1-idx")
    @Column()
    firstName: string

    @Column()
    @Index("name2-idx")
    lastName: string
}
```

## Unique indices

To create a unique index you need to specify `{ unique: true }` in the index options:

> Note: CockroachDB stores unique indices as `UNIQUE` constraints

```
import { Entity, PrimaryGeneratedColumn, Column, Index } from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Index({ unique: true })
    @Column()
    firstName: string

    @Column()
    @Index({ unique: true })
    lastName: string
}
```

## Indices with multiple columns

To create an index with multiple columns you need to put `@Index` on the entity itself and specify all column property names which should be included in the index. Example:

```
import { Entity, PrimaryGeneratedColumn, Column, Index } from "typeorm"

@Entity()
@Index(["firstName", "lastName"])
@Index(["firstName", "middleName", "lastName"], { unique: true })
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    firstName: string

    @Column()
    middleName: string

    @Column()
    lastName: string
}
```

## Spatial Indices

MySQL, CockroachDB and PostgreSQL (when PostGIS is available) support spatial indices.
To create a spatial index on a column in MySQL, add an `@Index` with `spatial: true` on a column that uses a spatial type (`geometry`, `point`, `linestring`, `polygon`, `multipoint`, `multilinestring`, `multipolygon`, `geometrycollection`):

```
@Entity()
export class Thing {
    @Column("point")
    @Index({ spatial: true })
    point: string
}
```

To create a spatial index on a column in PostgreSQL, add an `@Index` with `spatial: true` on a column that uses a spatial type (`geometry`, `geography`):

```
export interface Geometry {
    type: "Point"
    coordinates: [number, number]
}

@Entity()
export class Thing {
    @Column("geometry", {
        spatialFeatureType: "Point",
        srid: 4326,
    })
    @Index({ spatial: true })
    point: Geometry
}
```

## Concurrent creation

To avoid taking an `ACCESS EXCLUSIVE` lock when creating and dropping indexes in Postgres, you may create them with the `CONCURRENTLY` modifier. To use the concurrent option, you need to set `migrationsTransactionMode: "none"` in your data source options. TypeORM generates SQL with this modifier when the `concurrent` option is specified on the index:

```
@Index(["firstName", "middleName", "lastName"], { concurrent: true })
```

For more information see the [Postgres documentation](https://www.postgresql.org/docs/current/sql-createindex.html).

## Disabling synchronization

TypeORM does not support some index options and definitions (e.g. `lower`, `pg_trgm`) due to many database-specific differences and multiple issues with getting information about existing database indices and synchronizing them automatically. In such cases you should create the index manually (for example, in [the migrations](https://typeorm.io/docs/migrations/why.md)) with any index signature you want. To make TypeORM ignore these indices during synchronization, use the `synchronize: false` option on the `@Index` decorator.
For example, you create an index with case-insensitive comparison:

```
CREATE INDEX "POST_NAME_INDEX" ON "post" (lower("name"))
```

After that, you should disable synchronization for this index to avoid its deletion on the next schema sync:

```
@Entity()
@Index("POST_NAME_INDEX", { synchronize: false })
export class Post {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string
}
```

---

# Entity Listeners and Subscribers

## What is an Entity Listener?

Any of your entities can have methods with custom logic that listen to specific entity events. You must mark those methods with special decorators depending on what event you want to listen to.

**Note:** Do not make any database calls within a listener; opt for [subscribers](#what-is-a-subscriber) instead.

### `@AfterLoad`

You can define a method with any name in the entity and mark it with `@AfterLoad`, and TypeORM will call it each time the entity is loaded using `QueryBuilder` or repository/manager find methods. Example:

```
@Entity()
export class Post {
    @AfterLoad()
    updateCounters() {
        if (this.likesCount === undefined) this.likesCount = 0
    }
}
```

### `@BeforeInsert`

You can define a method with any name in the entity and mark it with `@BeforeInsert`, and TypeORM will call it before the entity is inserted using repository/manager `save`. Example:

```
@Entity()
export class Post {
    @BeforeInsert()
    updateDates() {
        this.createdDate = new Date()
    }
}
```

### `@AfterInsert`

You can define a method with any name in the entity and mark it with `@AfterInsert`, and TypeORM will call it after the entity is inserted using repository/manager `save`.
Example:

```
@Entity()
export class Post {
    @AfterInsert()
    resetCounters() {
        this.counters = 0
    }
}
```

### `@BeforeUpdate`

You can define a method with any name in the entity and mark it with `@BeforeUpdate`, and TypeORM will call it before an existing entity is updated using repository/manager `save`. Keep in mind, however, that this will occur only when information is changed in the model. If you run `save` without modifying anything in the model, `@BeforeUpdate` and `@AfterUpdate` will not run. Example:

```
@Entity()
export class Post {
    @BeforeUpdate()
    updateDates() {
        this.updatedDate = new Date()
    }
}
```

### `@AfterUpdate`

You can define a method with any name in the entity and mark it with `@AfterUpdate`, and TypeORM will call it after an existing entity is updated using repository/manager `save`. Example:

```
@Entity()
export class Post {
    @AfterUpdate()
    updateCounters() {
        this.counter = 0
    }
}
```

### `@BeforeRemove`

You can define a method with any name in the entity and mark it with `@BeforeRemove`, and TypeORM will call it before an entity is removed using repository/manager `remove`. Example:

```
@Entity()
export class Post {
    @BeforeRemove()
    updateStatus() {
        this.status = "removed"
    }
}
```

### `@AfterRemove`

You can define a method with any name in the entity and mark it with `@AfterRemove`, and TypeORM will call it after the entity is removed using repository/manager `remove`. Example:

```
@Entity()
export class Post {
    @AfterRemove()
    updateStatus() {
        this.status = "removed"
    }
}
```

### `@BeforeSoftRemove`

You can define a method with any name in the entity and mark it with `@BeforeSoftRemove`, and TypeORM will call it before an entity is soft removed using repository/manager `softRemove`.
Example:

```
@Entity()
export class Post {
    @BeforeSoftRemove()
    updateStatus() {
        this.status = "soft-removed"
    }
}
```

### `@AfterSoftRemove`

You can define a method with any name in the entity and mark it with `@AfterSoftRemove`, and TypeORM will call it after the entity is soft removed using repository/manager `softRemove`. Example:

```
@Entity()
export class Post {
    @AfterSoftRemove()
    updateStatus() {
        this.status = "soft-removed"
    }
}
```

### `@BeforeRecover`

You can define a method with any name in the entity and mark it with `@BeforeRecover`, and TypeORM will call it before an entity is recovered using repository/manager `recover`. Example:

```
@Entity()
export class Post {
    @BeforeRecover()
    updateStatus() {
        this.status = "recovered"
    }
}
```

### `@AfterRecover`

You can define a method with any name in the entity and mark it with `@AfterRecover`, and TypeORM will call it after the entity is recovered using repository/manager `recover`. Example:

```
@Entity()
export class Post {
    @AfterRecover()
    updateStatus() {
        this.status = "recovered"
    }
}
```

## What is a Subscriber?

The `@EventSubscriber()` decorator marks a class as an event subscriber that can listen to specific entity events or to any entity event. Events are fired by `QueryBuilder` and repository/manager methods. Example:

```
@EventSubscriber()
export class PostSubscriber implements EntitySubscriberInterface {
    /**
     * Indicates that this subscriber only listens to Post events.
     */
    listenTo() {
        return Post
    }

    /**
     * Called before post insertion.
     */
    beforeInsert(event: InsertEvent) {
        console.log(`BEFORE POST INSERTED: `, event.entity)
    }
}
```

You can implement any method from `EntitySubscriberInterface`.
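For a subscriber like the one above to fire, TypeORM has to load it: the `subscribers` property of your data source options must include it. A minimal sketch, assuming a Postgres connection — the connection parameters and import paths here are illustrative, not prescribed:

```typescript
import { DataSource } from "typeorm"
// Hypothetical paths: adjust to wherever your entity and subscriber live.
import { Post } from "./entity/Post"
import { PostSubscriber } from "./subscriber/PostSubscriber"

// Placeholder connection parameters; only the `subscribers`
// property is what registers the subscriber with TypeORM.
const dataSource = new DataSource({
    type: "postgres",
    host: "localhost",
    port: 5432,
    username: "test",
    password: "test",
    database: "test",
    entities: [Post],
    subscribers: [PostSubscriber],
})
```

Depending on your TypeORM version, `subscribers` may also accept glob string paths instead of imported classes.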
To listen to any entity, you just omit the `listenTo` method and use `any`:

```
@EventSubscriber()
export class PostSubscriber implements EntitySubscriberInterface {
    /**
     * Called after entity is loaded.
     */
    afterLoad(entity: any) {
        console.log(`AFTER ENTITY LOADED: `, entity)
    }

    /**
     * Called before query execution.
     */
    beforeQuery(event: BeforeQueryEvent) {
        console.log(`BEFORE QUERY: `, event.query)
    }

    /**
     * Called after query execution.
     */
    afterQuery(event: AfterQueryEvent) {
        console.log(`AFTER QUERY: `, event.query)
    }

    /**
     * Called before entity insertion.
     */
    beforeInsert(event: InsertEvent) {
        console.log(`BEFORE ENTITY INSERTED: `, event.entity)
    }

    /**
     * Called after entity insertion.
     */
    afterInsert(event: InsertEvent) {
        console.log(`AFTER ENTITY INSERTED: `, event.entity)
    }

    /**
     * Called before entity update.
     */
    beforeUpdate(event: UpdateEvent) {
        console.log(`BEFORE ENTITY UPDATED: `, event.entity)
    }

    /**
     * Called after entity update.
     */
    afterUpdate(event: UpdateEvent) {
        console.log(`AFTER ENTITY UPDATED: `, event.entity)
    }

    /**
     * Called before entity removal.
     */
    beforeRemove(event: RemoveEvent) {
        console.log(
            `BEFORE ENTITY WITH ID ${event.entityId} REMOVED: `,
            event.entity,
        )
    }

    /**
     * Called after entity removal.
     */
    afterRemove(event: RemoveEvent) {
        console.log(
            `AFTER ENTITY WITH ID ${event.entityId} REMOVED: `,
            event.entity,
        )
    }

    /**
     * Called before entity soft removal.
     */
    beforeSoftRemove(event: SoftRemoveEvent) {
        console.log(
            `BEFORE ENTITY WITH ID ${event.entityId} SOFT REMOVED: `,
            event.entity,
        )
    }

    /**
     * Called after entity soft removal.
     */
    afterSoftRemove(event: SoftRemoveEvent) {
        console.log(
            `AFTER ENTITY WITH ID ${event.entityId} SOFT REMOVED: `,
            event.entity,
        )
    }

    /**
     * Called before entity recovery.
     */
    beforeRecover(event: RecoverEvent) {
        console.log(
            `BEFORE ENTITY WITH ID ${event.entityId} RECOVERED: `,
            event.entity,
        )
    }

    /**
     * Called after entity recovery.
     */
    afterRecover(event: RecoverEvent) {
        console.log(
            `AFTER ENTITY WITH ID ${event.entityId} RECOVERED: `,
            event.entity,
        )
    }

    /**
     * Called before transaction start.
     */
    beforeTransactionStart(event: TransactionStartEvent) {
        console.log(`BEFORE TRANSACTION STARTED: `, event)
    }

    /**
     * Called after transaction start.
     */
    afterTransactionStart(event: TransactionStartEvent) {
        console.log(`AFTER TRANSACTION STARTED: `, event)
    }

    /**
     * Called before transaction commit.
     */
    beforeTransactionCommit(event: TransactionCommitEvent) {
        console.log(`BEFORE TRANSACTION COMMITTED: `, event)
    }

    /**
     * Called after transaction commit.
     */
    afterTransactionCommit(event: TransactionCommitEvent) {
        console.log(`AFTER TRANSACTION COMMITTED: `, event)
    }

    /**
     * Called before transaction rollback.
     */
    beforeTransactionRollback(event: TransactionRollbackEvent) {
        console.log(`BEFORE TRANSACTION ROLLBACK: `, event)
    }

    /**
     * Called after transaction rollback.
     */
    afterTransactionRollback(event: TransactionRollbackEvent) {
        console.log(`AFTER TRANSACTION ROLLBACK: `, event)
    }
}
```

Make sure your `subscribers` property is set in your [DataSourceOptions](https://typeorm.io/docs/data-source/data-source-options.md#common-data-source-options) so TypeORM loads your subscriber.

### Event Object

Excluding `listenTo`, all `EntitySubscriberInterface` methods are passed an event object that has the following base properties:

* `dataSource: DataSource` - DataSource used in the event.
* `queryRunner: QueryRunner` - QueryRunner used in the event transaction.
* `manager: EntityManager` - EntityManager used in the event transaction.

See each [Event's interface](https://github.com/typeorm/typeorm/tree/master/src/subscriber/event) for additional properties. Note that `event.entity` may not necessarily contain the primary key(s) when `Repository.update()` is used; only the values provided in the entity partial will be available.
To make primary keys available in subscribers, you can explicitly pass the primary key value(s) in the partial entity object literal, or use `Repository.save()`, which performs re-fetching.

```
await postRepository.update(post.id, { description: "Bacon ipsum dolor amet cow" })

// post.subscriber.ts
afterUpdate(event: UpdateEvent) {
    console.log(event.entity) // outputs { description: 'Bacon ipsum dolor amet cow' }
}
```

**Note:** All database operations in the subscribed event listeners should be performed using the event object's `queryRunner` or `manager` instance.

---

# Logging

## Enabling logging

You can enable logging of all queries and errors by simply setting `logging: true` in your data source options:

```
{
    name: "mysql",
    type: "mysql",
    host: "localhost",
    port: 3306,
    username: "test",
    password: "test",
    database: "test",
    ...
    logging: true
}
```

## Logging options

You can enable different types of logging in data source options:

```
{
    host: "localhost",
    ...
    logging: ["query", "error"]
}
```

If you want to enable logging of failed queries only, then add only `error`:

```
{
    host: "localhost",
    ...
    logging: ["error"]
}
```

There are other options you can use:

* `query` - logs all queries.
* `error` - logs all failed queries and errors.
* `schema` - logs the schema build process.
* `warn` - logs internal ORM warnings.
* `info` - logs internal ORM informative messages.
* `log` - logs internal ORM log messages.

You can specify as many options as needed. If you want to enable all logging you can simply specify `logging: "all"`:

```
{
    host: "localhost",
    ...
    logging: "all"
}
```

## Log long-running queries

If you have performance issues, you can log queries that take too long to execute by setting `maxQueryExecutionTime` in data source options:

```
{
    host: "localhost",
    ...
    maxQueryExecutionTime: 1000
}
```

This will log all queries that run for more than 1 second.

## Changing the default logger

TypeORM ships with five different types of logger:

* `advanced-console` - the default logger, which logs all messages to the console using color and SQL syntax highlighting.
* `simple-console` - a simple console logger, exactly the same as the advanced logger but without color highlighting. Use it if you have problems with, or simply don't like, colorized logs.
* `formatted-console` - almost the same as the advanced logger, but it formats SQL queries to be more readable (using [@sqltools/formatter](https://github.com/mtxr/vscode-sqltools)).
* `file` - writes all logs into `ormlogs.log` in the root folder of your project (near `package.json`).
* `debug` - uses the [debug package](https://github.com/visionmedia/debug); to turn on logging set the env variable `DEBUG=typeorm:*` (the logging option has no effect on this logger).

You can enable any of them in data source options:

```
{
    host: "localhost",
    ...
    logging: true,
    logger: "file"
}
```

## Using a custom logger

You can create your own logger class by implementing the `Logger` interface:

```
import { Logger } from "typeorm"

export class MyCustomLogger implements Logger {
    // implement all methods from the Logger interface
}
```

Or you can extend the `AbstractLogger` class:

```
import { AbstractLogger } from "typeorm"

export class MyCustomLogger extends AbstractLogger {
    /**
     * Write log to specific output.
     */
    protected writeLog(
        level: LogLevel,
        logMessage: LogMessage | LogMessage[],
        queryRunner?: QueryRunner,
    ) {
        const messages = this.prepareLogMessages(
            logMessage,
            { highlightSql: false },
            queryRunner,
        )

        for (let message of messages) {
            switch (message.type ?? level) {
                case "log":
                case "schema-build":
                case "migration":
                    console.log(message.message)
                    break

                case "info":
                case "query":
                    if (message.prefix) {
                        console.info(message.prefix, message.message)
                    } else {
                        console.info(message.message)
                    }
                    break

                case "warn":
                case "query-slow":
                    if (message.prefix) {
                        console.warn(message.prefix, message.message)
                    } else {
                        console.warn(message.message)
                    }
                    break

                case "error":
                case "query-error":
                    if (message.prefix) {
                        console.error(message.prefix, message.message)
                    } else {
                        console.error(message.message)
                    }
                    break
            }
        }
    }
}
```

And specify it in data source options:

```
import { DataSource } from "typeorm"
import { MyCustomLogger } from "./logger/MyCustomLogger"

const dataSource = new DataSource({
    name: "mysql",
    type: "mysql",
    host: "localhost",
    port: 3306,
    username: "test",
    password: "test",
    database: "test",
    logger: new MyCustomLogger(),
})
```

Logger methods can accept `QueryRunner` when it's available, which is helpful if you want to log additional data. Via the query runner you can also access data passed during persist/remove operations. For example:

```
// user sends request during entity save
postRepository.save(post, { data: { request: request } });

// in the logger you can access it this way:
logQuery(query: string, parameters?: any[], queryRunner?: QueryRunner) {
    const requestUrl = queryRunner && queryRunner.data["request"]
        ? "(" + queryRunner.data["request"].url + ") "
        : "";
    console.log(requestUrl + "executing query: " + query);
}
```

---

# Performance and optimization in TypeORM

## 1. Introduction to performance optimization

* In applications using an ORM like TypeORM, performance optimization is crucial to ensure the system runs smoothly, minimizes latency, and uses resources efficiently.
* Common challenges when using an ORM include unnecessary data retrieval, N+1 query problems, and not leveraging optimization tools such as indexing or caching.
* The main goals of optimization include:
    * Reducing the number of SQL queries sent to the database.
    * Optimizing complex queries to run faster.
    * Using caching and indexing to speed up data retrieval.
    * Ensuring efficient data retrieval using appropriate loading methods (Lazy vs. Eager loading).

## 2. Efficient use of Query Builder[​](#2-efficient-use-of-query-builder "Direct link to 2. Efficient use of Query Builder")

### 2.1. Avoiding the N+1 Query Problem[​](#21-avoiding-the-n1-query-problem "Direct link to 2.1. Avoiding the N+1 Query Problem")

* The N+1 Query Problem occurs when one query fetches N rows and then an additional query is executed for each row to load its related data - N+1 queries in total.
* To avoid this, you can use `leftJoinAndSelect` or `innerJoinAndSelect` to combine tables in a single query instead of executing multiple queries.

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .leftJoinAndSelect("user.posts", "post")
    .getMany()
```

* Here, `leftJoinAndSelect` retrieves all user posts in a single query rather than many small queries.

### 2.2. Use `getRawMany()` when only raw data is needed[​](#22-use-getrawmany-when-only-raw-data-is-needed "Direct link to 22-use-getrawmany-when-only-raw-data-is-needed")

* In cases where full entity objects aren't required, you can use `getRawMany()` to fetch raw data and avoid the overhead of entity transformation.

```
const rawPosts = await dataSource
    .getRepository(Post)
    .createQueryBuilder("post")
    .select("post.title, post.createdAt")
    .getRawMany()
```

### 2.3. Limit fields using `select`[​](#23-limit-fields-using-select "Direct link to 23-limit-fields-using-select")

* To optimize memory usage and reduce unnecessary data transfer, select only the required fields using `select`.

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .select(["user.name", "user.email"])
    .getMany()
```

## 3. Using indices[​](#3-using-indices "Direct link to 3. Using indices")

* Indexes speed up query performance by reducing the amount of data the database scans. TypeORM supports creating indexes on table columns using the `@Index` decorator.

### 3.1. Creating an index[​](#31-creating-an-index "Direct link to 3.1. Creating an index")

* Indexes can be created directly in entities using the `@Index` decorator.

```
import { Entity, Column, Index } from "typeorm"

@Entity()
@Index(["firstName", "lastName"]) // Composite index
export class User {
    @Column()
    firstName: string

    @Column()
    lastName: string
}
```

### 3.2. Unique index[​](#32-unique-index "Direct link to 3.2. Unique index")

* You can create unique indexes to ensure no duplicate values in a column.

```
@Index(["email"], { unique: true })
```

## 4. Lazy loading and Eager Loading[​](#4-lazy-loading-and-eager-loading "Direct link to 4. Lazy loading and Eager Loading")

TypeORM provides two main methods for loading relation data: Lazy Loading and Eager Loading. Each has a different impact on the performance of your application.

### 4.1. Lazy loading[​](#41-lazy-loading "Direct link to 4.1. Lazy loading")

* Lazy loading loads the relation data only when needed, reducing database load when related data isn't always necessary.

```
@Entity()
export class User {
    @OneToMany(() => Post, (post) => post.user, { lazy: true })
    posts: Promise<Post[]>
}
```

* When you need to retrieve the data, simply call:

```
const user = await userRepository.findOne(userId)
const posts = await user.posts
```

* Advantages:
    * Resource efficiency: only loads the necessary data when actually required, reducing query costs and memory usage.
    * Ideal for selective data usage: suitable for scenarios where not all related data is needed.
* Disadvantages:
    * Increased query complexity: each access to related data triggers an additional query to the database, which may increase latency if not managed properly.
    * Difficult to track: can lead to the N+1 query problem if used carelessly.

### 4.2. Eager Loading[​](#42-eager-loading "Direct link to 4.2. Eager Loading")

* Eager loading automatically retrieves all related data when the main query is executed. This can be convenient but may cause performance issues if there are many complex relations.

```
@Entity()
export class User {
    @OneToMany(() => Post, (post) => post.user, { eager: true })
    posts: Post[]
}
```

* In this case, posts will be loaded as soon as user data is retrieved.
* Advantages:
    * Automatically loads related data, making it easier to access relationships without additional queries.
    * Avoids the N+1 query problem: since all data is fetched in a single query, there's no risk of generating multiple unnecessary queries.
* Disadvantages:
    * Fetching all related data at once may result in large queries, even if not all data is needed.
    * Not suitable for scenarios where only a subset of related data is required, as it can lead to inefficient data usage.
* To explore more details and examples of how to configure and use lazy and eager relations, visit the official TypeORM documentation: [Eager and Lazy Relations](https://typeorm.io/docs/relations/eager-and-lazy-relations.md)

## 5. Advanced optimization[​](#5-advanced-optimization "Direct link to 5. Advanced optimization")

### 5.1. Using Query Hints[​](#51-using-query-hints "Direct link to 5.1. Using Query Hints")

* Query Hints are instructions sent along with SQL queries, helping the database choose a more efficient execution strategy.
* Different RDBMS systems support different kinds of hints, such as suggesting index usage or choosing the appropriate JOIN type.

```
await dataSource.query(`
    SELECT /*+ MAX_EXECUTION_TIME(1000) */ *
    FROM user
    WHERE email = 'example@example.com'
`)
```

* In the example above, `MAX_EXECUTION_TIME(1000)` instructs MySQL to stop the query if it takes more than 1 second.

### 5.2. Pagination[​](#52-pagination "Direct link to 5.2. Pagination")

* Pagination is a crucial technique for improving performance when retrieving large amounts of data. Instead of fetching all data at once, pagination divides it into smaller pages, reducing database load and optimizing memory usage.
* In TypeORM, you can use `limit` and `offset` for pagination.

```
const users = await userRepository
    .createQueryBuilder("user")
    .limit(10) // Number of records to fetch per page
    .offset(20) // Skip the first 20 records
    .getMany()
```

* Pagination prevents fetching large amounts of data at once, minimizing latency and optimizing memory usage. When implementing pagination, consider using pagination cursors for more efficient handling of dynamic data.

### 5.3. Caching[​](#53-caching "Direct link to 5.3. Caching")

* Caching temporarily stores query results for use in future requests without querying the database each time.
* TypeORM has built-in caching support, and you can customize how caching is used.

```
const users = await userRepository
    .createQueryBuilder("user")
    .cache(true) // Enable caching
    .getMany()
```

* Additionally, you can configure cache duration or use external caching tools like Redis for better efficiency.

```
const dataSource = new DataSource({
    type: "mysql",
    host: "localhost",
    port: 3306,
    username: "test",
    password: "test",
    database: "test",
    cache: {
        type: "redis",
        options: {
            host: "localhost",
            port: 6379
        }
    }
});
```

---

# Transactions

## Creating and using transactions[​](#creating-and-using-transactions "Direct link to Creating and using transactions")

Transactions are created using `DataSource` or `EntityManager`.
Examples:

```
await myDataSource.transaction(async (transactionalEntityManager) => {
    // execute queries using transactionalEntityManager
})
```

or

```
await myDataSource.manager.transaction(async (transactionalEntityManager) => {
    // execute queries using transactionalEntityManager
})
```

Everything you want to run in a transaction must be executed in a callback:

```
await myDataSource.manager.transaction(async (transactionalEntityManager) => {
    await transactionalEntityManager.save(users)
    await transactionalEntityManager.save(photos)
    // ...
})
```

The most important restriction when working in a transaction is to **ALWAYS** use the provided instance of entity manager - `transactionalEntityManager` in this example. DO NOT USE THE GLOBAL ENTITY MANAGER. All operations **MUST** be executed using the provided transactional entity manager.

### Specifying Isolation Levels[​](#specifying-isolation-levels "Direct link to Specifying Isolation Levels")

You can specify the isolation level for the transaction by supplying it as the first parameter:

```
await myDataSource.manager.transaction(
    "SERIALIZABLE",
    (transactionalEntityManager) => {},
)
```

Isolation level implementations are *not* agnostic across all databases. The following database drivers support the standard isolation levels (`READ UNCOMMITTED`, `READ COMMITTED`, `REPEATABLE READ`, `SERIALIZABLE`):

* MySQL
* Postgres
* SQL Server

**SQLite** defaults transactions to `SERIALIZABLE`, but if *shared cache mode* is enabled, a transaction can use the `READ UNCOMMITTED` isolation level.

**Oracle** only supports the `READ COMMITTED` and `SERIALIZABLE` isolation levels.

## Using `QueryRunner` to create and control state of single database connection[​](#using-queryrunner-to-create-and-control-state-of-single-database-connection "Direct link to using-queryrunner-to-create-and-control-state-of-single-database-connection")

`QueryRunner` provides a single database connection. Transactions are organized using query runners. A single transaction can only be established on a single query runner. You can manually create a query runner instance and use it to manually control transaction state. Example:

```
// create a new query runner
const queryRunner = dataSource.createQueryRunner()

// establish real database connection using our new query runner
await queryRunner.connect()

// now we can execute any queries on a query runner, for example:
await queryRunner.query("SELECT * FROM users")

// we can also access the entity manager that works with the connection created by the query runner:
const users = await queryRunner.manager.find(User)

// let's now open a new transaction:
await queryRunner.startTransaction()

try {
    // execute some operations on this transaction:
    await queryRunner.manager.save(user1)
    await queryRunner.manager.save(user2)
    await queryRunner.manager.save(photos)

    // commit transaction now:
    await queryRunner.commitTransaction()
} catch (err) {
    // since we have errors, let's rollback the changes we made
    await queryRunner.rollbackTransaction()
} finally {
    // you need to release a manually created query runner:
    await queryRunner.release()
}
```

There are 3 methods to control transactions in `QueryRunner`:

* `startTransaction` - starts a new transaction inside the query runner instance.
* `commitTransaction` - commits all changes made using the query runner instance.
* `rollbackTransaction` - rolls back all changes made using the query runner instance.

Learn more about [Query Runner](https://typeorm.io/docs/query-runner.md).

---

# Using CLI

## Installing CLI[​](#installing-cli "Direct link to Installing CLI")

### If entities files are in javascript[​](#if-entities-files-are-in-javascript "Direct link to If entities files are in javascript")

If you have a local typeorm version, make sure it matches the global version we are going to install. You can install typeorm globally with `npm i -g typeorm`.
You can also choose to use `npx typeorm` for each command if you prefer not having to install it.

### If entities files are in typescript[​](#if-entities-files-are-in-typescript "Direct link to If entities files are in typescript")

This CLI tool is written in javascript and runs on node. If your entity files are in typescript, you will need to transpile them to javascript before using the CLI. You may skip this section if you only use javascript.

You may set up ts-node in your project to ease the operation as follows:

Install ts-node:

```
npm install ts-node --save-dev
```

Add a typeorm command under the scripts section in package.json:

```
"scripts": {
    ...
    "typeorm": "typeorm-ts-node-commonjs"
}
```

For ESM projects add this instead:

```
"scripts": {
    ...
    "typeorm": "typeorm-ts-node-esm"
}
```

If you want to load more modules like [module-alias](https://github.com/ilearnio/module-alias), you can add additional `--require my-module-supporting-register` flags.

Then you may run the command like this:

```
npm run typeorm migration:run -- -d path-to-datasource-config
```

### How to read the documentation?[​](#how-to-read-the-documentation "Direct link to How to read the documentation?")

To reduce the verbosity of the documentation, the following sections use a globally installed typeorm CLI. Depending on how you installed the CLI, you may replace `typeorm` at the start of a command with either `npx typeorm` or `npm run typeorm`.

## Initialize a new TypeORM project[​](#initialize-a-new-typeorm-project "Direct link to Initialize a new TypeORM project")

You can create a new project with everything already set up:

```
typeorm init
```

It creates all files needed for a basic project with TypeORM:

* .gitignore
* package.json
* README.md
* tsconfig.json
* src/entity/User.ts
* src/index.ts

Then you can run `npm install` to install all dependencies. After that, you can run your application by running `npm start`.

All files are generated in the current directory. If you want to generate them in a different directory you can use `--name`:

```
typeorm init --name my-project
```

To specify a particular database to use, pass `--database`:

```
typeorm init --database mssql
```

To generate an ESM base project you can use `--module esm`:

```
typeorm init --name my-project --module esm
```

You can also generate a base project with Express:

```
typeorm init --name my-project --express
```

If you are using docker you can generate a `docker-compose.yml` file using:

```
typeorm init --docker
```

`typeorm init` is the easiest and fastest way to set up a TypeORM project.

## Create a new entity[​](#create-a-new-entity "Direct link to Create a new entity")

You can create a new entity using the CLI:

```
typeorm entity:create path-to-entity-dir/entity
```

Learn more about [entities](https://typeorm.io/docs/entity/entities.md).

## Create a new subscriber[​](#create-a-new-subscriber "Direct link to Create a new subscriber")

You can create a new subscriber using the CLI:

```
typeorm subscriber:create path-to-subscriber-dir/subscriber
```

Learn more about [Subscribers](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md).

## Manage migrations[​](#manage-migrations "Direct link to Manage migrations")

* `typeorm migration:create` - [create](https://typeorm.io/docs/migrations/creating.md) an empty migration
* `typeorm migration:generate` - [generate](https://typeorm.io/docs/migrations/generating.md) a migration by comparing entities with the actual database schema
* `typeorm migration:run` - [execute](https://typeorm.io/docs/migrations/executing.md) all pending migrations
* `typeorm migration:revert` - [revert](https://typeorm.io/docs/migrations/reverting.md) the last migration
* `typeorm migration:show` - [list](https://typeorm.io/docs/migrations/status.md) all migrations with their execution status

Learn more about [Migrations](https://typeorm.io/docs/migrations/why.md).
## Sync database schema[​](#sync-database-schema "Direct link to Sync database schema")

To synchronize a database schema use:

```
typeorm schema:sync
```

Be careful running this command in production - schema sync may cause data loss if used carelessly. Check which SQL queries it will run before running it in production.

## Log sync database schema queries without actually running them[​](#log-sync-database-schema-queries-without-actual-running-them "Direct link to Log sync database schema queries without actual running them")

To check which SQL queries `schema:sync` is going to run, use:

```
typeorm schema:log
```

## Drop database schema[​](#drop-database-schema "Direct link to Drop database schema")

To completely drop a database schema use:

```
typeorm schema:drop -- -d path-to-datasource-config
```

Be careful with this command in production since it completely removes data from your database.

## Run any SQL query[​](#run-any-sql-query "Direct link to Run any SQL query")

You can execute any SQL query directly in the database using:

```
typeorm query "SELECT * FROM USERS"
```

## Clear cache[​](#clear-cache "Direct link to Clear cache")

If you are using `QueryBuilder` caching, sometimes you may want to clear everything stored in the cache. You can do so using the following command:

```
typeorm cache:clear
```

## Check version[​](#check-version "Direct link to Check version")

You can check which typeorm version you have installed (both local and global) by running:

```
typeorm version
```

---

# DataSource

## What is a DataSource?[​](#what-is-a-datasource "Direct link to What is a DataSource?")

Your interaction with the database is only possible once you set up a `DataSource`. TypeORM's `DataSource` holds your database connection settings and establishes the initial database connection or connection pool, depending on the RDBMS you use.

To establish the initial connection/connection pool, you must call the `initialize` method of your `DataSource` instance.
Disconnection (closing all connections in the pool) occurs when the `destroy` method is called.

Generally, you call the `initialize` method of the `DataSource` instance on application bootstrap, and `destroy` it after you have finished working with the database. In practice, if you are building a backend for your site and your backend server always stays running, you never `destroy` the DataSource.

## Creating a new DataSource[​](#creating-a-new-datasource "Direct link to Creating a new DataSource")

To create a new `DataSource` instance, call its constructor with `new DataSource` and assign the result to a variable that you'll use across your application:

```
import { DataSource } from "typeorm"

const AppDataSource = new DataSource({
    type: "mysql",
    host: "localhost",
    port: 3306,
    username: "test",
    password: "test",
    database: "test",
})

try {
    await AppDataSource.initialize()
    console.log("Data Source has been initialized!")
} catch (error) {
    console.error("Error during Data Source initialization", error)
}
```

It's a good idea to make `AppDataSource` globally available by `export`-ing it, since you'll use this instance across your application.

`DataSource` accepts `DataSourceOptions`, and those options vary depending on the database `type` you use. For different database types, there are different options you can specify.

You can define as many data sources as you need in your application, for example:

```
import { DataSource } from "typeorm"

const MysqlDataSource = new DataSource({
    type: "mysql",
    host: "localhost",
    port: 3306,
    username: "test",
    password: "test",
    database: "test",
    entities: [
        // ....
    ],
})

const PostgresDataSource = new DataSource({
    type: "postgres",
    host: "localhost",
    port: 5432,
    username: "test",
    password: "test",
    database: "test",
    entities: [
        // ....
    ],
})
```

## How to use DataSource?[​](#how-to-use-datasource "Direct link to How to use DataSource?")

Once you set up your `DataSource`, you can use it anywhere in your app, for example:

```
import { AppDataSource } from "./app-data-source"
import { User } from "../entity/User"

export class UserController {
    @Get("/users")
    getAll() {
        return AppDataSource.manager.find(User)
    }
}
```

Using the `DataSource` instance you can execute database operations with your entities, particularly using the `.manager` and `.getRepository()` properties. For more information about them see the [Entity Manager](https://typeorm.io/docs/working-with-entity-manager/working-with-entity-manager.md) and [Repository](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md) documentation.

---

# DataSource API

* `options` - Options used to create this dataSource. Learn more about [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md).

```
const dataSourceOptions: DataSourceOptions = dataSource.options
```

* `isInitialized` - Indicates whether the DataSource was initialized and the initial connection / connection pool to the database was established.

```
const isInitialized: boolean = dataSource.isInitialized
```

* `driver` - Underlying database driver used in this dataSource.

```
const driver: Driver = dataSource.driver
```

* `manager` - `EntityManager` used to work with entities. Learn more about [Entity Manager](https://typeorm.io/docs/working-with-entity-manager/working-with-entity-manager.md) and [Repository](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md).

```
const manager: EntityManager = dataSource.manager
// you can call manager methods, for example find:
const users = await manager.find()
```

* `mongoManager` - `MongoEntityManager` used to work with entities for a mongodb data source. For more information about MongoEntityManager see the [MongoDB](https://typeorm.io/docs/drivers/mongodb.md) documentation.
```
const manager: MongoEntityManager = dataSource.mongoManager
// you can call manager or mongodb-manager specific methods, for example find:
const users = await manager.find()
```

* `initialize` - Initializes the data source and opens a connection pool to the database.

```
await dataSource.initialize()
```

* `destroy` - Destroys the DataSource and closes all database connections. Usually, you call this method when your application is shutting down.

```
await dataSource.destroy()
```

* `synchronize` - Synchronizes the database schema. When `synchronize: true` is set in data source options it calls this method. Usually, you call this method when your application is starting.

```
await dataSource.synchronize()
```

* `dropDatabase` - Drops the database and all its data. Be careful with this method in production since it will erase all your database tables and their data. Can be used only after a connection to the database is established.

```
await dataSource.dropDatabase()
```

* `runMigrations` - Runs all pending migrations.

```
await dataSource.runMigrations()
```

* `undoLastMigration` - Reverts the last executed migration.

```
await dataSource.undoLastMigration()
```

* `hasMetadata` - Checks if metadata for a given entity is registered.

```
if (dataSource.hasMetadata(User))
    const userMetadata = dataSource.getMetadata(User)
```

* `getMetadata` - Gets the `EntityMetadata` of the given entity. You can also specify a table name, and if entity metadata with such a table name is found it will be returned.

```
const userMetadata = dataSource.getMetadata(User)
// now you can get any information about the User entity
```

* `getRepository` - Gets the `Repository` of the given entity. You can also specify a table name, and if a repository for the given table is found it will be returned. Learn more about [Repositories](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md).

```
const repository = dataSource.getRepository(User)
// now you can call repository methods, for example find:
const users = await repository.find()
```

* `getTreeRepository` - Gets the `TreeRepository` of the given entity. You can also specify a table name, and if a repository for the given table is found it will be returned. Learn more about [Repositories](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md).

```
const repository = dataSource.getTreeRepository(Category)
// now you can call tree repository methods, for example findTrees:
const categories = await repository.findTrees()
```

* `getMongoRepository` - Gets the `MongoRepository` of the given entity. This repository is used for entities in a MongoDB dataSource. Learn more about [MongoDB support](https://typeorm.io/docs/drivers/mongodb.md).

```
const repository = dataSource.getMongoRepository(User)
// now you can call mongodb-specific repository methods, for example createEntityCursor:
const categoryCursor = repository.createEntityCursor()
const category1 = await categoryCursor.next()
const category2 = await categoryCursor.next()
```

* `transaction` - Provides a single transaction in which multiple database requests will be executed. Learn more about [Transactions](https://typeorm.io/docs/advanced-topics/transactions.md).

```
await dataSource.transaction(async (manager) => {
    // NOTE: you must perform all database operations using the given manager instance
    // it's a special instance of EntityManager working with this transaction
    // and don't forget to await things here
})
```

* `query` - Executes a raw SQL query.

```
const rawData = await dataSource.query(`SELECT * FROM USERS`)

// You can also use parameters to avoid SQL injection
// The syntax differs between the drivers

// aurora-mysql, better-sqlite3, capacitor, cordova,
// expo, mariadb, mysql, nativescript, react-native,
// sap, sqlite, sqljs
const rawData = await dataSource.query(
    "SELECT * FROM USERS WHERE name = ? and age = ?",
    ["John", 24],
)

// aurora-postgres, cockroachdb, postgres
const rawData = await dataSource.query(
    "SELECT * FROM USERS WHERE name = $1 and age = $2",
    ["John", 24],
)

// oracle
const rawData = await dataSource.query(
    "SELECT * FROM USERS WHERE name = :1 and age = :2",
    ["John", 24],
)

// spanner
const rawData = await dataSource.query(
    "SELECT * FROM USERS WHERE name = @param0 and age = @param1",
    ["John", 24],
)

// mssql
const rawData = await dataSource.query(
    "SELECT * FROM USERS WHERE name = @0 and age = @1",
    ["John", 24],
)
```

* `sql` - Executes a raw SQL query using template literals.

```
const rawData = await dataSource.sql`SELECT * FROM USERS WHERE name = ${"John"} and age = ${24}`
```

Learn more about using the [SQL Tag syntax](https://typeorm.io/docs/guides/sql-tag.md).

* `createQueryBuilder` - Creates a query builder, which can be used to build queries. Learn more about [QueryBuilder](https://typeorm.io/docs/query-builder/select-query-builder.md).

```
const users = await dataSource
    .createQueryBuilder()
    .select()
    .from(User, "user")
    .where("user.name = :name", { name: "John" })
    .getMany()
```

* `createQueryRunner` - Creates a query runner used to manage and work with a single real database connection. Learn more about [QueryRunner](https://typeorm.io/docs/query-runner.md).

```
const queryRunner = dataSource.createQueryRunner()

// you can use its methods only after you call connect,
// which performs a real database connection
await queryRunner.connect()

// .. now you can work with the query runner and call its methods

// very important - don't forget to release the query runner once you have finished working with it
await queryRunner.release()
```

---

# Data Source Options

## What is DataSourceOptions?[​](#what-is-datasourceoptions "Direct link to What is DataSourceOptions?")

`DataSourceOptions` is the data source configuration you pass when you create a new `DataSource` instance. Different RDBMS-es have their own specific options.

## Common data source options[​](#common-data-source-options "Direct link to Common data source options")

* `type` - RDBMS type. You must specify what database engine you use. Possible values are: "mysql", "postgres", "cockroachdb", "sap", "spanner", "mariadb", "sqlite", "cordova", "react-native", "nativescript", "sqljs", "oracle", "mssql", "mongodb", "aurora-mysql", "aurora-postgres", "expo", "better-sqlite3", "capacitor". This option is **required**.

* `extra` - Extra options to be passed to the underlying driver. Use it if you want to pass extra settings to the underlying database driver.

* `entities` - Entities, or Entity Schemas, to be loaded and used for this data source. It accepts entity classes, entity schema classes, and directory paths from which to load. Directories support glob patterns. Example: `entities: [Post, Category, "entity/*.js", "modules/**/entity/*.js"]`. Learn more about [Entities](https://typeorm.io/docs/entity/entities.md). Learn more about [Entity Schemas](https://typeorm.io/docs/entity/separating-entity-definition.md).

* `subscribers` - Subscribers to be loaded and used for this data source. It accepts both entity classes and directories from which to load. Directories support glob patterns. Example: `subscribers: [PostSubscriber, AppSubscriber, "subscriber/*.js", "modules/**/subscriber/*.js"]`. Learn more about [Subscribers](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md).

* `logging` - Indicates if logging is enabled or not.
If set to `true` then query and error logging will be enabled. You can also specify different types of logging to be enabled, for example `["query", "error", "schema"]`. Learn more about [Logging](https://typeorm.io/docs/advanced-topics/logging.md). * `logger` - Logger to be used for logging purposes. Possible values are "advanced-console", "formatted-console", "simple-console" and "file". Default is "advanced-console". You can also specify a logger class that implements `Logger` interface. Learn more about [Logging](https://typeorm.io/docs/advanced-topics/logging.md). * `maxQueryExecutionTime` - If query execution time exceed this given max execution time (in milliseconds) then logger will log this query. * `poolSize` - Configure maximum number of active connections is the pool. * `namingStrategy` - Naming strategy to be used to name tables and columns in the database. * `entityPrefix` - Prefixes with the given string all tables (or collections) on this data source. * `entitySkipConstructor` - Indicates if TypeORM should skip constructors when deserializing entities from the database. Note that when you do not call the constructor both private properties and default properties will not operate as expected. * `dropSchema` - Drops the schema each time data source is being initialized. Be careful with this option and don't use this in production - otherwise you'll lose all production data. This option is useful during debug and development. * `synchronize` - Indicates if database schema should be auto created on every application launch. Be careful with this option and don't use this in production - otherwise you can lose production data. This option is useful during debug and development. As an alternative to it, you can use CLI and run schema:sync command. Note that for MongoDB database it does not create schema, because MongoDB is schemaless. Instead, it syncs just by creating indices. 
* `migrations` - [Migrations](https://typeorm.io/docs/migrations/why.md) to be loaded and used for this data source. * `migrationsRun` - Indicates if [migrations](https://typeorm.io/docs/migrations/why.md) should be auto-run on every application launch. * `migrationsTableName` - Name of the table in the database which is going to contain information about executed [migrations](https://typeorm.io/docs/migrations/why.md). * `migrationsTransactionMode` - Controls the transaction mode when running [migrations](https://typeorm.io/docs/migrations/why.md). * `metadataTableName` - Name of the table in the database which is going to contain information about table metadata. By default, this table is called "typeorm\_metadata". * `cache` - Enables entity result caching. You can also configure the cache type and other cache options here. Read more about caching [here](https://typeorm.io/docs/query-builder/caching.md). * `isolateWhereStatements` - Enables where statement isolation, wrapping each where clause in brackets automatically. E.g. `.where("user.firstName = :search OR user.lastName = :search")` becomes `WHERE (user.firstName = ? OR user.lastName = ?)` instead of `WHERE user.firstName = ? OR user.lastName = ?`. * `invalidWhereValuesBehavior` - Controls how null and undefined values are handled in where conditions across all TypeORM operations (find operations, query builders, repository methods). * `null` behavior options: * `'ignore'` (default) - skips null properties * `'sql-null'` - transforms null to SQL NULL * `'throw'` - throws an error * `undefined` behavior options: * `'ignore'` (default) - skips undefined properties * `'throw'` - throws an error Example: `invalidWhereValuesBehavior: { null: 'sql-null', undefined: 'throw' }`. Learn more about [Null and Undefined Handling](https://typeorm.io/docs/data-source/null-and-undefined-handling.md).
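As a sketch, several of the logging and pooling options described above can be combined in a single options object. The host, credentials, and entity path below are placeholder values; in a real app you would pass this object to `new DataSource(options)` from the `typeorm` package:

```typescript
// Sketch of a data source options object using options from the list above.
// Connection values are hypothetical placeholders.
const options = {
    type: "postgres", // required: the RDBMS type
    host: "localhost",
    port: 5432,
    username: "app",
    password: "secret",
    database: "app",
    entities: ["modules/**/entity/*.js"], // glob patterns are supported
    logging: ["query", "error"], // enable only selected log types
    maxQueryExecutionTime: 1000, // log queries slower than 1000 ms
    poolSize: 10, // max active connections in the pool
    entityPrefix: "app_", // prefix every table (or collection) name
    synchronize: false, // avoid auto schema sync outside development
}
```

Note that `synchronize` is left off here deliberately, matching the warning above about using it in production.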
## Data Source Options example[​](#data-source-options-example "Direct link to Data Source Options example") Here is a small example of data source options for mysql: ``` { type: "mysql", host: "localhost", port: 3306, username: "test", password: "test", database: "test", logging: true, synchronize: true, entities: [ "entity/*.js" ], subscribers: [ "subscriber/*.js" ], entitySchemas: [ "schema/*.json" ], migrations: [ "migration/*.js" ] } ``` --- # Multiple data sources, databases, schemas and replication setup ## Using multiple data sources[​](#using-multiple-data-sources "Direct link to Using multiple data sources") To use multiple data sources connected to different databases, simply create multiple DataSource instances: ``` import { DataSource } from "typeorm" const db1DataSource = new DataSource({ type: "mysql", host: "localhost", port: 3306, username: "root", password: "admin", database: "db1", entities: [__dirname + "/entity/*{.js,.ts}"], synchronize: true, }) const db2DataSource = new DataSource({ type: "mysql", host: "localhost", port: 3306, username: "root", password: "admin", database: "db2", entities: [__dirname + "/entity/*{.js,.ts}"], synchronize: true, }) ``` ## Using multiple databases within a single data source[​](#using-multiple-databases-within-a-single-data-source "Direct link to Using multiple databases within a single data source") To use multiple databases in a single data source, you can specify the database name per-entity: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity({ database: "secondDB" }) export class User { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string } ``` ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity({ database: "thirdDB" }) export class Photo { @PrimaryGeneratedColumn() id: number @Column() url: string } ``` The `User` entity will be created inside the `secondDB` database and the `Photo` entity inside the `thirdDB` database.
All other entities will be created in the default database defined in the data source options. If you want to select data from a different database, you only need to provide an entity: ``` const users = await dataSource .createQueryBuilder() .select() .from(User, "user") .addFrom(Photo, "photo") .andWhere("photo.userId = user.id") .getMany() // userId is not a foreign key since it's a cross-database request ``` This code will produce the following SQL query (depending on the database type): ``` SELECT * FROM "secondDB"."user" "user", "thirdDB"."photo" "photo" WHERE "photo"."userId" = "user"."id" ``` You can also specify a table path instead of the entity: ``` const users = await dataSource .createQueryBuilder() .select() .from("secondDB.user", "user") .addFrom("thirdDB.photo", "photo") .andWhere("photo.userId = user.id") .getMany() // userId is not a foreign key since it's a cross-database request ``` This feature is supported only in mysql and mssql databases. ## Using multiple schemas within a single data source[​](#using-multiple-schemas-within-a-single-data-source "Direct link to Using multiple schemas within a single data source") To use multiple schemas in your applications, just set `schema` on each entity: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity({ schema: "secondSchema" }) export class User { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string } ``` ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity({ schema: "thirdSchema" }) export class Photo { @PrimaryGeneratedColumn() id: number @Column() url: string } ``` The `User` entity will be created inside the `secondSchema` schema and the `Photo` entity inside the `thirdSchema` schema. All other entities will be created in the default schema defined in the data source options.
If you want to select data from a different schema, you only need to provide an entity: ``` const users = await dataSource .createQueryBuilder() .select() .from(User, "user") .addFrom(Photo, "photo") .andWhere("photo.userId = user.id") .getMany() // userId is not a foreign key since it's a cross-schema request ``` This code will produce the following SQL query (depending on the database type): ``` SELECT * FROM "secondSchema"."user" "user", "thirdSchema"."photo" "photo" WHERE "photo"."userId" = "user"."id" ``` You can also specify a table path instead of an entity: ``` const users = await dataSource .createQueryBuilder() .select() .from("secondSchema.user", "user") // in mssql you can even specify a database: secondDB.secondSchema.user .addFrom("thirdSchema.photo", "photo") // in mssql you can even specify a database: thirdDB.thirdSchema.photo .andWhere("photo.userId = user.id") .getMany() ``` This feature is supported only in postgres and mssql databases. In mssql you can also combine schemas and databases, for example: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity({ database: "secondDB", schema: "public" }) export class User { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string } ``` ## Replication[​](#replication "Direct link to Replication") You can set up read/write replication using TypeORM. Example of replication options: ``` const datasource = new DataSource({ type: "mysql", logging: true, replication: { master: { host: "server1", port: 3306, username: "test", password: "test", database: "test", }, slaves: [ { host: "server2", port: 3306, username: "test", password: "test", database: "test", }, { host: "server3", port: 3306, username: "test", password: "test", database: "test", }, ], }, }) ``` With replication slaves defined, TypeORM will start sending all possible queries to slaves by default.
* all queries performed by the `find` methods or `SelectQueryBuilder` will use a random `slave` instance * all write queries performed by `update`, `create`, `InsertQueryBuilder`, `UpdateQueryBuilder`, etc. will use the `master` instance * all raw queries performed by calling `.query()` will use the `master` instance * all schema update operations are performed using the `master` instance ### Explicitly selecting query destinations[​](#explicitly-selecting-query-destinations "Direct link to Explicitly selecting query destinations") By default, TypeORM will send all read queries to a random read slave, and all writes to the master. This means that when you first add the `replication` settings to your configuration, any existing read queries that don't explicitly specify a replication mode will start going to a slave. This is good for scalability, but if some of those queries *must* return up-to-date data, then you need to explicitly pass a replication mode when you create a query runner.
If you want to explicitly use the `master` for read queries, pass an explicit `ReplicationMode` when creating your `QueryRunner`: ``` const masterQueryRunner = dataSource.createQueryRunner("master") try { const postsFromMaster = await dataSource // pass the QueryRunner as the optional third argument of createQueryBuilder... .createQueryBuilder(Post, "post", masterQueryRunner) // ...or call .setQueryRunner(masterQueryRunner), which sets or overrides the query builder's QueryRunner .getMany() } finally { await masterQueryRunner.release() } ``` If you want to use a slave in raw queries, pass `"slave"` as the replication mode when creating a query runner: ``` const slaveQueryRunner = dataSource.createQueryRunner("slave") try { const userFromSlave = await slaveQueryRunner.query( "SELECT * FROM users WHERE id = $1", [userId], ) } finally { await slaveQueryRunner.release() } ``` **Note**: Manually created `QueryRunner` instances must be explicitly released on their own. If you don't release your query runners, they will keep a connection checked out of the pool and prevent other queries from using it.
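Because every manually created `QueryRunner` must be released, it can help to centralize the try/finally pattern. The helper below is a sketch, not a TypeORM API: it accepts anything with an async `release()` method, so it works with runners obtained from `dataSource.createQueryRunner("master")` or `("slave")`:

```typescript
// Sketch of a helper (not part of TypeORM) that guarantees release() runs
// even when the work function throws, returning the connection to the pool.
interface Releasable {
    release(): Promise<void>
}

async function withQueryRunner<R extends Releasable, T>(
    runner: R,
    work: (runner: R) => Promise<T>,
): Promise<T> {
    try {
        return await work(runner)
    } finally {
        // always check the connection back into the pool
        await runner.release()
    }
}
```

Usage might look like `await withQueryRunner(dataSource.createQueryRunner("slave"), (qr) => qr.query("SELECT * FROM users WHERE id = $1", [userId]))`.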
### Adjusting the default destination for reads[​](#adjusting-the-default-destination-for-reads "Direct link to Adjusting the default destination for reads") If you don't want all reads to go to a `slave` instance by default, you can change the default read query destination by passing `defaultMode: "master"` in your replication options: ``` const datasource = new DataSource({ type: "mysql", logging: true, replication: { // set the default destination for read queries as the master instance defaultMode: "master", master: { host: "server1", port: 3306, username: "test", password: "test", database: "test", }, slaves: [ { host: "server2", port: 3306, username: "test", password: "test", database: "test", }, ], }, }) ``` With this mode, no queries will go to the read slaves by default, and you'll have to opt-in to sending queries to read slaves with explicit `.createQueryRunner("slave")` calls. If you're adding replication options to an existing app for the first time, this is a good option for ensuring no behavior changes right away, and instead you can slowly adopt read replicas on a query runner by query runner basis. ### Supported drivers[​](#supported-drivers "Direct link to Supported drivers") Replication is supported by the MySQL, PostgreSQL, SQL Server, Cockroach, Oracle, and Spanner connection drivers. MySQL replication supports extra configuration options: ``` { replication: { master: { host: "server1", port: 3306, username: "test", password: "test", database: "test" }, slaves: [{ host: "server2", port: 3306, username: "test", password: "test", database: "test" }, { host: "server3", port: 3306, username: "test", password: "test", database: "test" }], /** * If true, PoolCluster will attempt to reconnect when connection fails. (Default: true) */ canRetry: true, /** * If connection fails, node's errorCount increases. * When errorCount is greater than removeNodeErrorCount, remove a node in the PoolCluster. 
(Default: 5) */ removeNodeErrorCount: 5, /** * If connection fails, specifies the number of milliseconds before another connection attempt will be made. * If set to 0, then the node will be removed instead and never re-used. (Default: 0) */ restoreNodeTimeout: 0, /** * Determines how slaves are selected: * RR: Select one alternately (Round-Robin). * RANDOM: Select the node by random function. * ORDER: Select the first node available unconditionally. */ selector: "RR" } } ``` --- # Handling null and undefined values in where conditions In 'WHERE' conditions, the values `null` and `undefined` are not strictly valid values in TypeORM. Passing a known `null` value is disallowed by TypeScript (when you've enabled `strictNullChecks` in tsconfig.json) at compile time, but the default behavior is for `null` values encountered at runtime to be ignored. Similarly, `undefined` values are allowed by TypeScript and ignored at runtime. The acceptance of `null` and `undefined` values can sometimes cause unexpected results and requires caution. This is especially a concern when values are passed from user input without adequate validation. For example, calling `Repository.findOneBy({ id: undefined })` returns the first row from the table, and `Repository.findBy({ userId: null })` is unfiltered and returns all rows. The way in which `null` and `undefined` values are handled can be customized through the `invalidWhereValuesBehavior` configuration option in your data source options. This applies to all operations that support 'WHERE' conditions, including find operations, query builders, and repository methods. note The current behavior will be changing in future versions of TypeORM; we recommend setting both the `null` and `undefined` behaviors to `'throw'` to prepare for these changes. ## Default Behavior[​](#default-behavior "Direct link to Default Behavior") By default, TypeORM skips both `null` and `undefined` values in where conditions.
This means that if you include a property with a `null` or `undefined` value in your where clause, it will be ignored: ``` // Both queries will return all posts, ignoring the text property const posts1 = await repository.find({ where: { text: null, }, }) const posts2 = await repository.find({ where: { text: undefined, }, }) ``` The correct way to match null values in where conditions is to use the `IsNull` operator (for details see [Find Options](https://typeorm.io/docs/working-with-entity-manager/find-options.md)): ``` const posts = await repository.find({ where: { text: IsNull(), }, }) ``` ## Configuration[​](#configuration "Direct link to Configuration") You can customize how null and undefined values are handled using the `invalidWhereValuesBehavior` option in your connection configuration: ``` const dataSource = new DataSource({ // ... other options invalidWhereValuesBehavior: { null: "ignore" | "sql-null" | "throw", undefined: "ignore" | "throw", }, }) ``` ### Null Behavior Options[​](#null-behavior-options "Direct link to Null Behavior Options") The `null` behavior can be set to one of three values: #### `'ignore'` (default)[​](#ignore-default "Direct link to ignore-default") JavaScript `null` values in where conditions are ignored and the property is skipped: ``` const dataSource = new DataSource({ // ... other options invalidWhereValuesBehavior: { null: "ignore", }, }) // This will return all posts, ignoring the text property const posts = await repository.find({ where: { text: null, }, }) ``` #### `'sql-null'`[​](#sql-null "Direct link to sql-null") JavaScript `null` values are transformed into SQL `NULL` conditions: ``` const dataSource = new DataSource({ // ... 
other options invalidWhereValuesBehavior: { null: "sql-null", }, }) // This will only return posts where the text column is NULL in the database const posts = await repository.find({ where: { text: null, }, }) ``` #### `'throw'`[​](#throw "Direct link to throw") JavaScript `null` values cause a TypeORMError to be thrown: ``` const dataSource = new DataSource({ // ... other options invalidWhereValuesBehavior: { null: "throw", }, }) // This will throw an error const posts = await repository.find({ where: { text: null, }, }) // Error: Null value encountered in property 'text' of a where condition. // To match with SQL NULL, the IsNull() operator must be used. // Set 'invalidWhereValuesBehavior.null' to 'ignore' or 'sql-null' in connection options to skip or handle null values. ``` ### Undefined Behavior Options[​](#undefined-behavior-options "Direct link to Undefined Behavior Options") The `undefined` behavior can be set to one of two values: #### `'ignore'` (default)[​](#ignore-default-1 "Direct link to ignore-default-1") JavaScript `undefined` values in where conditions are ignored and the property is skipped: ``` const dataSource = new DataSource({ // ... other options invalidWhereValuesBehavior: { undefined: "ignore", }, }) // This will return all posts, ignoring the text property const posts = await repository.find({ where: { text: undefined, }, }) ``` #### `'throw'`[​](#throw-1 "Direct link to throw-1") JavaScript `undefined` values cause a TypeORMError to be thrown: ``` const dataSource = new DataSource({ // ... other options invalidWhereValuesBehavior: { undefined: "throw", }, }) // This will throw an error const posts = await repository.find({ where: { text: undefined, }, }) // Error: Undefined value encountered in property 'text' of a where condition. // Set 'invalidWhereValuesBehavior.undefined' to 'ignore' in connection options to skip properties with undefined values. 
``` Note that this only applies to explicitly set `undefined` values, not omitted properties. ## Using Both Options Together[​](#using-both-options-together "Direct link to Using Both Options Together") You can configure both behaviors independently for comprehensive control: ``` const dataSource = new DataSource({ // ... other options invalidWhereValuesBehavior: { null: "sql-null", undefined: "throw", }, }) ``` This configuration will: 1. Transform JavaScript `null` values to SQL `NULL` in where conditions 2. Throw an error if any `undefined` values are encountered 3. Still ignore properties that are not provided in the where clause This combination is useful when you want to: * Be explicit about searching for NULL values in the database * Catch potential programming errors where undefined values might slip into your queries ## Works with all where operations[​](#works-with-all-where-operations "Direct link to Works with all where operations") The `invalidWhereValuesBehavior` configuration applies to **all TypeORM operations** that support where conditions, not just repository find methods: ### Query Builders[​](#query-builders "Direct link to Query Builders") ``` // UpdateQueryBuilder await dataSource .createQueryBuilder() .update(Post) .set({ title: "Updated" }) .where({ text: null }) // Respects invalidWhereValuesBehavior .execute() // DeleteQueryBuilder await dataSource .createQueryBuilder() .delete() .from(Post) .where({ text: null }) // Respects invalidWhereValuesBehavior .execute() // SoftDeleteQueryBuilder await dataSource .createQueryBuilder() .softDelete() .from(Post) .where({ text: null }) // Respects invalidWhereValuesBehavior .execute() ``` ### Repository Methods[​](#repository-methods "Direct link to Repository Methods") ``` // Repository.update() await repository.update({ text: null }, { title: "Updated" }) // Respects invalidWhereValuesBehavior // Repository.delete() await repository.delete({ text: null }) // Respects invalidWhereValuesBehavior // 
EntityManager.update() await manager.update(Post, { text: null }, { title: "Updated" }) // Respects invalidWhereValuesBehavior // EntityManager.delete() await manager.delete(Post, { text: null }) // Respects invalidWhereValuesBehavior // EntityManager.softDelete() await manager.softDelete(Post, { text: null }) // Respects invalidWhereValuesBehavior ``` All these operations will consistently apply your configured `invalidWhereValuesBehavior` settings. --- # Google Spanner ## Installation[​](#installation "Direct link to Installation") ``` npm install @google-cloud/spanner ``` ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") See [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md) for the common data source options. Provide authentication credentials to your application code by setting the environment variable `GOOGLE_APPLICATION_CREDENTIALS`: ``` # Linux/macOS export GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH" # Windows set GOOGLE_APPLICATION_CREDENTIALS=KEY_PATH # Replace KEY_PATH with the path of the JSON file that contains your service account key. ``` To use Spanner with the emulator, you should set the `SPANNER_EMULATOR_HOST` environment variable: ``` # Linux/macOS export SPANNER_EMULATOR_HOST=localhost:9010 # Windows set SPANNER_EMULATOR_HOST=localhost:9010 ``` ## Column Types[​](#column-types "Direct link to Column Types") `bool`, `int64`, `float64`, `numeric`, `string`, `json`, `bytes`, `date`, `timestamp`, `array` --- # Microsoft SQLServer ## Installation[​](#installation "Direct link to Installation") ``` npm install mssql ``` ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") See [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md) for the common data source options. Based on the [tedious](https://tediousjs.github.io/node-mssql/) MSSQL implementation.
See [SqlServerConnectionOptions.ts](https://github.com/typeorm/typeorm/tree/master/src/driver/sqlserver/SqlServerConnectionOptions.ts) for details on exposed attributes. * `url` - Connection url where the connection is performed. Please note that other data source options will override parameters set from the url. * `host` - Database host. * `port` - Database host port. Default mssql port is `1433`. * `username` - Database username. * `password` - Database password. * `database` - Database name. * `schema` - Schema name. Default is "dbo". * `domain` - Once you set domain, the driver will connect to SQL Server using domain login. * `connectionTimeout` - Connection timeout in ms (default: `15000`). * `requestTimeout` - Request timeout in ms (default: `15000`). NOTE: the msnodesqlv8 driver doesn't support timeouts < 1 second. * `stream` - Stream record sets/rows instead of returning them all at once as an argument of callback (default: `false`). You can also enable streaming for each request independently (`request.stream = true`). Always set to `true` if you plan to work with a large number of rows. * `pool.max` - The maximum number of connections there can be in the pool (default: `10`). * `pool.min` - The minimum number of connections there can be in the pool (default: `0`). * `pool.maxWaitingClients` - The maximum number of queued requests allowed; additional acquire calls will be called back with an error in a future cycle of the event loop. * `pool.acquireTimeoutMillis` - The maximum number of milliseconds an `acquire` call will wait for a resource before timing out (default: no limit); if supplied, it should be a non-zero positive integer. * `pool.fifo` - If true, the oldest resources will be the first to be allocated. If false, the most recently released resources will be the first to be allocated. This, in effect, turns the pool's behaviour from a queue into a stack. Boolean (default: `true`).
* `pool.priorityRange` - int between 1 and x - if set, borrowers can specify their relative priority in the queue if no resources are available. see example. (default `1`). * `pool.evictionRunIntervalMillis` - How often to run eviction checks. Default: `0` (does not run). * `pool.numTestsPerRun` - Number of resources to check each eviction run. Default: `3`. * `pool.softIdleTimeoutMillis` - amount of time an object may sit idle in the pool before it is eligible for eviction by the idle object evictor (if any), with the extra condition that at least "min idle" object instances remain in the pool. Default `-1` (nothing can get evicted). * `pool.idleTimeoutMillis` - the minimum amount of time that an object may sit idle in the pool before it is eligible for eviction due to idle time. Supersedes `softIdleTimeoutMillis`. Default: `30000`. * `pool.errorHandler` - A function that gets called when the underlying pool emits `'error'` event. Takes a single parameter (error instance) and defaults to logging with `warn` level. * `options.fallbackToDefaultDb` - By default, if the database requested by `options.database` cannot be accessed, the connection will fail with an error. However, if `options.fallbackToDefaultDb` is set to `true`, then the user's default database will be used instead (Default: `false`). * `options.instanceName` - The instance name to connect to. The SQL Server Browser service must be running on the database server, and UDP port 1434 on the database server must be reachable. Mutually exclusive with `port`. (no default). * `options.enableAnsiNullDefault` - If true, `SET ANSI_NULL_DFLT_ON ON` will be set in the initial SQL. This means new columns will be nullable by default. See the [T-SQL documentation](https://msdn.microsoft.com/en-us/library/ms187375.aspx) for more details. (Default: `true`). * `options.cancelTimeout` - The number of milliseconds before the cancel (abort) of a request is considered failed (default: `5000`). 
* `options.packetSize` - The size of TDS packets (subject to negotiation with the server). Should be a power of 2. (default: `4096`). * `options.useUTC` - A boolean determining whether to pass time values in UTC or local time. (default: `false`). * `options.abortTransactionOnError` - A boolean determining whether to roll back a transaction automatically if any error is encountered during the given transaction's execution. This sets the value for `SET XACT_ABORT` during the initial SQL phase of a connection ([documentation](http://msdn.microsoft.com/en-us/library/ms188792.aspx)). * `options.localAddress` - A string indicating which network interface (ip address) to use when connecting to SQL Server. * `options.useColumnNames` - A boolean determining whether to return rows as arrays or key-value collections. (default: `false`). * `options.camelCaseColumns` - A boolean, controlling whether the column names returned will have the first letter converted to lower case (`true`) or not. This value is ignored if you provide a `columnNameReplacer`. (default: `false`). * `options.isolationLevel` - The default isolation level that transactions will be run with. The isolation levels are available from `require('tedious').ISOLATION_LEVEL`. * `READ_UNCOMMITTED` * `READ_COMMITTED` * `REPEATABLE_READ` * `SERIALIZABLE` * `SNAPSHOT` (default: `READ_COMMITTED`) * `options.connectionIsolationLevel` - The default isolation level for new connections. All out-of-transaction queries are executed with this setting. The isolation levels are available from `require('tedious').ISOLATION_LEVEL`. * `READ_UNCOMMITTED` * `READ_COMMITTED` * `REPEATABLE_READ` * `SERIALIZABLE` * `SNAPSHOT` (default: `READ_COMMITTED`) * `options.readOnlyIntent` - A boolean, determining whether the connection will request read-only access from a SQL Server Availability Group. For more information, see here. (default: `false`). * `options.encrypt` - A boolean determining whether the connection will be encrypted. 
Set to true if you're on Windows Azure. (default: `true`). * `options.cryptoCredentialsDetails` - When encryption is used, an object may be supplied that will be used for the first argument when calling [tls.createSecurePair](http://nodejs.org/docs/latest/api/tls.html#tls_tls_createsecurepair_credentials_isserver_requestcert_rejectunauthorized) (default: `{}`). * `options.rowCollectionOnDone` - A boolean, that when true will expose received rows in Requests' `done*` events. See done, [doneInProc](http://tediousjs.github.io/tedious/api-request.html#event_doneInProc) and [doneProc](http://tediousjs.github.io/tedious/api-request.html#event_doneProc). (default: `false`) Caution: If many rows are received, enabling this option could result in excessive memory usage. * `options.rowCollectionOnRequestCompletion` - A boolean, that when true will expose received rows in Requests' completion callback. See [new Request](http://tediousjs.github.io/tedious/api-request.html#function_newRequest). (default: `false`) Caution: If many rows are received, enabling this option could result in excessive memory usage. * `options.tdsVersion` - The version of TDS to use. If the server doesn't support the specified version, a negotiated version is used instead. The versions are available from `require('tedious').TDS_VERSION`. * `7_1` * `7_2` * `7_3_A` * `7_3_B` * `7_4` (default: `7_4`) * `options.appName` - Application name used for identifying a specific application in profiling, logging or tracing tools of SQL Server. (default: `node-mssql`) * `options.trustServerCertificate` - A boolean, controlling whether encryption occurs if there is no verifiable server certificate. (default: `false`) * `options.multiSubnetFailover` - A boolean, controlling whether the driver should connect to all IPs returned from DNS in parallel. (default: `false`) * `options.debug.packet` - A boolean, controlling whether `debug` events will be emitted with text describing packet details (default: `false`). 
* `options.debug.data` - A boolean, controlling whether `debug` events will be emitted with text describing packet data details (default: `false`). * `options.debug.payload` - A boolean, controlling whether `debug` events will be emitted with text describing packet payload details (default: `false`). * `options.debug.token` - A boolean, controlling whether `debug` events will be emitted with text describing token stream tokens (default: `false`). ## Column Types[​](#column-types "Direct link to Column Types") `int`, `bigint`, `bit`, `decimal`, `money`, `numeric`, `smallint`, `smallmoney`, `tinyint`, `float`, `real`, `date`, `datetime2`, `datetime`, `datetimeoffset`, `smalldatetime`, `time`, `char`, `varchar`, `text`, `nchar`, `nvarchar`, `ntext`, `binary`, `image`, `varbinary`, `hierarchyid`, `sql_variant`, `timestamp`, `uniqueidentifier`, `xml`, `geometry`, `geography`, `rowversion`, `vector` ### Vector Type (vector)[​](#vector-type-vector "Direct link to Vector Type (vector)") The `vector` data type is available in SQL Server for storing high-dimensional vectors, commonly used for: * Semantic search with embeddings * Recommendation systems * Similarity matching * Machine learning applications NOTE: general `halfvec` type support is unavailable because this feature is still in preview. See the Microsoft docs: [Vector data type](https://learn.microsoft.com/en-us/sql/t-sql/data-types/vector-data-type). 
#### Usage[​](#usage "Direct link to Usage") ``` @Entity() export class DocumentChunk { @PrimaryGeneratedColumn() id: number @Column("varchar") content: string // Vector column with 1998 dimensions @Column("vector", { length: 1998 }) embedding: number[] } ``` #### Vector Similarity Search[​](#vector-similarity-search "Direct link to Vector Similarity Search") SQL Server provides the `VECTOR_DISTANCE` function for calculating distances between vectors: ``` const queryEmbedding = [ /* your query vector */ ] const results = await dataSource.query( ` DECLARE @question AS VECTOR (1998) = @0; SELECT TOP (10) dc.*, VECTOR_DISTANCE('cosine', @question, embedding) AS distance FROM document_chunk dc ORDER BY VECTOR_DISTANCE('cosine', @question, embedding) `, [JSON.stringify(queryEmbedding)], ) ``` **Distance Metrics:** * `'cosine'` - Cosine distance (most common for semantic search) * `'euclidean'` - Euclidean (L2) distance * `'dot'` - Negative dot product **Requirements:** * SQL Server version with vector support enabled * Vector dimensions must be specified using the `length` option --- # MongoDB ## MongoDB support[​](#mongodb-support "Direct link to MongoDB support") TypeORM has basic MongoDB support. Most of TypeORM's functionality is RDBMS-specific; this page contains all MongoDB-specific functionality documentation. ## Installation[​](#installation "Direct link to Installation") ``` npm install mongodb ``` ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") * `url` - Connection url where the connection is performed. Please note that other data source options will override parameters set from the url. * `host` - Database host. * `port` - Database host port. Default mongodb port is `27017`. * `username` - Database username (replacement for `auth.user`). * `password` - Database password (replacement for `auth.password`). * `database` - Database name. * `poolSize` - Set the maximum pool size for each server or proxy connection.
* `tls` - Use a TLS/SSL connection (needs a mongod server with ssl support, 2.4 or higher). Default: `false`. * `tlsAllowInvalidCertificates` - Specifies whether the driver generates an error when the server's TLS certificate is invalid. Default: `false`. * `tlsCAFile` - Specifies the location of a local .pem file that contains the root certificate chain from the Certificate Authority. * `tlsCertificateKeyFile` - Specifies the location of a local .pem file that contains the client's TLS/SSL certificate and key. * `tlsCertificateKeyFilePassword` - Specifies the password to decrypt the `tlsCertificateKeyFile`. * `keepAlive` - The number of milliseconds to wait before initiating keepAlive on the TCP socket. Default: `30000`. * `connectTimeoutMS` - TCP Connection timeout setting. Default: `30000`. * `socketTimeoutMS` - TCP Socket timeout setting. Default: `360000`. * `replicaSet` - The name of the replica set to connect to. * `authSource` - The database to authenticate against, if the user credentials are stored in a database other than `database`. * `writeConcern` - The write concern. * `forceServerObjectId` - Force server to assign \_id values instead of driver. Default: `false`. * `serializeFunctions` - Serialize functions on any object. Default: `false`. * `ignoreUndefined` - Specify if the BSON serializer should ignore undefined fields. Default: `false`. * `raw` - Return document results as raw BSON buffers. Default: `false`. * `promoteLongs` - Promotes Long values to number if they fit inside the 53-bit resolution. Default: `true`. * `promoteBuffers` - Promotes Binary BSON values to native Node Buffers. Default: `false`. * `promoteValues` - Promotes BSON values to native types where possible, set to false to only receive wrapper types. Default: `true`. * `readPreference` - The preferred read preference.
* `ReadPreference.PRIMARY` * `ReadPreference.PRIMARY_PREFERRED` * `ReadPreference.SECONDARY` * `ReadPreference.SECONDARY_PREFERRED` * `ReadPreference.NEAREST` * `pkFactory` - A primary key factory object for generation of custom \_id keys. * `readConcern` - Specify a read concern for the collection (only MongoDB 3.2 or higher is supported). * `maxStalenessSeconds` - Specify a maxStalenessSeconds value for secondary reads; the minimum is 90 seconds. * `appName` - The name of the application that created this MongoClient instance. MongoDB 3.4 and newer will print this value in the server log upon establishing each connection. It is also recorded in the slow query log and profile collections. * `authMechanism` - Sets the authentication mechanism that MongoDB will use to authenticate the connection. * `directConnection` - Specifies whether to force-dispatch all operations to the specified host. Additional options can be added to the `extra` object and will be passed directly to the client library. See more in `mongodb`'s documentation for [Connection Options](https://mongodb-node.netlify.app/docs/drivers/node/current/connect/connection-options/). ## Defining entities and columns[​](#defining-entities-and-columns "Direct link to Defining entities and columns") Defining entities and columns is almost the same as in relational databases; the main difference is that you must use `@ObjectIdColumn` instead of `@PrimaryColumn` or `@PrimaryGeneratedColumn`.
Simple entity example: ``` import { Entity, ObjectId, ObjectIdColumn, Column } from "typeorm" @Entity() export class User { @ObjectIdColumn() _id: ObjectId @Column() firstName: string @Column() lastName: string } ``` And this is how you bootstrap the app: ``` import { DataSource } from "typeorm" const myDataSource = new DataSource({ type: "mongodb", host: "localhost", port: 27017, database: "test", entities: [User], }) ``` ## Defining subdocuments (embed documents)[​](#defining-subdocuments-embed-documents "Direct link to Defining subdocuments (embed documents)") Since MongoDB stores objects and objects inside objects (or documents inside documents), you can do the same in TypeORM: ``` import { Column } from "typeorm" export class Profile { @Column() about: string @Column() education: string @Column() career: string } ``` ``` import { Column } from "typeorm" export class Photo { @Column() url: string @Column() description: string @Column() size: number constructor(url: string, description: string, size: number) { this.url = url this.description = description this.size = size } } ``` ``` import { Entity, ObjectId, ObjectIdColumn, Column } from "typeorm" @Entity() export class User { @ObjectIdColumn() id: ObjectId @Column() firstName: string @Column() lastName: string @Column((type) => Profile) profile: Profile @Column((type) => Photo) photos: Photo[] } ``` If you save this entity: ``` const user = new User() user.firstName = "Timber" user.lastName = "Saw" user.profile = new Profile() user.profile.about = "About Trees and Me" user.profile.education = "Tree School" user.profile.career = "Lumberjack" user.photos = [ new Photo("me-and-trees.jpg", "Me and Trees", 100), new Photo("me-and-chakram.jpg", "Me and Chakram", 200), ] await myDataSource.manager.save(user) ``` The following document will be saved in the database: ``` { "firstName": "Timber", "lastName":
"Saw", "profile": { "about": "About Trees and Me", "education": "Tree School", "career": "Lumberjack" }, "photos": [ { "url": "me-and-trees.jpg", "description": "Me and Trees", "size": 100 }, { "url": "me-and-chakram.jpg", "description": "Me and Chakram", "size": 200 } ] } ``` ## Using `MongoEntityManager` and `MongoRepository`[​](#using-mongoentitymanager-and-mongorepository "Direct link to using-mongoentitymanager-and-mongorepository") You can use the majority of methods inside the `EntityManager` (except for RDBMS-specific ones, such as `query` and `transaction`). For example: ``` const timber = await myDataSource.manager.findOneBy(User, { firstName: "Timber", lastName: "Saw", }) ``` For MongoDB there is also a separate `MongoEntityManager`, which extends `EntityManager`: ``` const timber = await myDataSource.mongoManager.findOneBy(User, { firstName: "Timber", lastName: "Saw", }) ``` Just like the separate `MongoEntityManager`, there is a `MongoRepository` that extends `Repository`: ``` const timber = await myDataSource.getMongoRepository(User).findOneBy({ firstName: "Timber", lastName: "Saw", }) ``` You can use advanced options in `find()`: Equal: ``` const timber = await myDataSource.getMongoRepository(User).find({ where: { firstName: { $eq: "Timber" }, }, }) ``` LessThan: ``` const timber = await myDataSource.getMongoRepository(User).find({ where: { age: { $lt: 60 }, }, }) ``` In: ``` const timber = await myDataSource.getMongoRepository(User).find({ where: { firstName: { $in: ["Timber", "Zhang"] }, }, }) ``` Not in: ``` const timber = await myDataSource.getMongoRepository(User).find({ where: { firstName: { $not: { $in: ["Timber", "Zhang"] } }, }, }) ``` Or: ``` const timber = await myDataSource.getMongoRepository(User).find({ where: { $or: [{ firstName: "Timber" }, { firstName: "Zhang" }], }, }) ``` Querying subdocuments: ``` const users = await myDataSource.getMongoRepository(User).find({ where: { "profile.education": { $eq: "Tree School" }, }, }) ``` Querying an array of subdocuments:
``` // Query users with photos of size less than 500 const users = await myDataSource.getMongoRepository(User).find({ where: { "photos.size": { $lt: 500 }, }, }) ``` Both `MongoEntityManager` and `MongoRepository` contain a lot of useful MongoDB-specific methods: ### `createCursor`[​](#createcursor "Direct link to createcursor") Create a cursor for a query that can be used to iterate over results from MongoDB. ### `createEntityCursor`[​](#createentitycursor "Direct link to createentitycursor") Create a cursor for a query that can be used to iterate over results from MongoDB. This returns a modified version of the cursor that transforms each result into Entity models. ### `aggregate`[​](#aggregate "Direct link to aggregate") Execute an aggregation framework pipeline against the collection. ### `bulkWrite`[​](#bulkwrite "Direct link to bulkwrite") Perform a bulkWrite operation without a fluent API. ### `count`[​](#count "Direct link to count") Count the number of documents in the collection that match the query. ### `countDocuments`[​](#countdocuments "Direct link to countdocuments") Count the number of documents in the collection that match the query. ### `createCollectionIndex`[​](#createcollectionindex "Direct link to createcollectionindex") Create an index on the db and collection. ### `createCollectionIndexes`[​](#createcollectionindexes "Direct link to createcollectionindexes") Create multiple indexes in the collection; this method is only supported in MongoDB 2.6 or higher. Earlier versions of MongoDB will throw a "command not supported" error. Index specifications are defined at [createIndexes](http://docs.mongodb.org/manual/reference/command/createIndexes/). ### `deleteMany`[​](#deletemany "Direct link to deletemany") Delete multiple documents on MongoDB. ### `deleteOne`[​](#deleteone "Direct link to deleteone") Delete a document on MongoDB.
### `distinct`[​](#distinct "Direct link to distinct") The distinct command returns a list of distinct values for the given key across a collection. ### `dropCollectionIndex`[​](#dropcollectionindex "Direct link to dropcollectionindex") Drops an index from this collection. ### `dropCollectionIndexes`[​](#dropcollectionindexes "Direct link to dropcollectionindexes") Drops all indexes from the collection. ### `findOneAndDelete`[​](#findoneanddelete "Direct link to findoneanddelete") Find a document and delete it in one atomic operation, requires a write lock for the duration of the operation. ### `findOneAndReplace`[​](#findoneandreplace "Direct link to findoneandreplace") Find a document and replace it in one atomic operation, requires a write lock for the duration of the operation. ### `findOneAndUpdate`[​](#findoneandupdate "Direct link to findoneandupdate") Find a document and update it in one atomic operation, requires a write lock for the duration of the operation. ### `geoHaystackSearch`[​](#geohaystacksearch "Direct link to geohaystacksearch") Execute a geo search using a geo haystack index on a collection. ### `geoNear`[​](#geonear "Direct link to geonear") Execute the geoNear command to search for items in the collection. ### `group`[​](#group "Direct link to group") Run a group command across a collection. ### `collectionIndexes`[​](#collectionindexes "Direct link to collectionindexes") Retrieve all the indexes of the collection. ### `collectionIndexExists`[​](#collectionindexexists "Direct link to collectionindexexists") Check whether an index exists on the collection. ### `collectionIndexInformation`[​](#collectionindexinformation "Direct link to collectionindexinformation") Retrieve this collection's index info.
### `initializeOrderedBulkOp`[​](#initializeorderedbulkop "Direct link to initializeorderedbulkop") Initiate an in-order bulk write operation; operations are executed serially in the order they are added, creating a new operation for each switch in types. ### `initializeUnorderedBulkOp`[​](#initializeunorderedbulkop "Direct link to initializeunorderedbulkop") Initiate an out-of-order batch write operation. All operations are buffered into insert/update/remove commands executed out of order. ### `insertMany`[​](#insertmany "Direct link to insertmany") Insert an array of documents into MongoDB. ### `insertOne`[​](#insertone "Direct link to insertone") Insert a single document into MongoDB. ### `isCapped`[​](#iscapped "Direct link to iscapped") Return whether the collection is a capped collection. ### `listCollectionIndexes`[​](#listcollectionindexes "Direct link to listcollectionindexes") Get the list of all indexes information for the collection. ### `parallelCollectionScan`[​](#parallelcollectionscan "Direct link to parallelcollectionscan") Return N parallel cursors for a collection, allowing parallel reading of the entire collection. There are no ordering guarantees for returned results. ### `reIndex`[​](#reindex "Direct link to reindex") Reindex all indexes on the collection. Warning: `reIndex` is a blocking operation (indexes are rebuilt in the foreground) and will be slow for large collections. ### `rename`[​](#rename "Direct link to rename") Change the name of an existing collection. ### `replaceOne`[​](#replaceone "Direct link to replaceone") Replace a document on MongoDB. ### `stats`[​](#stats "Direct link to stats") Get all the collection statistics. ### `updateMany`[​](#updatemany "Direct link to updatemany") Update multiple documents within the collection based on the filter. ### `updateOne`[​](#updateone "Direct link to updateone") Update a single document within the collection based on the filter.
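The write methods above take the same argument shapes as the underlying MongoDB driver. As a sketch of what `bulkWrite` accepts, here is an illustrative operations array (plain objects following the MongoDB driver's `bulkWrite` format; the field values reuse the `User` example from this page):

```typescript
// Each element describes one write operation; the operation names
// (insertOne, updateOne, deleteOne, ...) and their nested shapes follow
// the MongoDB driver's bulkWrite format.
const operations = [
    { insertOne: { document: { firstName: "Timber", lastName: "Saw" } } },
    {
        updateOne: {
            filter: { firstName: "Zhang" },
            update: { $set: { lastName: "Wei" } },
        },
    },
    { deleteOne: { filter: { firstName: "Ghost" } } },
]

// e.g. await myDataSource.getMongoRepository(User).bulkWrite(operations)
```

Ordered execution (the default) stops at the first error; pass `{ ordered: false }` in the driver options to continue past failures.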
--- # MySQL / MariaDB MySQL, MariaDB and Amazon Aurora MySQL are supported as TypeORM drivers. ## Installation[​](#installation "Direct link to Installation") Either `mysql` or `mysql2` is required to connect to a MySQL/MariaDB data source. Only `mysql2` can connect to MySQL 8.0 or later, and it is recommended because it is still maintained. See more about [mysql2](https://sidorares.github.io/node-mysql2/docs/history-and-why-mysq2). ``` npm install mysql ``` or: ``` npm install mysql2 ``` ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") See [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md) for the common data source options. You can use the data source types `mysql`, `mariadb` and `aurora-mysql` to connect to the respective databases. * `connectorPackage` - The database client, either `mysql` or `mysql2`. If the specified client cannot be loaded, it will fall back to the alternative. (Current default: `mysql`) * `url` - Connection url where the connection is performed. Please note that other data source options will override parameters set from url. * `host` - Database host. * `port` - Database host port. The default MySQL port is `3306`. * `username` - Database username. * `password` - Database password. * `database` - Database name. * `socketPath` - Database socket path. * `poolSize` - Maximum number of clients the pool should contain for each connection. * `charset` and `collation` - The charset/collation for the connection. If an SQL-level charset is specified (like `utf8mb4`), then the default collation for that charset is used. * `timezone` - The timezone configured on the MySQL server. This is used to typecast server date/time values to a JavaScript Date object and vice versa. This can be `local`, `Z`, or an offset in the form `+HH:MM` or `-HH:MM`. (Default: `local`) * `connectTimeout` - The milliseconds before a timeout occurs during the initial connection to the MySQL server.
(Default: `10000`) * `acquireTimeout` - The milliseconds before a timeout occurs during the initial connection to the MySQL server. It differs from `connectTimeout` as it governs the TCP connection timeout whereas connectTimeout does not. (default: `10000`) * `insecureAuth` - Allow connecting to MySQL instances that ask for the old (insecure) authentication method. (Default: `false`) * `supportBigNumbers` - When dealing with big numbers (`BIGINT` and `DECIMAL` columns) in the database, you should enable this option (Default: `true`) * `bigNumberStrings` - Enabling both `supportBigNumbers` and `bigNumberStrings` forces big numbers (`BIGINT` and `DECIMAL` columns) to be always returned as JavaScript String objects (Default: `true`). Enabling `supportBigNumbers` but leaving `bigNumberStrings` disabled will return big numbers as String objects only when they cannot be accurately represented with [JavaScript Number objects](http://ecma262-5.com/ELS5_HTML.htm#Section_8.5) (which happens when they exceed the `[-2^53, +2^53]` range), otherwise they will be returned as Number objects. This option is ignored if `supportBigNumbers` is disabled. * `dateStrings` - Force date types (`TIMESTAMP`, `DATETIME`, `DATE`) to be returned as strings rather than inflated into JavaScript Date objects. Can be true/false or an array of type names to keep as strings. (Default: `false`) * `debug` - Prints protocol details to stdout. Can be true/false or an array of packet type names that should be printed. (Default: `false`) * `trace` - Generates stack traces on Error to include call site of library entrance ("long stack traces"). Slight performance penalty for most calls. (Default: `true`) * `multipleStatements` - Allow multiple mysql statements per query. Be careful with this, it could increase the scope of SQL injection attacks. 
(Default: `false`) * `legacySpatialSupport` - Use legacy spatial functions like `GeomFromText` and `AsText` which have been replaced by the standard-compliant `ST_GeomFromText` or `ST_AsText` in MySQL 8.0. (Current default: true) * `flags` - List of connection flags to use other than the default ones. It is also possible to blacklist default ones. For more information, check [Connection Flags](https://github.com/mysqljs/mysql#connection-flags). * `ssl` - object with SSL parameters or a string containing the name of the SSL profile. See [SSL options](https://github.com/mysqljs/mysql#ssl-options). * `enableQueryTimeout` - If a value is specified for maxQueryExecutionTime, in addition to generating a warning log when a query exceeds this time limit, the specified maxQueryExecutionTime value is also used as the timeout for the query. For more information, check [mysql timeouts](https://github.com/mysqljs/mysql#timeouts). Additional options can be added to the `extra` object and will be passed directly to the client library. See more in the [mysql connection options](https://github.com/mysqljs/mysql#connection-options) or the [mysql2 documentation](https://sidorares.github.io/node-mysql2/docs). ## Column Types[​](#column-types "Direct link to Column Types") `bit`, `int`, `integer`, `tinyint`, `smallint`, `mediumint`, `bigint`, `float`, `double`, `double precision`, `dec`, `decimal`, `numeric`, `fixed`, `bool`, `boolean`, `date`, `datetime`, `timestamp`, `time`, `year`, `char`, `nchar`, `national char`, `varchar`, `nvarchar`, `national varchar`, `text`, `tinytext`, `mediumtext`, `blob`, `longtext`, `tinyblob`, `mediumblob`, `longblob`, `enum`, `set`, `json`, `binary`, `varbinary`, `geometry`, `point`, `linestring`, `polygon`, `multipoint`, `multilinestring`, `multipolygon`, `geometrycollection`, `uuid`, `inet4`, `inet6` > Note: `uuid`, `inet4`, and `inet6` are only available for MariaDB and for the respective versions that made them available. 
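The `supportBigNumbers`/`bigNumberStrings` options above exist because JavaScript numbers are IEEE-754 doubles and cannot represent every `BIGINT` or `DECIMAL` exactly. A small self-contained illustration (the string stands in for what the driver would receive from the wire):

```typescript
// A BIGINT value one past Number.MAX_SAFE_INTEGER, as a decimal string.
const fromDb = "9007199254740993" // 2^53 + 1

// Coercing to a JS number rounds to the nearest representable double,
// silently losing the final digit.
const asNumber = Number(fromDb)

// BigInt preserves the value exactly.
const asBigInt = BigInt(fromDb)
```

With `bigNumberStrings` enabled, the driver hands you the string itself, so you can choose between `Number`, `BigInt`, or keeping the string, per column.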
### `enum` column type[​](#enum-column-type "Direct link to enum-column-type") See [enum column type](https://typeorm.io/docs/entity/entities.md#enum-column-type). ### `set` column type[​](#set-column-type "Direct link to set-column-type") The `set` column type is supported by `mariadb` and `mysql`. There are various possible column definitions: Using TypeScript enums: ``` export enum UserRole { ADMIN = "admin", EDITOR = "editor", GHOST = "ghost", } @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column({ type: "set", enum: UserRole, default: [UserRole.GHOST, UserRole.EDITOR], }) roles: UserRole[] } ``` Using an array with `set` values: ``` export type UserRoleType = "admin" | "editor" | "ghost" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column({ type: "set", enum: ["admin", "editor", "ghost"], default: ["ghost", "editor"], }) roles: UserRoleType[] } ``` ### Vector Types[​](#vector-types "Direct link to Vector Types") MySQL has supported the [VECTOR type](https://dev.mysql.com/doc/refman/en/vector.html) since version 9.0, while MariaDB has supported [vectors](https://mariadb.com/docs/server/reference/sql-structure/vectors/vector-overview) since 11.7. --- # Oracle ## Installation[​](#installation "Direct link to Installation") ``` npm install oracledb ``` By default, the [oracledb](https://github.com/oracle/node-oracledb) driver uses the "thin mode" to connect. To enable the "thick mode", you need to follow the installation instructions from their [user guide](https://node-oracledb.readthedocs.io/en/latest/user_guide/installation.html). ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") See [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md) for the common data source options. * `sid` - The System Identifier (SID) identifies a specific database instance. For example, "sales". * `serviceName` - The Service Name is an identifier of a database service.
For example, `sales.us.example.com`. ## Column Types[​](#column-types "Direct link to Column Types") `char`, `nchar`, `nvarchar2`, `varchar2`, `long`, `raw`, `long raw`, `number`, `numeric`, `float`, `dec`, `decimal`, `integer`, `int`, `smallint`, `real`, `double precision`, `date`, `timestamp`, `timestamp with time zone`, `timestamp with local time zone`, `interval year to month`, `interval day to second`, `bfile`, `blob`, `clob`, `nclob`, `rowid`, `urowid` --- # Postgres / CockroachDB PostgreSQL, CockroachDB and Amazon Aurora Postgres are supported as TypeORM drivers. Databases that are PostgreSQL-compatible can also be used with TypeORM via the `postgres` data source type. To use YugabyteDB, refer to [their ORM docs](https://docs.yugabyte.com/stable/drivers-orms/nodejs/typeorm/) to get started. Note that because some Postgres features are [not supported](https://docs.yugabyte.com/stable/develop/postgresql-compatibility/#unsupported-postgresql-features) by YugabyteDB, some TypeORM functionality may be limited. ## Installation[​](#installation "Direct link to Installation") ``` npm install pg ``` For streaming support: ``` npm install pg-query-stream ``` ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") See [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md) for the common data source options. You can use the data source type `postgres`, `cockroachdb` or `aurora-postgres` to connect to the respective databases. * `url` - Connection url where the connection is performed. Please note that other data source options will override parameters set from url. * `host` - Database host. * `port` - Database host port. The default Postgres port is `5432`. * `username` - Database username. * `password` - Database password. * `database` - Database name. * `schema` - Schema name. Default is "public". * `connectTimeoutMS` - The milliseconds before a timeout occurs during the initial connection to the postgres server. 
If `undefined`, or set to `0`, there is no timeout. Defaults to `undefined`. * `ssl` - Object with ssl parameters. See [TLS/SSL](https://node-postgres.com/features/ssl). * `uuidExtension` - The Postgres extension to use when generating UUIDs. Defaults to `uuid-ossp`. It can be changed to `pgcrypto` if the `uuid-ossp` extension is unavailable. * `poolErrorHandler` - A function that gets called when the underlying pool emits an `'error'` event. Takes a single parameter (error instance) and defaults to logging with `warn` level. * `maxTransactionRetries` - The maximum number of transaction retries in case of a 40001 error. Defaults to 5. * `logNotifications` - A boolean to determine whether postgres server [notice messages](https://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html) and [notification events](https://www.postgresql.org/docs/current/sql-notify.html) should be included in the client's logs with `info` level (default: `false`). * `installExtensions` - A boolean to control whether to install necessary postgres extensions automatically or not (default: `true`). * `applicationName` - A string visible in statistics and logs to help associate an application with a connection (default: `undefined`). * `parseInt8` - A boolean to enable parsing 64-bit integers (int8) as JavaScript numbers. By default, `int8` (bigint) values are returned as strings to avoid overflows. JavaScript numbers are IEEE-754 doubles and lose precision beyond the maximum safe integer (`Number.MAX_SAFE_INTEGER`, i.e. `2^53 - 1`). If you require the full 64-bit range, consider working with the returned strings or converting them to native `bigint` instead of using this option. Additional options can be added to the `extra` object and will be passed directly to the client library. See more in `pg`'s documentation for [Pool](https://node-postgres.com/apis/pool#new-pool) and [Client](https://node-postgres.com/apis/client#new-client).
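To show how driver-level settings and `extra` fit together, here is a hedged sketch of a `postgres` options object. The host and credentials are placeholders, and the keys inside `extra` are `pg` Pool options (`max`, `idleTimeoutMillis`), not TypeORM options:

```typescript
// Illustrative only - adjust every value to your environment.
const pgOptions = {
    type: "postgres" as const,
    host: "localhost",
    port: 5432,              // default Postgres port
    username: "app",         // placeholder
    password: "secret",      // placeholder
    database: "test",
    parseInt8: false,        // keep int8/bigint values as strings (the default)
    extra: {
        max: 20,                  // pg Pool: maximum number of clients
        idleTimeoutMillis: 30000, // pg Pool: close clients idle for 30s
    },
}
```

Everything under `extra` is forwarded verbatim, so any option documented for `pg`'s `Pool` or `Client` constructors can go there.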
## Column Types[​](#column-types "Direct link to Column Types") ### Column types for `postgres`[​](#column-types-for-postgres "Direct link to column-types-for-postgres") `int`, `int2`, `int4`, `int8`, `smallint`, `integer`, `bigint`, `decimal`, `numeric`, `real`, `float`, `float4`, `float8`, `double precision`, `money`, `character varying`, `varchar`, `character`, `char`, `text`, `citext`, `hstore`, `bytea`, `bit`, `varbit`, `bit varying`, `timetz`, `timestamptz`, `timestamp`, `timestamp without time zone`, `timestamp with time zone`, `date`, `time`, `time without time zone`, `time with time zone`, `interval`, `bool`, `boolean`, `enum`, `point`, `line`, `lseg`, `box`, `path`, `polygon`, `circle`, `cidr`, `inet`, `macaddr`, `macaddr8`, `tsvector`, `tsquery`, `uuid`, `xml`, `json`, `jsonb`, `jsonpath`, `int4range`, `int8range`, `numrange`, `tsrange`, `tstzrange`, `daterange`, `int4multirange`, `int8multirange`, `nummultirange`, `tsmultirange`, `tstzmultirange`, `datemultirange`, `geometry`, `geography`, `cube`, `ltree`, `vector`, `halfvec`. ### Column types for `cockroachdb`[​](#column-types-for-cockroachdb "Direct link to column-types-for-cockroachdb") `array`, `bool`, `boolean`, `bytes`, `bytea`, `blob`, `date`, `numeric`, `decimal`, `dec`, `float`, `float4`, `float8`, `double precision`, `real`, `inet`, `int`, `integer`, `int2`, `int8`, `int64`, `smallint`, `bigint`, `interval`, `string`, `character varying`, `character`, `char`, `char varying`, `varchar`, `text`, `time`, `time without time zone`, `timestamp`, `timestamptz`, `timestamp without time zone`, `timestamp with time zone`, `json`, `jsonb`, `uuid` Note: CockroachDB returns all numeric data types as `string`. However, if you omit the column type and define your property as `number`, the ORM will `parseInt` the string into a number.
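That `parseInt` conversion silently truncates any fractional part, which is easy to demonstrate with the string values CockroachDB returns:

```typescript
// CockroachDB returns numeric columns as strings, e.g. "123.45".
const raw = "123.45"

// parseInt stops at the decimal point, dropping the fraction.
const truncated = parseInt(raw) // 123

// Number preserves the fraction (within double precision).
const exact = Number(raw) // 123.45
```

For columns that can hold decimals or values beyond 2^53, declare the property as `string` (or convert explicitly) instead of relying on the implicit `parseInt`.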
### Vector columns[​](#vector-columns "Direct link to Vector columns") Vector columns can be used for similarity searches using PostgreSQL's vector operators: ``` // L2 distance (Euclidean) - <-> const l2Results = await dataSource.sql` SELECT id, embedding FROM post ORDER BY embedding <-> ${"[1,2,3]"} LIMIT 5` // Cosine distance - <=> const cosineResults = await dataSource.sql` SELECT id, embedding FROM post ORDER BY embedding <=> ${"[1,2,3]"} LIMIT 5` // Negative inner product - <#> const innerProductResults = await dataSource.sql` SELECT id, embedding FROM post ORDER BY embedding <#> ${"[1,2,3]"} LIMIT 5` ``` ### Spatial columns[​](#spatial-columns "Direct link to Spatial columns") TypeORM's PostgreSQL and CockroachDB support uses [GeoJSON](http://geojson.org/) as an interchange format, so geometry columns should be tagged either as `object` or `Geometry` (or subclasses, e.g. `Point`) after importing [`geojson` types](https://www.npmjs.com/package/@types/geojson) or using the TypeORM built-in GeoJSON types: ``` import { Entity, PrimaryColumn, Column, Point, LineString, MultiPoint } from "typeorm" @Entity() export class Thing { @PrimaryColumn() id: number @Column("geometry") point: Point @Column("geometry") linestring: LineString @Column("geometry", { spatialFeatureType: "MultiPoint", srid: 4326, }) multiPointWithSRID: MultiPoint } ... const thing = new Thing() thing.point = { type: "Point", coordinates: [116.443987, 39.920843], } thing.linestring = { type: "LineString", coordinates: [ [-87.623177, 41.881832], [-90.199402, 38.627003], [-82.446732, 38.413651], [-87.623177, 41.881832], ], } thing.multiPointWithSRID = { type: "MultiPoint", coordinates: [ [100.0, 0.0], [101.0, 1.0], ], } ``` TypeORM tries to do the right thing, but it's not always possible to determine when a value being inserted or the result of a PostGIS function should be treated as a geometry.
As a result, you may find yourself writing code similar to the following, where values are converted into PostGIS `geometry`s from GeoJSON and into GeoJSON as `json`: ``` import { Point } from "typeorm" const origin: Point = { type: "Point", coordinates: [0, 0], } await dataSource.manager .createQueryBuilder(Thing, "thing") // convert stringified GeoJSON into a geometry with an SRID that matches the // table specification .where( "ST_Distance(geom, ST_SetSRID(ST_GeomFromGeoJSON(:origin), ST_SRID(geom))) > 0", ) .orderBy( "ST_Distance(geom, ST_SetSRID(ST_GeomFromGeoJSON(:origin), ST_SRID(geom)))", "ASC", ) .setParameters({ // stringify GeoJSON origin: JSON.stringify(origin), }) .getMany() await dataSource.manager .createQueryBuilder(Thing, "thing") // convert geometry result into GeoJSON, treated as JSON (so that TypeORM // will know to deserialize it) .select("ST_AsGeoJSON(ST_Buffer(geom, 0.1))::json geom") .from("thing") .getMany() ``` --- # SAP HANA ## Installation[​](#installation "Direct link to Installation") TypeORM relies on `@sap/hana-client` for establishing the database connection: ``` npm install @sap/hana-client ``` If you are using TypeORM 0.3.25 or earlier, `hdb-pool` is also required for managing the pool. ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") See [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md) for the common data source options. * `host` - The hostname of the SAP HANA server. For example, `"localhost"`. * `port` - The port number of the SAP HANA server. For example, `30015`. * `username` - The username to connect to the SAP HANA server. For example, `"SYSTEM"`. * `password` - The password to connect to the SAP HANA server. For example, `"password"`. * `database` - The name of the database to connect to. For example, `"HXE"`. * `encrypt` - Whether to encrypt the connection. For example, `true`. * `sslValidateCertificate` - Whether to validate the SSL certificate. 
For example, `true`. * `key`, `cert` and `ca` - Private key, public certificate and certificate authority for the encrypted connection. * `pool` - Connection pool configuration object: * `maxConnectedOrPooled` (number) - Max active or idle connections in the pool (default: 10). * `maxPooledIdleTime` (seconds) - Time before an idle connection is closed (default: 30). * `maxWaitTimeoutIfPoolExhausted` (milliseconds) - Time to wait for a connection to become available (default: 0, no wait). Requires `@sap/hana-client` version `2.27` or later. * `pingCheck` (boolean) - Whether to validate connections before use (default: false). * `poolCapacity` (number) - Maximum number of connections to be kept available (default: no limit). See the official documentation of SAP HANA Client for more details as well as the `extra` properties: [Node.js Connection Properties](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/4fe9978ebac44f35b9369ef5a4a26f4c.html). ## Column Types[​](#column-types "Direct link to Column Types") SAP HANA 2.0 and SAP HANA Cloud support slightly different data types. Check the SAP Help pages for more information: * [SAP HANA 2.0 Data Types](https://help.sap.com/docs/SAP_HANA_PLATFORM/4fe29514fd584807ac9f2a04f6754767/20a1569875191014b507cf392724b7eb.html?locale=en-US) * [SAP HANA Cloud Data Types](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/data-types) TypeORM's `SapDriver` supports `tinyint`, `smallint`, `integer`, `bigint`, `smalldecimal`, `decimal`, `real`, `double`, `date`, `time`, `seconddate`, `timestamp`, `boolean`, `char`, `nchar`, `varchar`, `nvarchar`, `text`, `alphanum`, `shorttext`, `array`, `varbinary`, `blob`, `clob`, `nclob`, `st_geometry`, `st_point`, `real_vector` and `half_vector`. Some of these data types have been deprecated or removed in SAP HANA Cloud, and will be converted to the closest available alternative when connected to a Cloud database.
### Vector Types[​](#vector-types "Direct link to Vector Types") The `real_vector` and `half_vector` data types were introduced in SAP HANA Cloud (2024Q1 and 2025Q2 respectively), and require a supported version of `@sap/hana-client` as well. For consistency with PostgreSQL's vector support, TypeORM also provides aliases: * `vector` (alias for `real_vector`) - stores vectors as 4-byte floats * `halfvec` (alias for `half_vector`) - stores vectors as 2-byte floats for memory efficiency ``` @Entity() export class Document { @PrimaryGeneratedColumn() id: number // Using SAP HANA native type names @Column("real_vector", { length: 1536 }) embedding: Buffer | number[] @Column("half_vector", { length: 768 }) reduced_embedding: Buffer | number[] // Using cross-database aliases (recommended) @Column("vector", { length: 1536 }) universal_embedding: Buffer | number[] @Column("halfvec", { length: 768 }) universal_reduced_embedding: Buffer | number[] } ``` By default, the client will return a `Buffer` in the `fvecs`/`hvecs` format, which is more efficient. It is possible to let the driver convert the values to a `number[]` by adding `{ extra: { vectorOutputType: "Array" } }` to the connection options. Check the SAP HANA Client documentation for more information about [REAL\_VECTOR](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/0d197e4389c64e6b9cf90f6f698f62fe.html) or [HALF\_VECTOR](https://help.sap.com/docs/SAP_HANA_CLIENT/f1b440ded6144a54ada97ff95dac7adf/8bb854b4ce4a4299bed27c365b717e91.html). Use the appropriate [vector functions](https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-reference-guide/vector-functions) for similarity searches. 
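As an illustration, a similarity search over the `Document` entity above could order results with a vector function such as `COSINE_SIMILARITY`, converting the stringified query vector with `TO_REAL_VECTOR` - a sketch based on the SAP HANA Cloud vector functions reference, not a TypeORM-specific API:

```typescript
// Hypothetical query embedding; in practice this comes from an embedding model.
const queryEmbedding: number[] = [0.12, -0.04, 0.33] // ...1536 values in total

const nearest = await dataSource
    .getRepository(Document)
    .createQueryBuilder("doc")
    // COSINE_SIMILARITY returns higher values for more similar vectors;
    // TO_REAL_VECTOR converts the stringified array into a REAL_VECTOR.
    .orderBy("COSINE_SIMILARITY(doc.embedding, TO_REAL_VECTOR(:query))", "DESC")
    .setParameters({ query: JSON.stringify(queryEmbedding) })
    .limit(10)
    .getMany()
```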
--- # SQLite ## Installation[​](#installation "Direct link to Installation") * for **SQLite**: ``` npm install sqlite3 ``` * for **Better SQLite**: ``` npm install better-sqlite3 ``` * for **sql.js**: ``` npm install sql.js ``` * for **Capacitor**, **Cordova**, **Expo**, **NativeScript** and **React Native**, check the [supported platforms](https://typeorm.io/docs/help/supported-platforms.md). ## Data Source Options[​](#data-source-options "Direct link to Data Source Options") See [Data Source Options](https://typeorm.io/docs/data-source/data-source-options.md) for the common data source options. ### `sqlite` data source options[​](#sqlite-data-source-options "Direct link to sqlite-data-source-options") * `database` - Database path. For example, "mydb.sql" ### `better-sqlite3` data source options[​](#better-sqlite3-data-source-options "Direct link to better-sqlite3-data-source-options") * `database` - Database path. For example, "mydb.sql" * `statementCacheSize` - Cache size for prepared SQLite statements, used to speed up queries (default 100). * `prepareDatabase` - Function to run before a database is used in TypeORM. You can access the original better-sqlite3 Database object here. * `nativeBinding` - Relative or absolute path to the native addon (better\_sqlite3.node). ### `sql.js` data source options[​](#sqljs-data-source-options "Direct link to sqljs-data-source-options") * `database`: The raw `Uint8Array` database that should be imported. * `sqlJsConfig`: Optional initialization config for sql.js. * `autoSave`: Enable automatic persistence of database changes, requires either `location` or `autoSaveCallback`. When set to `true`, every change is saved to the file system (Node.js) or to `localStorage`/`indexedDB` (browser) if `location` is specified, or the `autoSaveCallback` is invoked otherwise. * `autoSaveCallback`: A function that gets called when changes to the database are made and `autoSave` is enabled. The function gets a `Uint8Array` that represents the database.
* `location`: The file location to load and save the database to. * `useLocalForage`: Enables the usage of the [localforage library](https://github.com/localForage/localForage) to save and load the database asynchronously from indexedDB, instead of using the synchronous local storage methods in a browser environment. The localforage node module needs to be added to your project, and `localforage.js` should be imported in your page. ### `capacitor` data source options[​](#capacitor-data-source-options "Direct link to capacitor-data-source-options") * `database` - Database name (capacitor-sqlite will add the suffix `SQLite.db`) * `driver` - The capacitor-sqlite instance. For example, `new SQLiteConnection(CapacitorSQLite)`. * `mode` - Set the mode for database encryption: "no-encryption" | "encryption" | "secret" | "newsecret" * `version` - Database version * `journalMode` - The SQLite journal mode (optional) ### `cordova` data source options[​](#cordova-data-source-options "Direct link to cordova-data-source-options") * `database` - Database name * `location` - Where to save the database. See [cordova-sqlite-storage](https://github.com/litehelpers/Cordova-sqlite-storage#opening-a-database) for options. ### `expo` data source options[​](#expo-data-source-options "Direct link to expo-data-source-options") * `database` - Name of the database. For example, "mydb". * `driver` - The Expo SQLite module. For example, `require('expo-sqlite')`. ### `nativescript` data source options[​](#nativescript-data-source-options "Direct link to nativescript-data-source-options") * `database` - Database name ### `react-native` data source options[​](#react-native-data-source-options "Direct link to react-native-data-source-options") * `database` - Database name * `location` - Where to save the database. See [react-native-sqlite-storage](https://github.com/andpor/react-native-sqlite-storage#opening-a-database) for options.
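For instance, a minimal `better-sqlite3` data source using the options above might look like this (the database path, cache size, and entities glob are placeholders):

```typescript
import { DataSource } from "typeorm"

// Sketch of a better-sqlite3 data source; all values are placeholders.
const dataSource = new DataSource({
    type: "better-sqlite3",
    database: "mydb.sql",
    statementCacheSize: 100,
    entities: ["entity/*.js"],
})

await dataSource.initialize()
```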
## Column Types[​](#column-types "Direct link to Column Types") `int`, `int2`, `int8`, `integer`, `tinyint`, `smallint`, `mediumint`, `bigint`, `decimal`, `numeric`, `float`, `double`, `real`, `double precision`, `datetime`, `varying character`, `character`, `native character`, `varchar`, `nchar`, `nvarchar2`, `unsigned big int`, `boolean`, `blob`, `text`, `clob`, `date` --- # Embedded Entities There is an amazing way to reduce duplication in your app (using composition over inheritance) by using `embedded columns`. An embedded column is a column that accepts a class with its own columns and merges those columns into the current entity's database table. Example: Let's say we have `User`, `Employee` and `Student` entities. All of those entities have a few things in common - the `first name` and `last name` properties: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() isActive: boolean } ``` ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class Employee { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() salary: number } ``` ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class Student { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() faculty: string } ``` We can reduce the `firstName` and `lastName` duplication by creating a new class with those columns: ``` import { Column } from "typeorm" export class Name { @Column() first: string @Column() last: string } ``` Then you can "connect" those columns in your entities: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" import { Name } from "./Name" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column(() => Name) name: Name @Column() isActive: boolean }
``` ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" import { Name } from "./Name" @Entity() export class Employee { @PrimaryGeneratedColumn() id: number @Column(() => Name) name: Name @Column() salary: number } ``` ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" import { Name } from "./Name" @Entity() export class Student { @PrimaryGeneratedColumn() id: number @Column(() => Name) name: Name @Column() faculty: string } ``` All columns defined in the `Name` entity will be merged into `user`, `employee` and `student`: ``` +-------------+--------------+----------------------------+ | user | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | nameFirst | varchar(255) | | | nameLast | varchar(255) | | | isActive | boolean | | +-------------+--------------+----------------------------+ +-------------+--------------+----------------------------+ | employee | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | nameFirst | varchar(255) | | | nameLast | varchar(255) | | | salary | int | | +-------------+--------------+----------------------------+ +-------------+--------------+----------------------------+ | student | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | nameFirst | varchar(255) | | | nameLast | varchar(255) | | | faculty | varchar(255) | | +-------------+--------------+----------------------------+ ``` This way, code duplication in the entity classes is reduced. You can use as many columns (or relations) in embedded classes as you need. You can even have nested embedded columns inside embedded classes. --- # Entities ## What is an Entity?[​](#what-is-an-entity "Direct link to What is an Entity?") An entity is a class that maps to a database table (or collection when using MongoDB).
You can create an entity by defining a new class and marking it with `@Entity()`: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() isActive: boolean } ``` This will create the following database table: ``` +-------------+--------------+----------------------------+ | user | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | firstName | varchar(255) | | | lastName | varchar(255) | | | isActive | boolean | | +-------------+--------------+----------------------------+ ``` Basic entities consist of columns and relations. Each entity **MUST** have a primary column (or ObjectId column if you are using MongoDB). Each entity must be registered in your data source options: ``` import { DataSource } from "typeorm" import { User } from "./entity/User" const myDataSource = new DataSource({ type: "mysql", host: "localhost", port: 3306, username: "test", password: "test", database: "test", entities: [User], }) ``` Or you can specify the whole directory with all entities inside - and all of them will be loaded: ``` import { DataSource } from "typeorm" const dataSource = new DataSource({ type: "mysql", host: "localhost", port: 3306, username: "test", password: "test", database: "test", entities: ["entity/*.js"], }) ``` If you want to use an alternative table name for the `User` entity you can specify it in `@Entity`: `@Entity("my_users")`. If you want to set a base prefix for all database tables in your application you can specify `entityPrefix` in data source options. When using an entity constructor, its arguments **must be optional**, because the ORM creates instances of entity classes when loading from the database and is not aware of your constructor arguments. Learn more about `@Entity` parameters in the [Decorators reference](https://typeorm.io/docs/help/decorator-reference.md).
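For example, a constructor like the following satisfies that requirement - every argument is optional, so TypeORM can instantiate the class as `new User()` internally (the convenience constructor itself is illustrative, not required by TypeORM):

```typescript
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    firstName: string

    @Column()
    lastName: string

    // Optional arguments let TypeORM instantiate the class with
    // no arguments when loading rows from the database.
    constructor(firstName?: string, lastName?: string) {
        if (firstName !== undefined) this.firstName = firstName
        if (lastName !== undefined) this.lastName = lastName
    }
}
```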
## Entity columns[​](#entity-columns "Direct link to Entity columns") Since database tables consist of columns, your entities must consist of columns too. Each entity class property you marked with `@Column` will be mapped to a database table column. ### Primary columns[​](#primary-columns "Direct link to Primary columns") Each entity must have at least one primary column. There are several types of primary columns: * `@PrimaryColumn()` creates a primary column which takes any value of any type. You can specify the column type. If you don't specify a column type it will be inferred from the property type. The example below will create an `id` column with the `int` type, which you must assign manually before saving. ``` import { Entity, PrimaryColumn } from "typeorm" @Entity() export class User { @PrimaryColumn() id: number } ``` * `@PrimaryGeneratedColumn()` creates a primary column whose value is generated automatically as an auto-increment value. It will create an `int` column with `auto-increment`/`serial`/`sequence`/`identity` (depending on the database and the configuration provided). You don't have to assign its value manually before saving - the value will be generated automatically. ``` import { Entity, PrimaryGeneratedColumn } from "typeorm" @Entity() export class User { @PrimaryGeneratedColumn() id: number } ``` * `@PrimaryGeneratedColumn("uuid")` creates a primary column whose value is generated automatically as a `uuid` - a unique string id. You don't have to assign its value manually before saving - the value will be generated automatically.
``` import { Entity, PrimaryGeneratedColumn } from "typeorm" @Entity() export class User { @PrimaryGeneratedColumn("uuid") id: string } ``` You can have composite primary columns as well: ``` import { Entity, PrimaryColumn } from "typeorm" @Entity() export class User { @PrimaryColumn() firstName: string @PrimaryColumn() lastName: string } ``` When you save entities using `save`, it always tries to find an entity in the database with the given entity id (or ids). If the id/ids are found, it will update this row in the database. If there is no row with the id/ids, a new row will be inserted. To find an entity by id you can use `manager.findOneBy` or `repository.findOneBy`. Example: ``` // find one by id with single primary key const person = await dataSource.manager.findOneBy(Person, { id: 1 }) // or, equivalently, using a repository const personFromRepo = await dataSource.getRepository(Person).findOneBy({ id: 1 }) // find one by id with composite primary keys const user = await dataSource.manager.findOneBy(User, { firstName: "Timber", lastName: "Saw", }) // or, equivalently, using a repository const userFromRepo = await dataSource.getRepository(User).findOneBy({ firstName: "Timber", lastName: "Saw", }) ``` ### Special columns[​](#special-columns "Direct link to Special columns") There are several special column types with additional functionality available: * `@CreateDateColumn` is a special column that is automatically set to the entity's insertion date. You don't need to set this column - it will be automatically set. * `@UpdateDateColumn` is a special column that is automatically set to the entity's update time each time you call `save` of entity manager or repository, or during `upsert` operations when an update occurs. You don't need to set this column - it will be automatically set. * `@DeleteDateColumn` is a special column that is automatically set to the entity's delete time each time you call soft-delete of entity manager or repository. You don't need to set this column - it will be automatically set.
If the `@DeleteDateColumn` is set, the default scope will be "non-deleted". * `@VersionColumn` is a special column that is automatically set to the version of the entity (incremental number) each time you call `save` of entity manager or repository, or during `upsert` operations when an update occurs. You don't need to set this column - it will be automatically set. ## Column types[​](#column-types "Direct link to Column types") TypeORM supports all of the most commonly used database-supported column types. Column types are database-type specific - this provides more flexibility in how your database schema looks. You can specify the column type as the first parameter of `@Column` or in the column options of `@Column`, for example: ``` @Column("int") ``` or ``` @Column({ type: "int" }) ``` If you want to specify additional type parameters you can do it via column options. For example: ``` @Column("varchar", { length: 200 }) ``` > Note about the `bigint` type: the `bigint` column type, used in SQL databases, doesn't fit into the regular `number` type, so TypeORM maps the property to a `string` instead. ### `enum` column type[​](#enum-column-type "Direct link to enum-column-type") The `enum` column type is supported by `postgres` and `mysql`. There are various possible column definitions: Using TypeScript enums: ``` export enum UserRole { ADMIN = "admin", EDITOR = "editor", GHOST = "ghost", } @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column({ type: "enum", enum: UserRole, default: UserRole.GHOST, }) role: UserRole } ``` > Note: String, numeric and heterogeneous enums are supported.
Using an array with enum values: ``` export type UserRoleType = "admin" | "editor" | "ghost" @Entity() export class User { @PrimaryGeneratedColumn() id: number; @Column({ type: "enum", enum: ["admin", "editor", "ghost"], default: "ghost" }) role: UserRoleType } ``` ### `simple-array` column type[​](#simple-array-column-type "Direct link to simple-array-column-type") There is a special column type called `simple-array` which can store primitive array values in a single string column. All values are separated by a comma. For example: ``` @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column("simple-array") names: string[] } ``` ``` const user = new User() user.names = ["Alexander", "Alex", "Sasha", "Shurik"] ``` This will be stored in a single database column as the value `Alexander,Alex,Sasha,Shurik`. When you load data from the database, the names will be returned as an array, just like you stored them. Note that the values you write **MUST NOT** contain commas. ### `simple-json` column type[​](#simple-json-column-type "Direct link to simple-json-column-type") There is a special column type called `simple-json` which can store any value that can be serialized with JSON.stringify. It is very useful when you do not have a json type in your database and you want to store and load objects without any hassle. For example: ``` @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column("simple-json") profile: { name: string; nickname: string } } ``` ``` const user = new User() user.profile = { name: "John", nickname: "Malkovich" } ``` This will be stored in a single database column as the value `{"name":"John","nickname":"Malkovich"}`. When you load data from the database, you will get your object/array/primitive back via JSON.parse. ### Columns with generated values[​](#columns-with-generated-values "Direct link to Columns with generated values") You can create a column with a generated value using the `@Generated` decorator.
For example: ``` @Entity() export class User { @PrimaryColumn() id: number @Column() @Generated("uuid") uuid: string } ``` The `uuid` value will be generated automatically and stored in the database. Besides "uuid", there are also "increment", "identity" (Postgres 10+ only) and "rowid" (CockroachDB only) generated types; however, there are some limitations on some database platforms with this type of generation (for example, some databases can only have one increment column, or some of them require the increment column to be a primary key). ### Vector columns[​](#vector-columns "Direct link to Vector columns") Vector columns are supported on MariaDB/MySQL, Microsoft SQL Server, PostgreSQL (via the [`pgvector`](https://github.com/pgvector/pgvector) extension) and SAP HANA Cloud, enabling you to store and query vector embeddings for similarity search and machine learning applications. TypeORM supports both `vector` and `halfvec` column types across databases: * `vector` - stores vectors as 4-byte floats (single precision) * MariaDB/MySQL: native `vector` type * Microsoft SQL Server: native `vector` type * PostgreSQL: `vector` type, available via `pgvector` extension * SAP HANA Cloud: alias for `real_vector` type * `halfvec` - stores vectors as 2-byte floats (half precision) for memory efficiency * PostgreSQL: `halfvec` type, available via `pgvector` extension * SAP HANA Cloud: alias for `half_vector` type You can specify the number of vector dimensions using the `length` option: ``` @Entity() export class Post { @PrimaryGeneratedColumn() id: number // Vector without specified dimensions @Column("vector") embedding: number[] | Buffer // Vector with 3 dimensions: vector(3) @Column("vector", { length: 3 }) embedding_3d: number[] | Buffer // Half-precision vector with 4 dimensions: halfvec(4) (works on PostgreSQL and SAP HANA only) @Column("halfvec", { length: 4 }) halfvec_embedding: number[] | Buffer } ``` > **Note**: > > * **MariaDB/MySQL**: Vectors are supported since MariaDB 11.7 and MySQL
9 > * **Microsoft SQL Server**: Vector type support requires SQL Server 2025 (17.x) or newer. > * **PostgreSQL**: Vector columns require the `pgvector` extension to be installed. The extension provides the vector data types and similarity operators. > * **SAP HANA**: Vector columns require SAP HANA Cloud (2024Q1+) and a supported version of `@sap/hana-client`. ### Spatial columns[​](#spatial-columns "Direct link to Spatial columns") Microsoft SQLServer, MySQL/MariaDB, PostgreSQL/CockroachDB and SAP HANA all support spatial columns. TypeORM's support varies slightly between databases, particularly in the column names used. MS SQL, MySQL/MariaDB and SAP HANA use geometries in the [well-known text (WKT)](https://en.wikipedia.org/wiki/Well-known_text) format, so geometry columns should be tagged with the `string` type. ``` import { Entity, PrimaryColumn, Column } from "typeorm" @Entity() export class Thing { @PrimaryColumn() id: number @Column("point") point: string @Column("linestring") linestring: string } ... const thing = new Thing() thing.point = "POINT(1 1)" thing.linestring = "LINESTRING(0 0,1 1,2 2)" ``` For Postgres/CockroachDB, see [Postgis Data Types](https://typeorm.io/docs/drivers/postgres.md#spatial-columns) ## Column options[​](#column-options "Direct link to Column options") Column options define additional options for your entity columns. You can specify column options on `@Column`: ``` @Column({ type: "varchar", length: 150, unique: true, // ... }) name: string; ``` List of available options in `ColumnOptions`: * `type: ColumnType` - Column type. One of the types listed [above](#column-types). * `name: string` - Column name in the database table. By default the column name is generated from the name of the property. You can change it by specifying your own name. * `length: number` - Column type's length. For example, if you want to create a `varchar(150)` type, you specify the column type and length options.
* `onUpdate: string` - `ON UPDATE` trigger. Used only in [MySQL](https://dev.mysql.com/doc/refman/5.7/en/timestamp-initialization.html). * `nullable: boolean` - Makes the column `NULL` or `NOT NULL` in the database. By default, the column is `nullable: false`. * `update: boolean` - Indicates if the column value is updated by "save" operations. If false, you'll be able to write this value only the first time you insert the object. Default value is `true`. * `insert: boolean` - Indicates if the column value is set the first time you insert the object. Default value is `true`. * `select: boolean` - Defines whether or not to hide this column by default when making queries. When set to `false`, the column data will not show with a standard query. By default, the column is `select: true`. * `default: string` - Adds a database-level column `DEFAULT` value. * `primary: boolean` - Marks the column as primary. Same as if you used `@PrimaryColumn`. * `unique: boolean` - Marks the column as unique (creates a unique constraint). * `comment: string` - Database's column comment. Not supported by all database types. * `precision: number` - The precision for a decimal (exact numeric) column (applies only for decimal columns), which is the maximum number of digits that are stored for the values. Used in some column types. * `scale: number` - The scale for a decimal (exact numeric) column (applies only for decimal columns), which represents the number of digits to the right of the decimal point and must not be greater than precision. Used in some column types. * `unsigned: boolean` - Puts the `UNSIGNED` attribute onto a numeric column. Used only in MySQL. * `charset: string` - Defines a column character set. Not supported by all database types. * `collation: string` - Defines a column collation. * `enum: string[]|AnyEnum` - Used in the `enum` column type to specify the list of allowed enum values. You can specify an array of values or an enum class. * `enumName: string` - Defines the name for the used enum.
* `asExpression: string` - Generated column expression. Used only in [MySQL](https://dev.mysql.com/doc/refman/5.7/en/create-table-generated-columns.html). * `generatedType: "VIRTUAL"|"STORED"` - Generated column type. Used only in [MySQL](https://dev.mysql.com/doc/refman/5.7/en/create-table-generated-columns.html). * `hstoreType: "object"|"string"` - Return type of `HSTORE` column. Returns value as string or as object. Used only in [Postgres](https://www.postgresql.org/docs/9.6/static/hstore.html). * `array: boolean` - Used for postgres and cockroachdb column types which can be arrays (for example, int\[]). * `transformer: { from(value: DatabaseType): EntityType, to(value: EntityType): DatabaseType }` - Used to marshal properties of arbitrary type `EntityType` into a type `DatabaseType` supported by the database. Arrays of transformers are also supported and will be applied in natural order when writing, and in reverse order when reading. e.g. `[lowercase, encrypt]` will first lowercase the string then encrypt it when writing, and will decrypt then do nothing when reading. * `utc: boolean` - Indicates if date values should be stored and retrieved in the UTC timezone instead of the local timezone. Only applies to the `date` column type. Default value is `false` (uses the local timezone for backward compatibility). Note: most of those column options are RDBMS-specific and aren't available in `MongoDB`. ## Entity inheritance[​](#entity-inheritance "Direct link to Entity inheritance") You can reduce duplication in your code by using entity inheritance.
For example, you have `Photo`, `Question`, `Post` entities: ``` @Entity() export class Photo { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string @Column() size: string } @Entity() export class Question { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string @Column() answersCount: number } @Entity() export class Post { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string @Column() viewCount: number } ``` As you can see, all those entities have common columns: `id`, `title`, `description`. To reduce duplication and produce a better abstraction we can create a base class called `Content` for them: ``` export abstract class Content { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string } @Entity() export class Photo extends Content { @Column() size: string } @Entity() export class Question extends Content { @Column() answersCount: number } @Entity() export class Post extends Content { @Column() viewCount: number } ``` All columns (relations, embeds, etc.) from parent entities (a parent can extend another entity as well) will be inherited and created in the final entities. ## Tree entities[​](#tree-entities "Direct link to Tree entities") TypeORM supports the Adjacency list and Closure table patterns of storing tree structures. ### Adjacency list[​](#adjacency-list "Direct link to Adjacency list") Adjacency list is a simple model with self-referencing. The benefit of this approach is simplicity; the drawback is that you can't load a big tree at once because of join limitations.
Example: ``` import { Entity, Column, PrimaryGeneratedColumn, ManyToOne, OneToMany, } from "typeorm" @Entity() export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string @Column() description: string @ManyToOne((type) => Category, (category) => category.children) parent: Category @OneToMany((type) => Category, (category) => category.parent) children: Category[] } ``` ### Closure table[​](#closure-table "Direct link to Closure table") A closure table stores relations between parent and child in a separate table in a special way. It's efficient in both reads and writes. To learn more about closure table take a look at [this awesome presentation by Bill Karwin](https://www.slideshare.net/billkarwin/models-for-hierarchical-data). Example: ``` import { Entity, Tree, Column, PrimaryGeneratedColumn, TreeChildren, TreeParent, TreeLevelColumn, } from "typeorm" @Entity() @Tree("closure-table") export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string @Column() description: string @TreeChildren() children: Category[] @TreeParent() parent: Category @TreeLevelColumn() level: number } ``` --- # Entity Inheritance ## Concrete Table Inheritance[​](#concrete-table-inheritance "Direct link to Concrete Table Inheritance") You can reduce duplication in your code by using entity inheritance patterns. The simplest and the most effective is concrete table inheritance. 
For example, you have `Photo`, `Question`, `Post` entities: ``` @Entity() export class Photo { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string @Column() size: string } ``` ``` @Entity() export class Question { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string @Column() answersCount: number } ``` ``` @Entity() export class Post { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string @Column() viewCount: number } ``` As you can see, all those entities have common columns: `id`, `title`, `description`. To reduce duplication and produce a better abstraction we can create a base class called `Content` for them: ``` export abstract class Content { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string } ``` ``` @Entity() export class Photo extends Content { @Column() size: string } ``` ``` @Entity() export class Question extends Content { @Column() answersCount: number } ``` ``` @Entity() export class Post extends Content { @Column() viewCount: number } ``` All columns (relations, embeds, etc.) from parent entities (a parent can extend another entity as well) will be inherited and created in the final entities. This example will create 3 tables - `photo`, `question` and `post`. ## Single Table Inheritance[​](#single-table-inheritance "Direct link to Single Table Inheritance") TypeORM also supports single table inheritance. Single table inheritance is a pattern where you have multiple classes with their own properties, but in the database they are stored in the same table.
``` @Entity() @TableInheritance({ column: { type: "varchar", name: "type" } }) export class Content { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() description: string } ``` ``` @ChildEntity() export class Photo extends Content { @Column() size: string } ``` ``` @ChildEntity() export class Question extends Content { @Column() answersCount: number } ``` ``` @ChildEntity() export class Post extends Content { @Column() viewCount: number } ``` This will create a single table called `content` and all instances of photos, questions and posts will be saved into this table. ## Using embeddeds[​](#using-embeddeds "Direct link to Using embeddeds") There is an amazing way to reduce duplication in your app (using composition over inheritance) by using `embedded columns`. Read more about embedded entities [here](https://typeorm.io/docs/entity/embedded-entities.md). --- # Separating Entity Definition ## Defining Schemas[​](#defining-schemas "Direct link to Defining Schemas") You can define an entity and its columns right in the model, using decorators. But some people prefer to define an entity and its columns inside separate files which are called "entity schemas" in TypeORM. 
Simple definition example: ``` import { EntitySchema } from "typeorm" export const CategoryEntity = new EntitySchema({ name: "category", columns: { id: { type: Number, primary: true, generated: true, }, name: { type: String, }, }, }) ``` Example with relations: ``` import { EntitySchema } from "typeorm" export const PostEntity = new EntitySchema({ name: "post", columns: { id: { type: Number, primary: true, generated: true, }, title: { type: String, }, text: { type: String, }, }, relations: { categories: { type: "many-to-many", target: "category", // CategoryEntity }, }, }) ``` Complex example: ``` import { EntitySchema } from "typeorm" export const PersonSchema = new EntitySchema({ name: "person", columns: { id: { primary: true, type: "int", generated: "increment", }, firstName: { type: String, length: 30, }, lastName: { type: String, length: 50, nullable: false, }, age: { type: Number, nullable: false, }, countryCode: { type: String, length: 2, foreignKey: { target: "countries", // CountryEntity inverseSide: "code", }, }, cityId: { type: Number, foreignKey: { target: "cities", // CityEntity }, }, }, checks: [ { expression: `"firstName" <> 'John' AND "lastName" <> 'Doe'` }, { expression: `"age" > 18` }, ], indices: [ { name: "IDX_TEST", unique: true, columns: ["firstName", "lastName"], }, ], uniques: [ { name: "UNIQUE_TEST", columns: ["firstName", "lastName"], }, ], foreignKeys: [ { target: "cities", // CityEntity columnNames: ["cityId", "countryCode"], referencedColumnNames: ["id", "countryCode"], }, ], }) ``` If you want to make your entity type-safe, you can define a model and pass it as a generic type parameter in the schema definition: ``` import { EntitySchema } from "typeorm" export interface Category { id: number name: string } export const CategoryEntity = new EntitySchema<Category>({ name: "category", columns: { id: { type: Number, primary: true, generated: true, }, name: { type: String, }, }, }) ``` ## Extending Schemas[​](#extending-schemas "Direct link to Extending Schemas") When using the
`Decorator` approach, it is easy to `extract` basic columns into an abstract class and simply extend it. For example, your `id`, `createdAt` and `updatedAt` columns may be defined in such a `BaseEntity`. For more details, see the documentation on [concrete table inheritance](https://typeorm.io/docs/entity/entity-inheritance.md#concrete-table-inheritance). When using the `EntitySchema` approach, this is not possible. However, you can use the `Spread Operator` (`...`) to your advantage. Reconsider the `Category` example from above. You may want to `extract` basic column descriptions and reuse them across your other schemas. This may be done in the following way: ``` import { EntitySchemaColumnOptions } from "typeorm" export const BaseColumnSchemaPart = { id: { type: Number, primary: true, generated: true, } as EntitySchemaColumnOptions, createdAt: { name: "created_at", type: "timestamp with time zone", createDate: true, } as EntitySchemaColumnOptions, updatedAt: { name: "updated_at", type: "timestamp with time zone", updateDate: true, } as EntitySchemaColumnOptions, } ``` Now you can use the `BaseColumnSchemaPart` in your other schema models, like this: ``` export const CategoryEntity = new EntitySchema({ name: "category", columns: { ...BaseColumnSchemaPart, // the CategoryEntity now has the defined id, createdAt, updatedAt columns!
// in addition, the following NEW fields are defined name: { type: String, }, }, }) ``` You can use embedded entities in schema models, like this: ``` export interface Name { first: string last: string } export const NameEntitySchema = new EntitySchema({ name: "name", columns: { first: { type: "varchar", }, last: { type: "varchar", }, }, }) export interface User { id: string name: Name isActive: boolean } export const UserEntitySchema = new EntitySchema({ name: "user", columns: { id: { primary: true, generated: "uuid", type: "uuid", }, isActive: { type: "boolean", }, }, embeddeds: { name: { schema: NameEntitySchema, prefix: "name_", }, }, }) ``` Be sure to also add the `extended` columns to the `Category` interface (e.g., via `export interface Category extends BaseEntity`). ### Single Table Inheritance[​](#single-table-inheritance "Direct link to Single Table Inheritance") In order to use [Single Table Inheritance](https://typeorm.io/docs/entity/entity-inheritance.md#single-table-inheritance): 1. Add the `inheritance` option to the **parent** class schema, specifying the inheritance pattern ("STI") and the **discriminator** column, which will store the name of the *child* class on each row 2.
Set the `type: "entity-child"` option for all **children** classes' schemas, while extending the *parent* class columns using the spread operator syntax described above ``` // entity.ts export abstract class Base { id!: number type!: string createdAt!: Date updatedAt!: Date } export class A extends Base { constructor(public a: boolean) { super() } } export class B extends Base { constructor(public b: number) { super() } } export class C extends Base { constructor(public c: string) { super() } } ``` ``` // schema.ts const BaseSchema = new EntitySchema({ target: Base, name: "Base", columns: { id: { type: Number, primary: true, generated: "increment", }, type: { type: String, }, createdAt: { type: Date, createDate: true, }, updatedAt: { type: Date, updateDate: true, }, }, // NEW: Inheritance options inheritance: { pattern: "STI", column: "type", }, }) const ASchema = new EntitySchema({ target: A, name: "A", type: "entity-child", // When saving instances of 'A', the "type" column will have the value // specified on the 'discriminatorValue' property discriminatorValue: "my-custom-discriminator-value-for-A", columns: { ...BaseSchema.options.columns, a: { type: Boolean, }, }, }) const BSchema = new EntitySchema({ target: B, name: "B", type: "entity-child", discriminatorValue: undefined, // Defaults to the class name (e.g. "B") columns: { ...BaseSchema.options.columns, b: { type: Number, }, }, }) const CSchema = new EntitySchema({ target: C, name: "C", type: "entity-child", discriminatorValue: "my-custom-discriminator-value-for-C", columns: { ...BaseSchema.options.columns, c: { type: String, }, }, }) ``` ## Using Schemas to Query / Insert Data[​](#using-schemas-to-query--insert-data "Direct link to Using Schemas to Query / Insert Data") Of course, you can use the defined schemas in your repositories or entity manager as you would use the decorators. 
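The schema composition used above relies on nothing TypeORM-specific: an `EntitySchema` keeps its column options as a plain object, so `...BaseSchema.options.columns` is ordinary object spread. A minimal, typeorm-free sketch (the option objects below are hypothetical stand-ins for real `EntitySchemaColumnOptions`):

```typescript
// Hypothetical stand-ins for EntitySchemaColumnOptions: plain objects are enough
// to show how a child schema inherits its parent's column definitions via spread.
const baseColumns = {
    id: { type: "int", primary: true, generated: "increment" },
    type: { type: "varchar" }, // the STI discriminator column
}

const aColumns = {
    ...baseColumns, // copy every column definition from the parent schema
    a: { type: "boolean" }, // add the child-specific column
}

console.log(Object.keys(aColumns).join(",")) // id,type,a
```

Because this is plain object spread, later keys win: a child schema could also override one of the parent's column options by redefining that column after the spread.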
Consider the previously defined `Category` example (with its `Interface` and `CategoryEntity` schema) in order to get some data or manipulate the database. ``` // request data const categoryRepository = dataSource.getRepository(CategoryEntity) const category = await categoryRepository.findOneBy({ id: 1, }) // category is properly typed! // insert a new category into the database const categoryDTO = { // note that the ID is autogenerated; see the schema above name: "new category", } const newCategory = await categoryRepository.save(categoryDTO) ``` --- # Tree Entities TypeORM supports the Adjacency list and Closure table patterns for storing tree structures. To learn more about the hierarchy table take a look at [this awesome presentation by Bill Karwin](https://www.slideshare.net/billkarwin/models-for-hierarchical-data). ## Adjacency list[​](#adjacency-list "Direct link to Adjacency list") Adjacency list is a simple model with self-referencing. Note that TreeRepository doesn't support Adjacency list. The benefit of this approach is simplicity, a drawback is that you can't load big trees all at once because of join limitations. To learn more about the benefits and use of Adjacency Lists look at [this article by Matthew Schinckel](http://schinckel.net/2014/09/13/long-live-adjacency-lists/). Example: ``` import { Entity, Column, PrimaryGeneratedColumn, ManyToOne, OneToMany, } from "typeorm" @Entity() export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string @Column() description: string @ManyToOne((type) => Category, (category) => category.children) parent: Category @OneToMany((type) => Category, (category) => category.parent) children: Category[] } ``` ## Nested set[​](#nested-set "Direct link to Nested set") Nested set is another pattern of storing tree structures in the database. It is very efficient for reads, but bad for writes. You cannot have multiple roots in the nested set. 
Example: ``` import { Entity, Tree, Column, PrimaryGeneratedColumn, TreeChildren, TreeParent, TreeLevelColumn, } from "typeorm" @Entity() @Tree("nested-set") export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string @TreeChildren() children: Category[] @TreeParent() parent: Category } ``` ## Materialized Path (aka Path Enumeration)[​](#materialized-path-aka-path-enumeration "Direct link to Materialized Path (aka Path Enumeration)") Materialized Path (also called Path Enumeration) is another pattern of storing tree structures in the database. It is simple and effective. Example: ``` import { Entity, Tree, Column, PrimaryGeneratedColumn, TreeChildren, TreeParent, TreeLevelColumn, } from "typeorm" @Entity() @Tree("materialized-path") export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string @TreeChildren() children: Category[] @TreeParent() parent: Category } ``` ## Closure table[​](#closure-table "Direct link to Closure table") Closure table stores relations between parent and child in a separate table in a special way. It's efficient in both reading and writing. Example: ``` import { Entity, Tree, Column, PrimaryGeneratedColumn, TreeChildren, TreeParent, TreeLevelColumn, } from "typeorm" @Entity() @Tree("closure-table") export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string @TreeChildren() children: Category[] @TreeParent() parent: Category } ``` You can specify the closure table name and/or closure table column names by passing the optional `options` parameter to `@Tree("closure-table", options)`. `ancestorColumnName` and `descendantColumnName` are callback functions, which receive the primary column's metadata and return the column's name.
``` @Tree("closure-table", { closureTableName: "category_closure", ancestorColumnName: (column) => "ancestor_" + column.propertyName, descendantColumnName: (column) => "descendant_" + column.propertyName, }) ``` ## Working with tree entities[​](#working-with-tree-entities "Direct link to Working with tree entities") To bind tree entities to each other, set the parent in the child entity and then save them. For example: ``` const a1 = new Category() a1.name = "a1" await dataSource.manager.save(a1) const a11 = new Category() a11.name = "a11" a11.parent = a1 await dataSource.manager.save(a11) const a12 = new Category() a12.name = "a12" a12.parent = a1 await dataSource.manager.save(a12) const a111 = new Category() a111.name = "a111" a111.parent = a11 await dataSource.manager.save(a111) const a112 = new Category() a112.name = "a112" a112.parent = a11 await dataSource.manager.save(a112) ``` To load such a tree use `TreeRepository`: ``` const trees = await dataSource.manager.getTreeRepository(Category).findTrees() ``` `trees` will be the following: ``` [ { "id": 1, "name": "a1", "children": [ { "id": 2, "name": "a11", "children": [ { "id": 4, "name": "a111" }, { "id": 5, "name": "a112" } ] }, { "id": 3, "name": "a12" } ] } ] ``` There are other special methods to work with tree entities through `TreeRepository`: * `findTrees` - Returns all trees in the database with all their children, children of children, etc. ``` const treeCategories = await dataSource.manager .getTreeRepository(Category) .findTrees() // returns root categories with sub categories inside const treeCategoriesWithLimitedDepth = await dataSource.manager .getTreeRepository(Category) .findTrees({ depth: 2 }) // returns root categories with sub categories inside, up to depth 2 ``` * `findRoots` - Roots are entities that have no ancestors. Finds them all. Does not load children's leaves.
``` const rootCategories = await dataSource.manager .getTreeRepository(Category) .findRoots() // returns root categories without sub categories inside ``` * `findDescendants` - Gets all children (descendants) of the given entity. Returns them all in a flat array. ``` const children = await dataSource.manager .getTreeRepository(Category) .findDescendants(parentCategory) // returns all direct subcategories (without its nested categories) of a parentCategory ``` * `findDescendantsTree` - Gets all children (descendants) of the given entity. Returns them in a tree - nested into each other. ``` const childrenTree = await repository.findDescendantsTree(parentCategory) // returns all direct subcategories (with its nested categories) of a parentCategory const childrenTreeWithLimitedDepth = await repository.findDescendantsTree( parentCategory, { depth: 2 }, ) // returns all direct subcategories (with its nested categories) of a parentCategory, up to depth 2 ``` * `createDescendantsQueryBuilder` - Creates a query builder used to get descendants of the entities in a tree. ``` const children = await repository .createDescendantsQueryBuilder( "category", "categoryClosure", parentCategory, ) .andWhere("category.type = 'secondary'") .getMany() ``` * `countDescendants` - Gets the number of descendants of the entity. ``` const childrenCount = await dataSource.manager .getTreeRepository(Category) .countDescendants(parentCategory) ``` * `findAncestors` - Gets all parents (ancestors) of the given entity. Returns them all in a flat array. ``` const parents = await repository.findAncestors(childCategory) // returns all direct childCategory's parent categories (without "parent of parents") ``` * `findAncestorsTree` - Gets all parents (ancestors) of the given entity. Returns them in a tree - nested into each other. 
``` const parentsTree = await dataSource.manager .getTreeRepository(Category) .findAncestorsTree(childCategory) // returns all direct childCategory's parent categories (with "parent of parents") ``` * `createAncestorsQueryBuilder` - Creates a query builder used to get the ancestors of the entities in a tree. ``` const parents = await repository .createAncestorsQueryBuilder("category", "categoryClosure", childCategory) .andWhere("category.type = 'secondary'") .getMany() ``` * `countAncestors` - Gets the number of ancestors of the entity. ``` const parentsCount = await dataSource.manager .getTreeRepository(Category) .countAncestors(childCategory) ``` For the following methods, options can be passed: * findTrees * findRoots * findDescendants * findDescendantsTree * findAncestors * findAncestorsTree The following options are available: * `relations` - Indicates which relations of the entity should be loaded (simplified left join form). Examples: ``` const treeCategoriesWithRelations = await dataSource.manager .getTreeRepository(Category) .findTrees({ relations: ["sites"], }) // automatically joins the sites relation const parentsWithRelations = await dataSource.manager .getTreeRepository(Category) .findAncestors(childCategory, { relations: ["members"], }) // returns all direct childCategory's parent categories (without "parent of parents") and joins the 'members' relation ``` --- # View Entities ## What is a ViewEntity?[​](#what-is-a-viewentity "Direct link to What is a ViewEntity?") A view entity is a class that maps to a database view. You can create a view entity by defining a new class and marking it with `@ViewEntity()`: `@ViewEntity()` accepts the following options: * `name` - view name. If not specified, the view name is generated from the entity class name. * `database` - database name in the selected DB server. * `schema` - schema name. * `expression` - view definition. **Required parameter**. * `dependsOn` - List of other views on which the current view depends.
If your view uses another view in its definition, you can add it here so that [migrations](https://typeorm.io/docs/migrations/why.md) are generated in the correct order. `expression` can be a string with properly escaped columns and tables, depending on the database used (Postgres in this example): ``` @ViewEntity({ expression: ` SELECT "post"."id" AS "id", "post"."name" AS "name", "category"."name" AS "categoryName" FROM "post" "post" LEFT JOIN "category" "category" ON "post"."categoryId" = "category"."id" ` }) ``` or a function that returns a QueryBuilder instance: ``` @ViewEntity({ expression: (dataSource: DataSource) => dataSource .createQueryBuilder() .select("post.id", "id") .addSelect("post.name", "name") .addSelect("category.name", "categoryName") .from(Post, "post") .leftJoin(Category, "category", "category.id = post.categoryId") }) ``` **Note:** parameter binding is not supported due to driver limitations. Use literal parameters instead. ``` @ViewEntity({ expression: (dataSource: DataSource) => dataSource .createQueryBuilder() .select("post.id", "id") .addSelect("post.name", "name") .addSelect("category.name", "categoryName") .from(Post, "post") .leftJoin(Category, "category", "category.id = post.categoryId") .where("category.name = :name", { name: "Cars" }) // <-- this is wrong .where("category.name = 'Cars'") // <-- and this is right }) ``` Each view entity must be registered in your data source options: ``` import { DataSource } from "typeorm" import { UserView } from "./entity/UserView" const dataSource = new DataSource({ type: "mysql", host: "localhost", port: 3306, username: "test", password: "test", database: "test", entities: [UserView], }) ``` ## View Entity columns[​](#view-entity-columns "Direct link to View Entity columns") To map data from the view into the correct entity columns, you must mark entity columns with the `@ViewColumn()` decorator and specify these columns as select statement aliases.
Example with a string expression definition: ``` import { ViewEntity, ViewColumn } from "typeorm" @ViewEntity({ expression: ` SELECT "post"."id" AS "id", "post"."name" AS "name", "category"."name" AS "categoryName" FROM "post" "post" LEFT JOIN "category" "category" ON "post"."categoryId" = "category"."id" `, }) export class PostCategory { @ViewColumn() id: number @ViewColumn() name: string @ViewColumn() categoryName: string } ``` Example using QueryBuilder: ``` import { DataSource, ViewEntity, ViewColumn } from "typeorm" @ViewEntity({ expression: (dataSource: DataSource) => dataSource .createQueryBuilder() .select("post.id", "id") .addSelect("post.name", "name") .addSelect("category.name", "categoryName") .from(Post, "post") .leftJoin(Category, "category", "category.id = post.categoryId"), }) export class PostCategory { @ViewColumn() id: number @ViewColumn() name: string @ViewColumn() categoryName: string } ``` ## View Column options[​](#view-column-options "Direct link to View Column options") View Column options define additional options for your view entity columns, similar to [column options](https://typeorm.io/docs/entity/entities.md#column-options) for regular entities. You can specify view column options in `@ViewColumn`: ``` @ViewColumn({ name: "postName", // ... }) name: string; ``` List of available options in `ViewColumnOptions`: * `name: string` - Column name in the database view. * `transformer: { from(value: DatabaseType): EntityType, to(value: EntityType): DatabaseType }` - Used to unmarshal properties of arbitrary type `DatabaseType` supported by the database into a type `EntityType`. Arrays of transformers are also supported and are applied in reverse order when reading. Note that because database views are read-only, `transformer.to(value)` will never be used. ## Materialized View Indices[​](#materialized-view-indices "Direct link to Materialized View Indices") TypeORM supports creating indices on materialized views when using `PostgreSQL`.
``` @ViewEntity({ materialized: true, expression: (dataSource: DataSource) => dataSource .createQueryBuilder() .select("post.id", "id") .addSelect("post.name", "name") .addSelect("category.name", "categoryName") .from(Post, "post") .leftJoin(Category, "category", "category.id = post.categoryId"), }) export class PostCategory { @ViewColumn() id: number @Index() @ViewColumn() name: string @Index("catname-idx") @ViewColumn() categoryName: string } ``` However, `unique` is currently the only supported option for indices in materialized views. The rest of the index options will be ignored. ``` @Index("name-idx", { unique: true }) @ViewColumn() name: string ``` ## Complete example[​](#complete-example "Direct link to Complete example") Let's create two entities and a view containing aggregated data from these entities: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string } ``` ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToOne, JoinColumn, } from "typeorm" import { Category } from "./Category" @Entity() export class Post { @PrimaryGeneratedColumn() id: number @Column() name: string @Column() categoryId: number @ManyToOne(() => Category) @JoinColumn({ name: "categoryId" }) category: Category } ``` ``` import { ViewEntity, ViewColumn, DataSource } from "typeorm" @ViewEntity({ expression: (dataSource: DataSource) => dataSource .createQueryBuilder() .select("post.id", "id") .addSelect("post.name", "name") .addSelect("category.name", "categoryName") .from(Post, "post") .leftJoin(Category, "category", "category.id = post.categoryId"), }) export class PostCategory { @ViewColumn() id: number @ViewColumn() name: string @ViewColumn() categoryName: string } ``` Then fill these tables with data and request all data from the PostCategory view: ``` import { Category } from "./entity/Category" import { Post } from "./entity/Post" import { PostCategory } from
"./entity/PostCategory" const category1 = new Category() category1.name = "Cars" await dataSource.manager.save(category1) const category2 = new Category() category2.name = "Airplanes" await dataSource.manager.save(category2) const post1 = new Post() post1.name = "About BMW" post1.categoryId = category1.id await dataSource.manager.save(post1) const post2 = new Post() post2.name = "About Boeing" post2.categoryId = category2.id await dataSource.manager.save(post2) const postCategories = await dataSource.manager.find(PostCategory) const postCategory = await dataSource.manager.findOneBy(PostCategory, { id: 1 }) ``` the result in `postCategories` will be: ``` [ PostCategory { id: 1, name: 'About BMW', categoryName: 'Cars' }, PostCategory { id: 2, name: 'About Boeing', categoryName: 'Airplanes' } ] ``` and in `postCategory`: ``` PostCategory { id: 1, name: 'About BMW', categoryName: 'Cars' } ``` --- # Getting Started TypeORM is an [ORM](https://en.wikipedia.org/wiki/Object-relational_mapping) that can run in Node.js, Browser, Cordova, Ionic, React Native, NativeScript, Expo, and Electron platforms and can be used with TypeScript and JavaScript (ES2021). Its goal is to always support the latest JavaScript features and provide additional features that help you to develop any kind of application that uses databases - from small applications with a few tables to large-scale enterprise applications with multiple databases. 
TypeORM supports more databases than any other JS/TS ORM: [Google Spanner](https://typeorm.io/docs/drivers/google-spanner.md), [Microsoft SqlServer](https://typeorm.io/docs/drivers/microsoft-sqlserver.md), [MongoDB](https://typeorm.io/docs/drivers/mongodb.md), [MySQL/MariaDB](https://typeorm.io/docs/drivers/mysql.md), [Oracle](https://typeorm.io/docs/drivers/oracle.md), [Postgres](https://typeorm.io/docs/drivers/postgres.md), [SAP HANA](https://typeorm.io/docs/drivers/sap.md) and [SQLite](https://typeorm.io/docs/drivers/sqlite.md), as well as derived databases and different drivers. TypeORM supports both [Active Record](https://typeorm.io/docs/guides/active-record-data-mapper.md#what-is-the-active-record-pattern) and [Data Mapper](https://typeorm.io/docs/guides/active-record-data-mapper.md#what-is-the-data-mapper-pattern) patterns, unlike most other JavaScript ORMs, which means you can write high-quality, loosely coupled, scalable, maintainable applications in the most productive way. TypeORM is highly influenced by other ORMs, such as [Hibernate](http://hibernate.org/orm/), [Doctrine](http://www.doctrine-project.org/) and [Entity Framework](https://www.asp.net/entity-framework). ## Features[​](#features "Direct link to Features") * Supports both [DataMapper](https://typeorm.io/docs/guides/active-record-data-mapper.md#what-is-the-data-mapper-pattern) and [ActiveRecord](https://typeorm.io/docs/guides/active-record-data-mapper.md#what-is-the-active-record-pattern) (your choice). * Entities and columns. * Database-specific column types. * Entity manager. * Repositories and custom repositories. * Clean object-relational model. * Associations (relations). * Eager and lazy relations. * Unidirectional, bidirectional, and self-referenced relations. * Supports multiple inheritance patterns. * Cascades. * Indices. * Transactions. * [Migrations](https://typeorm.io/docs/migrations/why.md) with automatic generation. * Connection pooling. * Replication.
* Using multiple database instances. * Working with multiple database types. * Cross-database and cross-schema queries. * Elegant-syntax, flexible and powerful QueryBuilder. * Left and inner joins. * Proper pagination for queries using joins. * Query caching. * Streaming raw results. * Logging. * Listeners and subscribers (hooks). * Supports closure table pattern. * Schema declaration in models or separate configuration files. * Supports MySQL / MariaDB / Postgres / CockroachDB / SQLite / Microsoft SQL Server / Oracle / SAP Hana / sql.js. * Supports MongoDB NoSQL database. * Works in Node.js / Browser / Ionic / Cordova / React Native / NativeScript / Expo / Electron platforms. * TypeScript and JavaScript support. * ESM and CommonJS support. * Produced code is performant, flexible, clean, and maintainable. * Follows all possible best practices. * CLI. And more... With TypeORM, your models look like this: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() age: number } ``` And your domain logic looks like this: ``` const userRepository = AppDataSource.getRepository(User) const user = new User() user.firstName = "Timber" user.lastName = "Saw" user.age = 25 await userRepository.save(user) const allUsers = await userRepository.find() const firstUser = await userRepository.findOneBy({ id: 1, }) // find by id const timber = await userRepository.findOneBy({ firstName: "Timber", lastName: "Saw", }) // find by firstName and lastName await userRepository.remove(timber) ``` Alternatively, if you prefer to use the `ActiveRecord` implementation, you can use it as well: ``` import { Entity, PrimaryGeneratedColumn, Column, BaseEntity } from "typeorm" @Entity() export class User extends BaseEntity { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() age: number } ``` And your domain 
logic will look this way: ``` const user = new User() user.firstName = "Timber" user.lastName = "Saw" user.age = 25 await user.save() const allUsers = await User.find() const firstUser = await User.findOneBy({ id: 1, }) const timber = await User.findOneBy({ firstName: "Timber", lastName: "Saw", }) await timber.remove() ``` ## Installation[​](#installation "Direct link to Installation") 1. Install the npm package: `npm install typeorm` 2. You need to install `reflect-metadata` shim: `npm install reflect-metadata` and import it somewhere in the global place of your app (for example in `app.ts`): `import "reflect-metadata"` 3. You may need to install node typings: `npm install @types/node --save-dev` 4. Install a database driver: see the documentation for each particular driver: [mongodb](https://typeorm.io/docs/drivers/mongodb.md#installation), [mssql](https://typeorm.io/docs/drivers/microsoft-sqlserver.md#installation), [mysql/mariadb](https://typeorm.io/docs/drivers/mysql.md#installation), [oracle](https://typeorm.io/docs/drivers/oracle.md#installation), [postgres](https://typeorm.io/docs/drivers/postgres.md#installation), [sap](https://typeorm.io/docs/drivers/sap.md#installation), [spanner](https://typeorm.io/docs/drivers/google-spanner.md#installation), [sqlite](https://typeorm.io/docs/drivers/sqlite.md#installation). ### TypeScript configuration[​](#typescript-configuration "Direct link to TypeScript configuration") Also, make sure you are using TypeScript version **4.5** or higher, and you have enabled the following settings in `tsconfig.json`: ``` "emitDecoratorMetadata": true, "experimentalDecorators": true, ``` ## Quick Start[​](#quick-start "Direct link to Quick Start") The quickest way to get started with TypeORM is to use its CLI commands to generate a starter project. Quick start works only if you are using TypeORM in a Node.js application. If you are using other platforms, proceed to the [step-by-step guide](#step-by-step-guide). 
To create a new project using CLI, run the following command: ``` npx typeorm init --name MyProject --database postgres ``` Where `name` is the name of your project and `database` is the database you'll use. Database can be one of the following values: `mysql`, `mariadb`, `postgres`, `cockroachdb`, `sqlite`, `mssql`, `sap`, `spanner`, `oracle`, `mongodb`, `cordova`, `react-native`, `expo`, `nativescript`. This command will generate a new project in the `MyProject` directory with the following files: ``` MyProject ├── src // place of your TypeScript code │ ├── entity // place where your entities (database models) are stored │ │ └── User.ts // sample entity │ ├── migration // place where your migrations are stored │ ├── data-source.ts // data source and all connection configuration │ └── index.ts // start point of your application ├── .gitignore // standard gitignore file ├── package.json // node module dependencies ├── README.md // simple readme file └── tsconfig.json // TypeScript compiler options ``` > You can also run `typeorm init` on an existing node project, but be careful - it may override some files you already have. The next step is to install new project dependencies: ``` cd MyProject npm install ``` After you have all dependencies installed, edit the `data-source.ts` file and put your own database connection configuration options in there: ``` export const AppDataSource = new DataSource({ type: "postgres", host: "localhost", port: 5432, username: "test", password: "test", database: "test", synchronize: true, logging: true, entities: [Post, Category], subscribers: [], migrations: [], }) ``` Particularly, most of the time you'll only need to configure `host`, `username`, `password`, `database` and maybe `port` options. Once you finish with configuration and all node modules are installed, you can run your application: ``` npm start ``` That's it, your application should successfully run and insert a new user into the database. 
You can continue to work with this project and integrate other modules you need and start creating more entities. > You can generate an ESM project by running the `npx typeorm init --name MyProject --database postgres --module esm` command. > You can generate an even more advanced project with Express installed by running the `npx typeorm init --name MyProject --database mysql --express` command. > You can generate a docker-compose file by running the `npx typeorm init --name MyProject --database postgres --docker` command. ## Step-by-Step Guide[​](#step-by-step-guide "Direct link to Step-by-Step Guide") What do you expect from an ORM? First, you expect it to create database tables for you and to find / insert / update / delete your data without the pain of writing lots of hard-to-maintain SQL queries. This guide will show you how to set up TypeORM from scratch and make it do what you expect from an ORM. ### Create a model[​](#create-a-model "Direct link to Create a model") Working with a database starts with creating tables. How do you tell TypeORM to create a database table? The answer is: through models. The models in your app are your database tables. For example, you have a `Photo` model: ``` export class Photo { id: number name: string description: string filename: string views: number isPublished: boolean } ``` And you want to store photos in your database. To store things in the database, first you need a database table, and database tables are created from your models. Not all models, but only those you define as *entities*. ### Create an entity[​](#create-an-entity "Direct link to Create an entity") An *entity* is a model decorated with the `@Entity` decorator. A database table will be created for such models. You work with entities everywhere in TypeORM. You can load/insert/update/remove and perform other operations with them.
Let's make our `Photo` model an entity: ``` import { Entity } from "typeorm" @Entity() export class Photo { id: number name: string description: string filename: string views: number isPublished: boolean } ``` Now, a database table will be created for the `Photo` entity, and we'll be able to work with it anywhere in our app. We have created a database table, but what table can exist without columns? Let's create a few columns in our database table. ### Adding table columns[​](#adding-table-columns "Direct link to Adding table columns") To add database columns, decorate the entity properties you want to turn into columns with the `@Column` decorator. ``` import { Entity, Column } from "typeorm" @Entity() export class Photo { @Column() id: number @Column() name: string @Column() description: string @Column() filename: string @Column() views: number @Column() isPublished: boolean } ``` Now `id`, `name`, `description`, `filename`, `views`, and `isPublished` columns will be added to the `photo` table. Column types in the database are inferred from the property types you used, e.g. `number` will be converted into `integer`, `string` into `varchar`, `boolean` into `bool`, etc. But you can use any column type your database supports by explicitly specifying the column type in the `@Column` decorator. We generated a database table with columns, but there is one thing left. Each database table must have a column with a primary key. ### Creating a primary column[​](#creating-a-primary-column "Direct link to Creating a primary column") Each entity **must** have at least one primary key column. This is a requirement, and you can't avoid it. To make a column a primary key, you need to use the `@PrimaryColumn` decorator.
``` import { Entity, Column, PrimaryColumn } from "typeorm" @Entity() export class Photo { @PrimaryColumn() id: number @Column() name: string @Column() description: string @Column() filename: string @Column() views: number @Column() isPublished: boolean } ``` ### Creating an auto-generated column[​](#creating-an-auto-generated-column "Direct link to Creating an auto-generated column") Now, let's say you want your id column to be auto-generated (this is known as auto-increment / sequence / serial / generated identity column). To do that, you need to change the `@PrimaryColumn` decorator to a `@PrimaryGeneratedColumn` decorator: ``` import { Entity, Column, PrimaryGeneratedColumn } from "typeorm" @Entity() export class Photo { @PrimaryGeneratedColumn() id: number @Column() name: string @Column() description: string @Column() filename: string @Column() views: number @Column() isPublished: boolean } ``` ### Column data types[​](#column-data-types "Direct link to Column data types") Next, let's fix our data types. By default, the string is mapped to a varchar(255)-like type (depending on the database type). The number is mapped to an integer-like type (depending on the database type). We don't want all our columns to be limited to varchars or integers. Let's set up the correct data types: ``` import { Entity, Column, PrimaryGeneratedColumn } from "typeorm" @Entity() export class Photo { @PrimaryGeneratedColumn() id: number @Column({ length: 100, }) name: string @Column("text") description: string @Column() filename: string @Column("double") views: number @Column() isPublished: boolean } ``` Column types are database-specific. You can set any column type your database supports. More information on supported column types can be found [here](https://typeorm.io/docs/entity/entities.md#column-types). 
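As a further illustration (a sketch, not part of the tutorial's entities), an explicit type can be combined with other column options such as defaults. The exact set of supported type names depends on your database; the `enum` type shown here is supported on Postgres and MySQL, for example:

```typescript
import { Entity, Column, PrimaryGeneratedColumn } from "typeorm"

// Hypothetical entity showing a few more explicit column options.
@Entity()
export class Thumbnail {
    @PrimaryGeneratedColumn()
    id: number

    // explicit type plus a default value
    @Column({ type: "int", default: 0 })
    views: number

    // enum column (Postgres / MySQL); other databases need a different type
    @Column({ type: "enum", enum: ["landscape", "portrait"], default: "landscape" })
    orientation: string
}
```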
### Creating a new `DataSource`[​](#creating-a-new-datasource "Direct link to creating-a-new-datasource") Now that our entity is created, let's create an `index.ts` file and set up our `DataSource` there: ``` import "reflect-metadata" import { DataSource } from "typeorm" import { Photo } from "./entity/Photo" const AppDataSource = new DataSource({ type: "postgres", host: "localhost", port: 5432, username: "root", password: "admin", database: "test", entities: [Photo], synchronize: true, logging: false, }) // to establish the initial connection with the database, register all entities // and "synchronize" the database schema, call the "initialize()" method of the // newly created DataSource once in your application bootstrap try { await AppDataSource.initialize() } catch (error) { console.log(error) } ``` We are using Postgres in this example, but you can use any other supported database. To use another database, change the `type` in the options to the database type you are using: `mysql`, `mariadb`, `postgres`, `cockroachdb`, `sqlite`, `mssql`, `oracle`, `sap`, `spanner`, `cordova`, `nativescript`, `react-native`, `expo`, or `mongodb`. Also make sure to use your own host, port, username, password, and database settings. We added our Photo entity to the list of entities for this data source. Each entity you are using in your connection must be listed there. Setting `synchronize` makes sure your entities are synced with the database every time you run the application (convenient during development; in production, use migrations instead). ### Running the application[​](#running-the-application "Direct link to Running the application") Now if you run your `index.ts`, a connection with the database will be initialized and a database table for your photos will be created.
``` +-------------+--------------+----------------------------+ | photo | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | name | varchar(100) | | | description | text | | | filename | varchar(255) | | | views | int | | | isPublished | boolean | | +-------------+--------------+----------------------------+ ``` ### Creating and inserting a photo into the database[​](#creating-and-inserting-a-photo-into-the-database "Direct link to Creating and inserting a photo into the database") Now let's create a new photo and save it in the database: ``` import { Photo } from "./entity/Photo" import { AppDataSource } from "./index" const photo = new Photo() photo.name = "Me and Bears" photo.description = "I am near polar bears" photo.filename = "photo-with-bears.jpg" photo.views = 1 photo.isPublished = true await AppDataSource.manager.save(photo) console.log("Photo has been saved. Photo id is", photo.id) ``` Once your entity is saved, it will get a newly generated id. The `save` method returns the same object instance you passed to it: rather than creating a new copy, it sets the "id" on the object and returns it. ### Using Entity Manager[​](#using-entity-manager "Direct link to Using Entity Manager") We just created a new photo and saved it in the database. We used `EntityManager` to save it. Using the entity manager, you can manipulate any entity in your app. For example, let's load our saved entity: ``` import { Photo } from "./entity/Photo" import { AppDataSource } from "./index" const savedPhotos = await AppDataSource.manager.find(Photo) console.log("All photos from the db: ", savedPhotos) ``` `savedPhotos` will be an array of Photo objects with the data loaded from the database. Learn more about [EntityManager](https://typeorm.io/docs/working-with-entity-manager/working-with-entity-manager.md).
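Beyond `save` and `find`, the entity manager exposes most common operations directly. A short sketch (assumes the data source from the previous steps is initialized and the `Photo` table has data):

```typescript
import { Photo } from "./entity/Photo"
import { AppDataSource } from "./index"

// load a single photo by its id
const photo = await AppDataSource.manager.findOneBy(Photo, { id: 1 })

// count all photos in the table
const total = await AppDataSource.manager.count(Photo)

// update matching rows without loading them into memory first
await AppDataSource.manager.update(Photo, { id: 1 }, { views: 2 })
```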
### Using Repositories[​](#using-repositories "Direct link to Using Repositories") Now let's refactor our code and use `Repository` instead of `EntityManager`. Each entity has its own repository which handles all operations with its entity. When you deal with entities a lot, Repositories are more convenient to use than EntityManagers: ``` import { Photo } from "./entity/Photo" import { AppDataSource } from "./index" const photo = new Photo() photo.name = "Me and Bears" photo.description = "I am near polar bears" photo.filename = "photo-with-bears.jpg" photo.views = 1 photo.isPublished = true const photoRepository = AppDataSource.getRepository(Photo) await photoRepository.save(photo) console.log("Photo has been saved") const savedPhotos = await photoRepository.find() console.log("All photos from the db: ", savedPhotos) ``` Learn more about Repository [here](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md). ### Loading from the database[​](#loading-from-the-database "Direct link to Loading from the database") Let's try more load operations using the Repository: ``` import { Photo } from "./entity/Photo" import { AppDataSource } from "./index" const photoRepository = AppDataSource.getRepository(Photo) const allPhotos = await photoRepository.find() console.log("All photos from the db: ", allPhotos) const firstPhoto = await photoRepository.findOneBy({ id: 1, }) console.log("First photo from the db: ", firstPhoto) const meAndBearsPhoto = await photoRepository.findOneBy({ name: "Me and Bears", }) console.log("Me and Bears photo from the db: ", meAndBearsPhoto) const allViewedPhotos = await photoRepository.findBy({ views: 1 }) console.log("All viewed photos: ", allViewedPhotos) const allPublishedPhotos = await photoRepository.findBy({ isPublished: true }) console.log("All published photos: ", allPublishedPhotos) const [photos, photosCount] = await photoRepository.findAndCount() console.log("All photos: ", photos) console.log("Photos count: ", 
photosCount) ``` ### Updating in the database[​](#updating-in-the-database "Direct link to Updating in the database") Now let's load a single photo from the database, update it, and save it: ``` import { Photo } from "./entity/Photo" import { AppDataSource } from "./index" const photoRepository = AppDataSource.getRepository(Photo) const photoToUpdate = await photoRepository.findOneBy({ id: 1, }) photoToUpdate.name = "Me, my friends and polar bears" await photoRepository.save(photoToUpdate) ``` Now the photo with `id = 1` will be updated in the database. ### Removing from the database[​](#removing-from-the-database "Direct link to Removing from the database") Now let's remove our photo from the database: ``` import { Photo } from "./entity/Photo" import { AppDataSource } from "./index" const photoRepository = AppDataSource.getRepository(Photo) const photoToRemove = await photoRepository.findOneBy({ id: 1, }) await photoRepository.remove(photoToRemove) ``` Now the photo with `id = 1` will be removed from the database. ### Creating a one-to-one relation[​](#creating-a-one-to-one-relation "Direct link to Creating a one-to-one relation") Let's create a one-to-one relationship with another class, by creating a new class in `PhotoMetadata.ts`. This PhotoMetadata class will contain our photo's additional meta-information: ``` import { Entity, Column, PrimaryGeneratedColumn, OneToOne, JoinColumn, } from "typeorm" import { Photo } from "./Photo" @Entity() export class PhotoMetadata { @PrimaryGeneratedColumn() id: number @Column("int") height: number @Column("int") width: number @Column() orientation: string @Column() compressed: boolean @Column() comment: string @OneToOne(() => Photo) @JoinColumn() photo: Photo } ``` Here, we are using a new decorator called `@OneToOne`. It allows us to create a one-to-one relationship between two entities. We also add a `@JoinColumn` decorator, which indicates that this side of the relation is the owning side.
Relations can be unidirectional or bidirectional. Only one side of the relation can be the owner. Using the `@JoinColumn` decorator is required on the owner side of the relationship. If you run the app, you'll see a newly generated table, and it will contain a column with a foreign key for the photo relation: ``` +-------------+--------------+----------------------------+ | photo_metadata | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | height | int | | | width | int | | | comment | varchar(255) | | | compressed | boolean | | | orientation | varchar(255) | | | photoId | int | FOREIGN KEY | +-------------+--------------+----------------------------+ ``` ### Save a one-to-one relation[​](#save-a-one-to-one-relation "Direct link to Save a one-to-one relation") Now let's save a photo and its metadata and attach them to each other. ``` import { Photo } from "./entity/Photo" import { PhotoMetadata } from "./entity/PhotoMetadata" import { AppDataSource } from "./index" // Create a photo const photo = new Photo() photo.name = "Me and Bears" photo.description = "I am near polar bears" photo.filename = "photo-with-bears.jpg" photo.views = 1 photo.isPublished = true // Create a photo metadata const metadata = new PhotoMetadata() metadata.height = 640 metadata.width = 480 metadata.compressed = true metadata.comment = "cybershoot" metadata.orientation = "portrait" metadata.photo = photo // this way we connect them // Get entity repositories const photoRepository = AppDataSource.getRepository(Photo) const metadataRepository = AppDataSource.getRepository(PhotoMetadata) // First we should save a photo await photoRepository.save(photo) // The Photo is saved.
Now we need to save the photo metadata await metadataRepository.save(metadata) // Done console.log( "Metadata is saved, and the relation between metadata and photo is created in the database too", ) ``` ### Inverse side of the relationship[​](#inverse-side-of-the-relationship "Direct link to Inverse side of the relationship") Relations can be unidirectional or bidirectional. Currently, our relation between PhotoMetadata and Photo is unidirectional. The owner of the relation is PhotoMetadata, and Photo doesn't know anything about PhotoMetadata. This makes it complicated to access PhotoMetadata from the Photo side. To fix this issue, we should add an inverse relation, making the relation between PhotoMetadata and Photo bidirectional. Let's modify our entities: ``` import { Entity, Column, PrimaryGeneratedColumn, OneToOne, JoinColumn, } from "typeorm" import { Photo } from "./Photo" @Entity() export class PhotoMetadata { /* ... other columns */ @OneToOne(() => Photo, (photo) => photo.metadata) @JoinColumn() photo: Photo } ``` ``` import { Entity, Column, PrimaryGeneratedColumn, OneToOne } from "typeorm" import { PhotoMetadata } from "./PhotoMetadata" @Entity() export class Photo { /* ... other columns */ @OneToOne(() => PhotoMetadata, (photoMetadata) => photoMetadata.photo) metadata: PhotoMetadata } ``` `photo => photo.metadata` is a function that returns the name of the inverse side of the relation. Here we indicate that the `metadata` property of the `Photo` class is where `PhotoMetadata` is stored. Instead of passing a function that returns a property of the photo, you could alternatively pass a string to the `@OneToOne` decorator, like `"metadata"`. But we used this function-typed approach to make our refactoring easier. Note that we should use the `@JoinColumn` decorator only on one side of a relation. Whichever side you put this decorator on will be the owning side of the relationship.
The owning side of a relationship contains a column with a foreign key in the database. ### Relations in ESM projects[​](#relations-in-esm-projects "Direct link to Relations in ESM projects") If you use ESM in your TypeScript project, you should use the `Relation` wrapper type in relation properties to avoid circular dependency issues. Let's modify our entities: ``` import { Entity, Column, PrimaryGeneratedColumn, OneToOne, JoinColumn, Relation, } from "typeorm" import { Photo } from "./Photo" @Entity() export class PhotoMetadata { /* ... other columns */ @OneToOne(() => Photo, (photo) => photo.metadata) @JoinColumn() photo: Relation<Photo> } ``` ``` import { Entity, Column, PrimaryGeneratedColumn, OneToOne, Relation, } from "typeorm" import { PhotoMetadata } from "./PhotoMetadata" @Entity() export class Photo { /* ... other columns */ @OneToOne(() => PhotoMetadata, (photoMetadata) => photoMetadata.photo) metadata: Relation<PhotoMetadata> } ``` ### Loading objects with their relations[​](#loading-objects-with-their-relations "Direct link to Loading objects with their relations") Now let's load our photo and its photo metadata in a single query. There are two ways to do it - using `find*` methods or using `QueryBuilder` functionality. Let's use the `find*` method first. `find*` methods allow you to specify an object with the `FindOneOptions` / `FindManyOptions` interface. ``` import { Photo } from "./entity/Photo" import { PhotoMetadata } from "./entity/PhotoMetadata" import { AppDataSource } from "./index" const photoRepository = AppDataSource.getRepository(Photo) const photos = await photoRepository.find({ relations: { metadata: true, }, }) ``` Here, `photos` will contain an array of photos from the database, and each photo will contain its photo metadata. Learn more about Find Options in [this documentation](https://typeorm.io/docs/working-with-entity-manager/find-options.md).
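Find options can also filter on columns of a related entity by nesting the `where` condition. A sketch, assuming the bidirectional photo/metadata relation defined earlier:

```typescript
import { Photo } from "./entity/Photo"
import { AppDataSource } from "./index"

const photoRepository = AppDataSource.getRepository(Photo)

// load photos together with their metadata, filtered by a metadata column
const portraitPhotos = await photoRepository.find({
    relations: { metadata: true },
    where: { metadata: { orientation: "portrait" } },
})
```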
Using find options is good and dead simple, but if you need a more complex query, you should use `QueryBuilder` instead. `QueryBuilder` allows more complex queries to be used elegantly: ``` import { Photo } from "./entity/Photo" import { PhotoMetadata } from "./entity/PhotoMetadata" import { AppDataSource } from "./index" const photos = await AppDataSource.getRepository(Photo) .createQueryBuilder("photo") .innerJoinAndSelect("photo.metadata", "metadata") .getMany() ``` `QueryBuilder` allows the creation and execution of SQL queries of almost any complexity. When you work with `QueryBuilder`, think like you are creating an SQL query. In this example, "photo" and "metadata" are aliases applied to selected photos. You use aliases to access columns and properties of the selected data. ### Using cascades to automatically save related objects[​](#using-cascades-to-automatically-save-related-objects "Direct link to Using cascades to automatically save related objects") We can set up cascade options in our relations, in the cases when we want our related object to be saved whenever the other object is saved. Let's change our photo's `@OneToOne` decorator a bit: ``` export class Photo { // ... other columns @OneToOne(() => PhotoMetadata, (metadata) => metadata.photo, { cascade: true, }) metadata: PhotoMetadata } ``` Using `cascade` allows us not to separately save photos and separately save metadata objects now. Now we can simply save a photo object, and the metadata object will be saved automatically because of cascade options. 
``` import { Photo } from "./entity/Photo" import { PhotoMetadata } from "./entity/PhotoMetadata" import { AppDataSource } from "./index" // create photo object const photo = new Photo() photo.name = "Me and Bears" photo.description = "I am near polar bears" photo.filename = "photo-with-bears.jpg" photo.views = 1 photo.isPublished = true // create photo metadata object const metadata = new PhotoMetadata() metadata.height = 640 metadata.width = 480 metadata.compressed = true metadata.comment = "cybershoot" metadata.orientation = "portrait" photo.metadata = metadata // this way we connect them // get repository const photoRepository = AppDataSource.getRepository(Photo) // saving a photo also saves the metadata await photoRepository.save(photo) console.log("Photo is saved, photo metadata is saved too.") ``` Notice that we now set the photo's `metadata` property, instead of the metadata's `photo` property as before. The `cascade` feature only works if you connect the photo to its metadata from the photo's side. If you only set it from the metadata side, the metadata would not be saved automatically. ### Creating a many-to-one / one-to-many relation[​](#creating-a-many-to-one--one-to-many-relation "Direct link to Creating a many-to-one / one-to-many relation") Let's create a many-to-one/one-to-many relation. Let's say a photo has one author, and each author can have many photos. First, let's create an `Author` class: ``` import { Entity, Column, PrimaryGeneratedColumn, OneToMany, } from "typeorm" import { Photo } from "./Photo" @Entity() export class Author { @PrimaryGeneratedColumn() id: number @Column() name: string @OneToMany(() => Photo, (photo) => photo.author) // note: we will create the author property in the Photo class below photos: Photo[] } ``` `Author` contains the inverse side of the relation. `OneToMany` is always the inverse side of the relation, and it can't exist without `ManyToOne` on the other side of the relation.
Now let's add the owner side of the relation into the Photo entity: ``` import { Entity, Column, PrimaryGeneratedColumn, ManyToOne } from "typeorm" import { PhotoMetadata } from "./PhotoMetadata" import { Author } from "./Author" @Entity() export class Photo { /* ... other columns */ @ManyToOne(() => Author, (author) => author.photos) author: Author } ``` In many-to-one / one-to-many relations, the owner side is always many-to-one. It means that the class that uses `@ManyToOne` will store the id of the related object. After you run the application, the ORM will create the `author` table: ``` +-------------+--------------+----------------------------+ | author | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | name | varchar(255) | | +-------------+--------------+----------------------------+ ``` It will also modify the `photo` table, adding a new `author` column and creating a foreign key for it: ``` +-------------+--------------+----------------------------+ | photo | +-------------+--------------+----------------------------+ | id | int | PRIMARY KEY AUTO_INCREMENT | | name | varchar(255) | | | description | varchar(255) | | | filename | varchar(255) | | | isPublished | boolean | | | authorId | int | FOREIGN KEY | +-------------+--------------+----------------------------+ ``` ### Creating a many-to-many relation[​](#creating-a-many-to-many-relation "Direct link to Creating a many-to-many relation") Let's create a many-to-many relation. Let's say a photo can be in many albums, and each album can contain many photos. Let's create an `Album` class: ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToMany, JoinTable, } from "typeorm" @Entity() export class Album { @PrimaryGeneratedColumn() id: number @Column() name: string @ManyToMany(() => Photo, (photo) => photo.albums) @JoinTable() photos: Photo[] } ``` `@JoinTable` is required to specify that this is the owner side of the relationship. 
Now let's add the inverse side of our relation to the `Photo` class: ``` export class Photo { // ... other columns @ManyToMany(() => Album, (album) => album.photos) albums: Album[] } ``` After you run the application, the ORM will create an **album\_photos\_photo\_albums** *junction table*: ``` +-------------+--------------+----------------------------+ | album_photos_photo_albums | +-------------+--------------+----------------------------+ | album_id | int | PRIMARY KEY FOREIGN KEY | | photo_id | int | PRIMARY KEY FOREIGN KEY | +-------------+--------------+----------------------------+ ``` Remember to register the `Album` class in your data source options: ``` const options: DataSourceOptions = { // ... other options entities: [Photo, PhotoMetadata, Author, Album], } ``` Now let's insert albums and photos into our database: ``` import { Album } from "./entity/Album" import { Photo } from "./entity/Photo" import { AppDataSource } from "./index" // create a few albums const album1 = new Album() album1.name = "Bears" await AppDataSource.manager.save(album1) const album2 = new Album() album2.name = "Me" await AppDataSource.manager.save(album2) // create a photo const photo = new Photo() photo.name = "Me and Bears" photo.description = "I am near polar bears" photo.filename = "photo-with-bears.jpg" photo.views = 1 photo.isPublished = true photo.albums = [album1, album2] await AppDataSource.manager.save(photo) // now our photo is saved and albums are attached to it // now let's load them: const loadedPhoto = await AppDataSource.getRepository(Photo).findOne({ where: { id: 1, }, relations: { albums: true, }, }) ``` `loadedPhoto` will be equal to: ``` { id: 1, name: "Me and Bears", description: "I am near polar bears", filename: "photo-with-bears.jpg", albums: [{ id: 1, name: "Bears" }, { id: 2, name: "Me" }] } ``` ### Using QueryBuilder[​](#using-querybuilder "Direct link to Using QueryBuilder") You can use QueryBuilder to build SQL queries of almost any complexity.
For example, you can do this: ``` const photos = await AppDataSource.getRepository(Photo) .createQueryBuilder("photo") // First argument is an alias. Alias is what you are selecting - photos. You must specify it. .innerJoinAndSelect("photo.metadata", "metadata") .leftJoinAndSelect("photo.albums", "album") .where("photo.isPublished = true") .andWhere("(photo.name = :photoName OR photo.name = :bearName)") .orderBy("photo.id", "DESC") .skip(5) .take(10) .setParameters({ photoName: "My", bearName: "Mishka" }) .getMany() ``` This query selects all published photos with "My" or "Mishka" names. It will select results from position 5 (pagination offset) and will select only 10 results (pagination limit). The selection result will be ordered by id in descending order. The photo albums will be left joined and their metadata will be inner-joined. You'll use the query builder in your application a lot. Learn more about QueryBuilder [here](https://typeorm.io/docs/query-builder/select-query-builder.md). ## Samples[​](#samples "Direct link to Samples") Take a look at the samples in [sample](https://github.com/typeorm/typeorm/tree/master/sample) for examples of usage. 
There are a few repositories that you can clone and start with: * [Example how to use TypeORM with TypeScript](https://github.com/typeorm/typescript-example) * [Example how to use TypeORM with JavaScript](https://github.com/typeorm/javascript-example) * [Example how to use TypeORM with JavaScript and Babel](https://github.com/typeorm/babel-example) * [Example how to use TypeORM with TypeScript and SystemJS in Browser](https://github.com/typeorm/browser-example) * [Example how to use TypeORM with TypeScript and React in Browser](https://github.com/ItayGarin/typeorm-react-swc) * [Example how to use Express and TypeORM](https://github.com/typeorm/typescript-express-example) * [Example how to use Koa and TypeORM](https://github.com/typeorm/typescript-koa-example) * [Example how to use TypeORM with MongoDB](https://github.com/typeorm/mongo-typescript-example) * [Example how to use TypeORM in a Cordova app](https://github.com/typeorm/cordova-example) * [Example how to use TypeORM with an Ionic app](https://github.com/typeorm/ionic-example) * [Example how to use TypeORM with React Native](https://github.com/typeorm/react-native-example) * [Example how to use TypeORM with Nativescript-Vue](https://github.com/typeorm/nativescript-vue-typeorm-sample) * [Example how to use TypeORM with Nativescript-Angular](https://github.com/betov18x/nativescript-angular-typeorm-example) * [Example how to use TypeORM with Electron using JavaScript](https://github.com/typeorm/electron-javascript-example) * [Example how to use TypeORM with Electron using TypeScript](https://github.com/typeorm/electron-typescript-example) ## Extensions[​](#extensions "Direct link to Extensions") There are several extensions that simplify working with TypeORM and integrating it with other modules: * [TypeORM integration](https://github.com/typeorm/typeorm-typedi-extensions) with [TypeDI](https://github.com/pleerock/typedi) * [TypeORM integration](https://github.com/typeorm/typeorm-routing-controllers-extensions) 
with [routing-controllers](https://github.com/pleerock/routing-controllers) * Models generation from the existing database - [typeorm-model-generator](https://github.com/Kononnable/typeorm-model-generator) * Fixtures loader - [typeorm-fixtures-cli](https://github.com/RobinCK/typeorm-fixtures) * ER Diagram generator - [typeorm-uml](https://github.com/eugene-manuilov/typeorm-uml/) * another ER Diagram generator - [erdia](https://www.npmjs.com/package/erdia/) * Create, drop and seed database - [typeorm-extension](https://github.com/tada5hi/typeorm-extension) * Automatically update `data-source.ts` after generating [migrations](https://typeorm.io/docs/migrations/why.md)/entities - [typeorm-codebase-sync](https://www.npmjs.com/package/typeorm-codebase-sync) * Easy manipulation of `relations` objects - [typeorm-relations](https://npmjs.com/package/typeorm-relations) * Automatically generate `relations` based on a GraphQL query - [typeorm-relations-graphql](https://npmjs.com/package/typeorm-relations-graphql) ## Contributing[​](#contributing "Direct link to Contributing") Learn about contribution [here](https://github.com/typeorm/typeorm/blob/master/CONTRIBUTING.md) and how to set up your development environment [here](https://github.com/typeorm/typeorm/blob/master/DEVELOPER.md). This project exists thanks to all the people who contribute: [![](https://opencollective.com/typeorm/contributors.svg?width=890\&showBtn=false)](https://github.com/typeorm/typeorm/graphs/contributors) ## Sponsors[​](#sponsors "Direct link to Sponsors") Open source is hard and time-consuming. If you want to invest in TypeORM's future, you can become a sponsor and allow our core team to spend more time on TypeORM's improvements and new features. 
[Become a sponsor](https://opencollective.com/typeorm) [![](https://opencollective.com/typeorm/tiers/sponsor.svg?width=890)](https://opencollective.com/typeorm) ## Gold Sponsors[​](#gold-sponsors "Direct link to Gold Sponsors") Become a gold sponsor and get premium technical support from our core contributors. [Become a gold sponsor](https://opencollective.com/typeorm) [![](https://opencollective.com/typeorm/tiers/gold-sponsor.svg?width=890)](https://opencollective.com/typeorm) --- # Active Record vs Data Mapper ## What is the Active Record pattern?[​](#what-is-the-active-record-pattern "Direct link to What is the Active Record pattern?") In TypeORM you can use both the Active Record and the Data Mapper patterns. Using the Active Record approach, you define all your query methods inside the model itself, and you save, remove, and load objects using model methods. Simply said, the Active Record pattern is an approach to access your database within your models. You can read more about the Active Record pattern on [Wikipedia](https://en.wikipedia.org/wiki/Active_record_pattern). Example: ``` import { BaseEntity, Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class User extends BaseEntity { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() isActive: boolean } ``` All active-record entities must extend the `BaseEntity` class, which provides methods to work with the entity. 
Example of how to work with such an entity: ``` // example how to save AR entity const user = new User() user.firstName = "Timber" user.lastName = "Saw" user.isActive = true await user.save() // example how to remove AR entity await user.remove() // example how to load AR entities const users = await User.find({ skip: 2, take: 5 }) const newUsers = await User.findBy({ isActive: true }) const timber = await User.findOneBy({ firstName: "Timber", lastName: "Saw" }) ``` `BaseEntity` has most of the methods of the standard `Repository`. Most of the time you don't need to use `Repository` or `EntityManager` with active record entities. Now let's say we want to create a function that returns users by first and last name. We can create such a function as a static method on the `User` class: ``` import { BaseEntity, Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class User extends BaseEntity { @PrimaryGeneratedColumn() id: number @Column() firstName: string @Column() lastName: string @Column() isActive: boolean static findByName(firstName: string, lastName: string) { return this.createQueryBuilder("user") .where("user.firstName = :firstName", { firstName }) .andWhere("user.lastName = :lastName", { lastName }) .getMany() } } ``` And use it just like other methods: ``` const timber = await User.findByName("Timber", "Saw") ``` ## What is the Data Mapper pattern?[​](#what-is-the-data-mapper-pattern "Direct link to What is the Data Mapper pattern?") In TypeORM you can use both the Active Record and Data Mapper patterns. Using the Data Mapper approach, you define all your query methods in separate classes called "repositories", and you save, remove, and load objects using repositories. In data mapper your entities are very dumb - they just define their properties and may have some "dummy" methods. Simply said, data mapper is an approach to access your database within repositories instead of models.
You can read more about the Data Mapper pattern on [Wikipedia](https://en.wikipedia.org/wiki/Data_mapper_pattern).

Example:

```
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    firstName: string

    @Column()
    lastName: string

    @Column()
    isActive: boolean
}
```

Example of how to work with such an entity:

```
const userRepository = dataSource.getRepository(User)

// example how to save DM entity
const user = new User()
user.firstName = "Timber"
user.lastName = "Saw"
user.isActive = true
await userRepository.save(user)

// example how to remove DM entity
await userRepository.remove(user)

// example how to load DM entities
const users = await userRepository.find({ skip: 2, take: 5 })
const newUsers = await userRepository.findBy({ isActive: true })
const timber = await userRepository.findOneBy({
    firstName: "Timber",
    lastName: "Saw",
})
```

To extend a standard repository with custom methods, use the [custom repository pattern](https://typeorm.io/docs/working-with-entity-manager/custom-repository.md).

## Which one should I choose?[​](#which-one-should-i-choose "Direct link to Which one should I choose?")

The decision is up to you. Both strategies have their own pros and cons.

One thing we should always keep in mind in software development is how we are going to maintain our applications. The `Data Mapper` approach helps with maintainability, which is more effective in larger apps. The `Active Record` approach helps keep things simple, which works well in smaller apps.

---

# Example using TypeORM with Express

## Initial setup[​](#initial-setup "Direct link to Initial setup")

Let's create a simple application called "user" which stores users in the database and allows us to create, update, remove, and get a list of all users, as well as a single user by id, within a web API.
First, create a directory called "user":

```
mkdir user
```

Then switch to the directory and create a new project:

```
cd user
npm init
```

Finish the init process by filling in all required application information.

Now we need to install and set up the TypeScript compiler. Let's install it first:

```
npm i typescript --save-dev
```

Then let's create a `tsconfig.json` file which contains the configuration required for the application to compile and run. Create it using your favorite editor and put the following configuration:

```
{
    "compilerOptions": {
        "lib": ["es5", "es6", "dom"],
        "target": "es5",
        "module": "commonjs",
        "moduleResolution": "node",
        "emitDecoratorMetadata": true,
        "experimentalDecorators": true
    }
}
```

Now let's create a main application endpoint - `app.ts` inside the `src` directory:

```
mkdir src
cd src
touch app.ts
```

Let's add a simple `console.log` inside it:

```
console.log("Application is up and running")
```

Now it's time to run our application. To run it, you need to compile your TypeScript project first:

```
tsc
```

Once you compile it, you should have a `src/app.js` file generated. You can run it using:

```
node src/app.js
```

You should see the "Application is up and running" message in your console right after you run the application.

You must compile your files each time you make a change. Alternatively, you can set up a watcher or install [ts-node](https://github.com/TypeStrong/ts-node) to avoid manual compilation each time.

## Adding Express to the application[​](#adding-express-to-the-application "Direct link to Adding Express to the application")

Let's add Express to our application. First, let's install the packages we need:

```
npm install express
npm install @types/express --save-dev
```

* `express` is the Express engine itself. It allows us to create a web API
* `@types/express` is used to have type information when using Express

Let's edit the `src/app.ts` file and add express-related logic:

```
import * as express from "express"
import { Request, Response } from "express"

// create and setup express app
const app = express()
app.use(express.json())

// register routes

app.get("/users", function (req: Request, res: Response) {
    // here we will have logic to return all users
})

app.get("/users/:id", function (req: Request, res: Response) {
    // here we will have logic to return user by id
})

app.post("/users", function (req: Request, res: Response) {
    // here we will have logic to save a user
})

app.put("/users/:id", function (req: Request, res: Response) {
    // here we will have logic to update a user by a given user id
})

app.delete("/users/:id", function (req: Request, res: Response) {
    // here we will have logic to delete a user by a given user id
})

// start express server
app.listen(3000)
```

Now you can compile and run your project. You should have an Express server running with working routes. However, those routes do not return any content yet.

## Adding TypeORM to the application[​](#adding-typeorm-to-the-application "Direct link to Adding TypeORM to the application")

Finally, let's add TypeORM to the application. In this example, we will use the `mysql` driver. The setup process for other drivers is similar.

Let's install the required packages first:

```
npm install typeorm reflect-metadata mysql
```

* `typeorm` is the typeorm package itself
* `reflect-metadata` is required to make decorators work properly. Remember to import it before your TypeORM code.
* `mysql` is the underlying database driver.
If you are using a different database system, you must install the appropriate package.

Let's create `app-data-source.ts` where we set up the initial database connection options:

```
import { DataSource } from "typeorm"

export const myDataSource = new DataSource({
    type: "mysql",
    host: "localhost",
    port: 3306,
    username: "test",
    password: "test",
    database: "test",
    entities: ["src/entity/*.js"],
    logging: true,
    synchronize: true,
})
```

Configure each option as you need. Learn more about options [here](https://typeorm.io/docs/data-source/data-source-options.md).

Let's create a `user.entity.ts` entity inside `src/entity`:

```
import { Entity, Column, PrimaryGeneratedColumn } from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    firstName: string

    @Column()
    lastName: string
}
```

Let's change `src/app.ts` to establish the database connection and start using `myDataSource`:

```
import "reflect-metadata"
import * as express from "express"
import { Request, Response } from "express"
import { User } from "./entity/user.entity"
import { myDataSource } from "./app-data-source"

// establish database connection
myDataSource
    .initialize()
    .then(() => {
        console.log("Data Source has been initialized!")
    })
    .catch((error) => {
        console.error("Error during Data Source initialization:", error)
    })

// create and setup express app
const app = express()
app.use(express.json())

// register routes

app.get("/users", async function (req: Request, res: Response) {
    const users = await myDataSource.getRepository(User).find()
    res.json(users)
})

app.get("/users/:id", async function (req: Request, res: Response) {
    const results = await myDataSource.getRepository(User).findOneBy({
        id: parseInt(req.params.id),
    })
    return res.send(results)
})

app.post("/users", async function (req: Request, res: Response) {
    const user = myDataSource.getRepository(User).create(req.body)
    const results = await myDataSource.getRepository(User).save(user)
    return res.send(results)
})

app.put("/users/:id", async function (req: Request, res: Response) {
    const user = await myDataSource.getRepository(User).findOneBy({
        id: parseInt(req.params.id),
    })
    myDataSource.getRepository(User).merge(user, req.body)
    const results = await myDataSource.getRepository(User).save(user)
    return res.send(results)
})

app.delete("/users/:id", async function (req: Request, res: Response) {
    const results = await myDataSource.getRepository(User).delete(req.params.id)
    return res.send(results)
})

// start express server
app.listen(3000)
```

Now you should have a basic Express application connected to a MySQL database up and running.

---

# Migration from Sequelize to TypeORM

## Setting up a data source[​](#setting-up-a-data-source "Direct link to Setting up a data source")

In Sequelize you create a data source this way:

```
const sequelize = new Sequelize("database", "username", "password", {
    host: "localhost",
    dialect: "mysql",
})

sequelize
    .authenticate()
    .then(() => {
        console.log("Data Source has been initialized successfully.")
    })
    .catch((err) => {
        console.error("Error during Data Source initialization:", err)
    })
```

In TypeORM you create a data source the following way:

```
import { DataSource } from "typeorm"

const dataSource = new DataSource({
    type: "mysql",
    host: "localhost",
    username: "username",
    password: "password",
})

dataSource
    .initialize()
    .then(() => {
        console.log("Data Source has been initialized successfully.")
    })
    .catch((err) => {
        console.error("Error during Data Source initialization:", err)
    })
```

Then you can use the `dataSource` instance from anywhere in your app.
Learn more about [Data Source](https://typeorm.io/docs/data-source/data-source.md)

## Schema synchronization[​](#schema-synchronization "Direct link to Schema synchronization")

In Sequelize you do schema synchronization this way:

```
Project.sync({ force: true })
Task.sync({ force: true })
```

In TypeORM you just add `synchronize: true` in the data source options:

```
const dataSource = new DataSource({
    type: "mysql",
    host: "localhost",
    username: "username",
    password: "password",
    synchronize: true,
})
```

## Creating models[​](#creating-a-models "Direct link to Creating models")

This is how models are defined in Sequelize:

```
module.exports = function (sequelize, DataTypes) {
    const Project = sequelize.define("project", {
        title: DataTypes.STRING,
        description: DataTypes.TEXT,
    })
    return Project
}
```

```
module.exports = function (sequelize, DataTypes) {
    const Task = sequelize.define("task", {
        title: DataTypes.STRING,
        description: DataTypes.TEXT,
        deadline: DataTypes.DATE,
    })
    return Task
}
```

In TypeORM these models are called entities, and you can define them the following way:

```
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class Project {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    title: string

    @Column()
    description: string
}
```

```
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class Task {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    title: string

    @Column("text")
    description: string

    @Column()
    deadline: Date
}
```

It's highly recommended to define one entity class per file. TypeORM allows you to use your classes as database models and provides a declarative way to define what part of your model will become part of your database table. The power of TypeScript gives you type hinting and other useful features that you can use in classes.
Learn more about [Entities and columns](https://typeorm.io/docs/entity/entities.md)

## Other model settings[​](#other-model-settings "Direct link to Other model settings")

The following in Sequelize:

```
flag: { type: Sequelize.BOOLEAN, allowNull: true, defaultValue: true },
```

Can be achieved in TypeORM like this:

```
@Column({ nullable: true, default: true })
flag: boolean;
```

The following in Sequelize:

```
flag: { type: Sequelize.DATE, defaultValue: Sequelize.NOW }
```

Is written like this in TypeORM:

```
@Column({ default: () => "NOW()" })
myDate: Date;
```

The following in Sequelize:

```
someUnique: { type: Sequelize.STRING, unique: true },
```

Can be achieved this way in TypeORM:

```
@Column({ unique: true })
someUnique: string;
```

The following in Sequelize:

```
fieldWithUnderscores: { type: Sequelize.STRING, field: "field_with_underscores" },
```

Translates to this in TypeORM:

```
@Column({ name: "field_with_underscores" })
fieldWithUnderscores: string;
```

The following in Sequelize:

```
incrementMe: { type: Sequelize.INTEGER, autoIncrement: true },
```

Can be achieved this way in TypeORM:

```
@Column()
@Generated()
incrementMe: number;
```

The following in Sequelize:

```
identifier: { type: Sequelize.STRING, primaryKey: true },
```

Can be achieved this way in TypeORM:

```
@Column({ primary: true })
identifier: string;
```

To create `createDate` and `updateDate`-like columns you need to define two columns (named whatever you want) in your entity:

```
@CreateDateColumn()
createDate: Date;

@UpdateDateColumn()
updateDate: Date;
```

### Working with models[​](#working-with-models "Direct link to Working with models")

To create and save a new model in Sequelize you write:

```
const employee = await Employee.create({
    name: "John Doe",
    title: "senior engineer",
})
```

In TypeORM there are several ways to create and save a new model:

```
const employee = new Employee() // you can use constructor parameters as well
employee.name = "John Doe"
employee.title = "senior engineer"
await dataSource.getRepository(Employee).save(employee)
```

or the Active Record pattern:

```
const employee = Employee.create({ name: "John Doe", title: "senior engineer" })
await employee.save()
```

If you want to load an existing entity from the database and replace some of its properties, you can use the following method:

```
const employee = await Employee.preload({ id: 1, name: "John Doe" })
```

Learn more about [Active Record vs Data Mapper](https://typeorm.io/docs/guides/active-record-data-mapper.md) and the [Repository API](https://typeorm.io/docs/working-with-entity-manager/repository-api.md).

To access properties in Sequelize you do the following:

```
console.log(employee.get("name"))
```

In TypeORM you simply do:

```
console.log(employee.name)
```

To create an index in Sequelize you do:

```
sequelize.define(
    "user",
    {},
    {
        indexes: [
            {
                unique: true,
                fields: ["firstName", "lastName"],
            },
        ],
    },
)
```

In TypeORM you do:

```
@Entity()
@Index(["firstName", "lastName"], { unique: true })
export class User {}
```

Learn more about [Indices](https://typeorm.io/docs/advanced-topics/indices.md)

---

# SQL Tag

TypeORM provides a way to write SQL queries using template literals with automatic parameter handling based on your database type. This feature helps prevent SQL injection while making queries more readable.

The SQL tag is implemented as a wrapper around the `.query` method, providing an alternative interface while maintaining the same underlying functionality.
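To see why the tag can format parameters automatically, it helps to recall how tagged templates work in TypeScript: the tag function receives the literal string parts separately from the interpolated values, so the values never have to be concatenated into the SQL text. Below is a minimal sketch of that mechanism using Postgres-style `$n` placeholders - illustrative only, not TypeORM's actual implementation:

```typescript
// Minimal sketch of how a SQL template tag can collect parameters.
// Illustrative only - not TypeORM's actual implementation.
function sqlSketch(strings: TemplateStringsArray, ...values: unknown[]) {
    // Re-join the literal parts, inserting a positional placeholder
    // ($1, $2, ...) where each interpolated value appeared.
    const text = strings.reduce(
        (query, part, i) => query + (i > 0 ? `$${i}` : "") + part,
        "",
    )
    // The values are kept separate, ready to be handed to the driver
    // as query parameters instead of being spliced into the SQL string.
    return { text, values }
}

const name = "John"
const result = sqlSketch`SELECT * FROM users WHERE name = ${name}`
// result.text   → "SELECT * FROM users WHERE name = $1"
// result.values → ["John"]
```

Because the driver receives the values out-of-band, they are escaped by the database client rather than by string concatenation, which is what makes the real `sql` tag resistant to SQL injection.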
## Basic Usage[​](#basic-usage "Direct link to Basic Usage")

The `sql` tag is available on DataSource, EntityManager, Repository and QueryRunner instances:

```
const users = await dataSource.sql`SELECT * FROM users WHERE name = ${"John"}`
```

## Parameter Handling[​](#parameter-handling "Direct link to Parameter Handling")

Parameters are automatically escaped and formatted according to your database type:

* **PostgreSQL**, **CockroachDB** and **Aurora PostgreSQL** use `$1`, `$2`, etc.:

```
// Query becomes: SELECT * FROM users WHERE name = $1
const users = await dataSource.sql`SELECT * FROM users WHERE name = ${"John"}`
```

* **MySQL**, **MariaDB**, **Aurora MySQL**, **SAP** and **SQLite** use `?`:

```
// Query becomes: SELECT * FROM users WHERE name = ?
const users = await dataSource.sql`SELECT * FROM users WHERE name = ${"John"}`
```

* **Oracle** uses `:1`, `:2`, etc.:

```
// Query becomes: SELECT * FROM users WHERE name = :1
const users = await dataSource.sql`SELECT * FROM users WHERE name = ${"John"}`
```

* **MSSQL** uses `@1`, `@2`, etc.:

```
// Query becomes: SELECT * FROM users WHERE name = @1
const users = await dataSource.sql`SELECT * FROM users WHERE name = ${"John"}`
```

### Multiple Parameters[​](#multiple-parameters "Direct link to Multiple Parameters")

You can use multiple parameters and complex expressions:

```
const name = "John"
const age = 30
const active = true

const users = await dataSource.sql`
    SELECT * FROM users
    WHERE name LIKE ${name + "%"}
    AND age > ${age}
    AND is_active = ${active}
`
```

### Expanding Parameter Lists[​](#expanding-parameter-lists "Direct link to Expanding Parameter Lists")

To transform an array of values into a dynamic list of parameters in a template expression, wrap the array in a function. This is commonly used to write an `IN (...)` expression in SQL, where each value in the list must be supplied as a separate parameter:

```
// Query becomes: SELECT * FROM users WHERE id IN (?, ?, ?)
const users = await dataSource.sql`
    SELECT * FROM users WHERE id IN (${() => [1, 2, 3]})
`
```

### Interpolating Unescaped Expressions[​](#interpolating-unescaped-expressions "Direct link to Interpolating Unescaped Expressions")

When you want to insert a template expression which should *not* be transformed into a database parameter, wrap the string in a function. This can be used to dynamically define column, table or schema names which can't be parameterized, or to conditionally set clauses in the SQL.

**Caution!** No escaping is performed on raw SQL inserted in this way. It is not safe to use this with values sourced from user input.

```
// Query becomes: SELECT * FROM dynamic_table_name
const rawData = await dataSource.sql`
    SELECT * FROM ${() => "dynamic_table_name"}
`
```

## Features[​](#features "Direct link to Features")

* **SQL Injection Prevention**: Parameters are properly escaped
* **Database Agnostic**: Parameter formatting is handled based on your database type
* **Readable Queries**: Template literals can make queries more readable than parameter arrays

## Comparison with Query Method[​](#comparison-with-query-method "Direct link to Comparison with Query Method")

The traditional `query` method requires manual parameter placeholder handling:

```
// Traditional query method
await dataSource.query("SELECT * FROM users WHERE name = $1 AND age > $2", [
    "John",
    30,
])

// SQL tag alternative
await dataSource.sql`SELECT * FROM users WHERE name = ${"John"} AND age > ${30}`
```

The SQL tag handles parameter formatting automatically, which can reduce potential errors.

---

# Using with JavaScript

TypeORM can be used not only with TypeScript, but also with JavaScript. Everything is the same, except that you need to omit types, and if your platform does not support ES6 classes, you need to define objects with all required metadata.
##### app.js[​](#appjs "Direct link to app.js")

```
var typeorm = require("typeorm")

var dataSource = new typeorm.DataSource({
    type: "postgres",
    host: "localhost",
    port: 5432,
    username: "test",
    password: "admin",
    database: "test",
    synchronize: true,
    entities: [require("./entity/Post"), require("./entity/Category")],
})

dataSource
    .initialize()
    .then(function () {
        var category1 = {
            name: "TypeScript",
        }
        var category2 = {
            name: "Programming",
        }

        var post = {
            title: "Control flow based type analysis",
            text: "TypeScript 2.0 implements a control flow-based type analysis for local variables and parameters.",
            categories: [category1, category2],
        }

        var postRepository = dataSource.getRepository("Post")
        postRepository
            .save(post)
            .then(function (savedPost) {
                console.log("Post has been saved: ", savedPost)
                console.log("Now lets load all posts: ")
                return postRepository.find()
            })
            .then(function (allPosts) {
                console.log("All posts: ", allPosts)
            })
    })
    .catch(function (error) {
        console.log("Error: ", error)
    })
```

##### entity/Category.js[​](#entitycategoryjs "Direct link to entity/Category.js")

```
var EntitySchema = require("typeorm").EntitySchema

module.exports = new EntitySchema({
    name: "Category", // Will use table name `category` as default behaviour.
    tableName: "categories", // Optional: Provide `tableName` property to override the default behaviour for table name.
    columns: {
        id: {
            primary: true,
            type: "int",
            generated: true,
        },
        name: {
            type: "varchar",
        },
    },
})
```

##### entity/Post.js[​](#entitypostjs "Direct link to entity/Post.js")

```
var EntitySchema = require("typeorm").EntitySchema

module.exports = new EntitySchema({
    name: "Post", // Will use table name `post` as default behaviour.
    tableName: "posts", // Optional: Provide `tableName` property to override the default behaviour for table name.
    columns: {
        id: {
            primary: true,
            type: "int",
            generated: true,
        },
        title: {
            type: "varchar",
        },
        text: {
            type: "text",
        },
    },
    relations: {
        categories: {
            target: "Category",
            type: "many-to-many",
            joinTable: true,
            cascade: true,
        },
    },
})
```

You can check out this example at [typeorm/javascript-example](https://github.com/typeorm/javascript-example) to learn more.

---

# Using Validation

To use validation, use [class-validator](https://github.com/pleerock/class-validator). Example of how to use class-validator with TypeORM:

```
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"
import {
    Contains,
    IsInt,
    Length,
    IsEmail,
    IsFQDN,
    IsDate,
    Min,
    Max,
} from "class-validator"

@Entity()
export class Post {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    @Length(10, 20)
    title: string

    @Column()
    @Contains("hello")
    text: string

    @Column()
    @IsInt()
    @Min(0)
    @Max(10)
    rating: number

    @Column()
    @IsEmail()
    email: string

    @Column()
    @IsFQDN()
    site: string

    @Column()
    @IsDate()
    createDate: Date
}
```

Validation:

```
import { validate } from "class-validator"

let post = new Post()
post.title = "Hello" // should not pass
post.text = "this is a great post about hell world" // should not pass
post.rating = 11 // should not pass
post.email = "google.com" // should not pass
post.site = "googlecom" // should not pass

const errors = await validate(post)
if (errors.length > 0) {
    throw new Error(`Validation failed!`)
} else {
    await dataSource.manager.save(post)
}
```

---

# Decorator reference

## Entity decorators[​](#entity-decorators "Direct link to Entity decorators")

#### `@Entity`[​](#entity "Direct link to entity")

Marks your model as an entity. An entity is a class which is transformed into a database table. You can specify the table name in the entity:

```
@Entity("users")
export class User {}
```

This code will create a database table named "users".

You can also specify some additional entity options:

* `name` - table name. If not specified, then the table name is generated from the entity class name.
* `database` - database name in selected DB server.
* `schema` - schema name.
* `engine` - database engine to be set during table creation (works only in some databases).
* `synchronize` - entities marked with `false` are skipped from schema updates.
* `orderBy` - specifies default ordering for entities when using `find` operations and `QueryBuilder`.

Example:

```
@Entity({
    name: "users",
    engine: "MyISAM",
    database: "example_dev",
    schema: "schema_with_best_tables",
    synchronize: false,
    orderBy: {
        name: "ASC",
        id: "DESC",
    },
})
export class User {}
```

Learn more about [Entities](https://typeorm.io/docs/entity/entities.md).

#### `@ViewEntity`[​](#viewentity "Direct link to viewentity")

A view entity is a class that maps to a database view. `@ViewEntity()` accepts the following options:

* `name` - view name. If not specified, then the view name is generated from the entity class name.
* `database` - database name in selected DB server.
* `schema` - schema name.
* `expression` - view definition. **Required parameter**.

`expression` can be a string with properly escaped columns and tables, depending on the database used (postgres in this example):

```
@ViewEntity({
    expression: `
        SELECT "post"."id" "id", "post"."name" AS "name", "category"."name" AS "categoryName"
        FROM "post" "post"
        LEFT JOIN "category" "category" ON "post"."categoryId" = "category"."id"
    `,
})
export class PostCategory {}
```

or an instance of QueryBuilder:

```
@ViewEntity({
    expression: (dataSource: DataSource) =>
        dataSource
            .createQueryBuilder()
            .select("post.id", "id")
            .addSelect("post.name", "name")
            .addSelect("category.name", "categoryName")
            .from(Post, "post")
            .leftJoin(Category, "category", "category.id = post.categoryId"),
})
export class PostCategory {}
```

**Note:** parameter binding is not supported due to driver limitations. Use literal parameters instead.
```
@ViewEntity({
    expression: (dataSource: DataSource) =>
        dataSource
            .createQueryBuilder()
            .select("post.id", "id")
            .addSelect("post.name", "name")
            .addSelect("category.name", "categoryName")
            .from(Post, "post")
            .leftJoin(Category, "category", "category.id = post.categoryId")
            .where("category.name = :name", { name: "Cars" }) // <-- this is wrong
            .where("category.name = 'Cars'"), // <-- and this is right
})
export class PostCategory {}
```

Learn more about [View Entities](https://typeorm.io/docs/entity/view-entities.md).

## Column decorators[​](#column-decorators "Direct link to Column decorators")

#### `@Column`[​](#column "Direct link to column")

Marks a property in your entity as a table column. Example:

```
@Entity("users")
export class User {
    @Column({ primary: true })
    id: number

    @Column({ type: "varchar", length: 200, unique: true })
    firstName: string

    @Column({ nullable: true })
    lastName: string

    @Column({ default: false })
    isActive: boolean
}
```

`@Column` accepts several options you can use:

* `type: ColumnType` - Column type. One of the [supported column types](https://typeorm.io/docs/entity/entities.md#column-types).
* `name: string` - Column name in the database table. By default, the column name is generated from the name of the property. You can change it by specifying your own name.
* `length: string|number` - Column type's length. For example, if you want to create a `varchar(150)` type, you specify the column type and length options.
* `width: number` - Column type's display width. Used only for [MySQL integer types](https://dev.mysql.com/doc/refman/5.7/en/integer-types.html). *Deprecated* in newer MySQL versions, will be removed from TypeORM in an upcoming version.
* `onUpdate: string` - `ON UPDATE` trigger. Used only in [MySQL](https://dev.mysql.com/doc/refman/5.7/en/timestamp-initialization.html).
* `nullable: boolean` - Determines whether the column can become `NULL` or always has to be `NOT NULL`. By default the column is `nullable: false`.
* `update: boolean` - Indicates if the column value is updated by "save" operations. If false, you'll be able to write this value only the first time you insert the object. Default value is `true`.
* `insert: boolean` - Indicates if the column value is set the first time you insert the object. Default value is `true`.
* `select: boolean` - Defines whether or not to hide this column by default when making queries. When set to `false`, the column data will not show with a standard query. By default the column is `select: true`.
* `default: string` - Adds a database-level column `DEFAULT` value.
* `primary: boolean` - Marks the column as primary. Same as using `@PrimaryColumn`.
* `unique: boolean` - Marks the column as unique (creates a unique constraint). Default value is `false`.
* `comment: string` - Database column comment. Not supported by all database types.
* `precision: number` - The precision for a decimal (exact numeric) column (applies only to decimal columns), which is the maximum number of digits that are stored for the values. Used in some column types.
* `scale: number` - The scale for a decimal (exact numeric) column (applies only to decimal columns), which represents the number of digits to the right of the decimal point and must not be greater than the precision. Used in some column types.
* `zerofill: boolean` - Puts the `ZEROFILL` attribute on a numeric column. Used only in MySQL. If `true`, MySQL automatically adds the `UNSIGNED` attribute to this column. *Deprecated* in newer MySQL versions, will be removed from TypeORM in an upcoming version. Use a character column and the `LPAD` function as suggested by MySQL.
* `unsigned: boolean` - Puts the `UNSIGNED` attribute on a numeric column. Used only in MySQL.
* `charset: string` - Defines the column character set. Not supported by all database types.
* `collation: string` - Defines the column collation.
* `enum: string[]|AnyEnum` - Used in the `enum` column type to specify the list of allowed enum values. You can specify an array of values or an enum class.
* `enumName: string` - A name for the generated enum type. If not specified, TypeORM will generate an enum type from the entity and column names - so it's necessary if you intend to use the same enum type in different tables.
* `primaryKeyConstraintName: string` - A name for the primary key constraint. If not specified, then the constraint name is generated from the table name and the names of the involved columns.
* `asExpression: string` - Generated column expression. Used only in [MySQL](https://dev.mysql.com/doc/refman/5.7/en/create-table-generated-columns.html) and [Postgres](https://www.postgresql.org/docs/12/ddl-generated-columns.html).
* `generatedType: "VIRTUAL"|"STORED"` - Generated column type. Used only in [MySQL](https://dev.mysql.com/doc/refman/5.7/en/create-table-generated-columns.html) and [Postgres (only "STORED")](https://www.postgresql.org/docs/12/ddl-generated-columns.html).
* `hstoreType: "object"|"string"` - Return type of an `HSTORE` column. Returns the value as a string or as an object. Used only in [Postgres](https://www.postgresql.org/docs/9.6/static/hstore.html).
* `array: boolean` - Used for postgres and cockroachdb column types which can be arrays (for example `int[]`).
* `transformer: ValueTransformer|ValueTransformer[]` - Specifies a value transformer (or array of value transformers) that is to be used to (un)marshal this column when reading from or writing to the database. In case of an array, the value transformers will be applied in the natural order from entityValue to databaseValue, and in reverse order from databaseValue to entityValue.
* `spatialFeatureType: string` - Optional feature type (`Point`, `Polygon`, `LineString`, `Geometry`) used as a constraint on a spatial column. If not specified, it will behave as though `Geometry` was provided. Used only in PostgreSQL and CockroachDB.
* `srid: number` - Optional [Spatial Reference ID](https://postgis.net/docs/using_postgis_dbmanagement.html#spatial_ref_sys) used as a constraint on a spatial column. If not specified, it will default to `0`. Standard geographic coordinates (latitude/longitude in the WGS84 datum) correspond to [EPSG 4326](http://spatialreference.org/ref/epsg/wgs-84/). Used only in PostgreSQL and CockroachDB.

Learn more about [entity columns](https://typeorm.io/docs/entity/entities.md#entity-columns).

#### `@PrimaryColumn`[​](#primarycolumn "Direct link to primarycolumn")

Marks a property in your entity as a table primary column. Same as the `@Column` decorator but sets its `primary` option to true. Example:

```
@Entity()
export class User {
    @PrimaryColumn()
    id: number
}
```

`@PrimaryColumn()` supports a custom primary key constraint name:

```
@Entity()
export class User {
    @PrimaryColumn({ primaryKeyConstraintName: "pk_user_id" })
    id: number
}
```

> Note: when using `primaryKeyConstraintName` with multiple primary keys, the constraint name must be the same for all primary columns.

Learn more about [entity columns](https://typeorm.io/docs/entity/entities.md#entity-columns).

#### `@PrimaryGeneratedColumn`[​](#primarygeneratedcolumn "Direct link to primarygeneratedcolumn")

Marks a property in your entity as a table-generated primary column. The column it creates is primary and its value is auto-generated. Example:

```
@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number
}
```

`@PrimaryGeneratedColumn()` supports a custom primary key constraint name:

```
@Entity()
export class User {
    @PrimaryGeneratedColumn({ primaryKeyConstraintName: "pk_user_id" })
    id: number
}
```

There are four generation strategies:

* `increment` - uses AUTO\_INCREMENT / SERIAL / SEQUENCE (depending on database type) to generate an incremental number.
* `identity` - only for [PostgreSQL 10+](https://www.postgresql.org/docs/13/sql-createtable.html). Postgres versions 10 and above support the SQL-compliant **IDENTITY** column. When marking the generation strategy as `identity`, the column will be produced using `GENERATED [ALWAYS|BY DEFAULT] AS IDENTITY`.
* `uuid` - generates a unique `uuid` string.
* `rowid` - only for [CockroachDB](https://www.cockroachlabs.com/docs/stable/serial.html). Value is automatically generated using the `unique_rowid()` function. This produces a 64-bit integer from the current timestamp and the ID of the node executing the `INSERT` or `UPSERT` operation.

> Note: a property with a `rowid` generation strategy must be a `string` data type

The default generation strategy is `increment`. To change it to another strategy, simply pass it as the first argument to the decorator:

```
@Entity()
export class User {
    @PrimaryGeneratedColumn("uuid")
    id: string
}
```

Learn more about [entity columns](https://typeorm.io/docs/entity/entities.md#entity-columns).

#### `@ObjectIdColumn`[​](#objectidcolumn "Direct link to objectidcolumn")

Marks a property in your entity as an ObjectId. This decorator is only used in MongoDB. Every entity in MongoDB must have an ObjectId column. Example:

```
@Entity()
export class User {
    @ObjectIdColumn()
    id: ObjectId
}
```

Learn more about [MongoDB](https://typeorm.io/docs/drivers/mongodb.md).

#### `@CreateDateColumn`[​](#createdatecolumn "Direct link to createdatecolumn")

Special column that is automatically set to the entity's insertion time. You don't need to write a value into this column - it will be automatically set. Example:

```
@Entity()
export class User {
    @CreateDateColumn()
    createdDate: Date
}
```

#### `@UpdateDateColumn`[​](#updatedatecolumn "Direct link to updatedatecolumn")

Special column that is automatically set to the entity's update time each time you call `save` from the entity manager or repository. You don't need to write a value into this column - it will be automatically set.
This column is also automatically updated during `upsert` operations when an update occurs due to a conflict. ``` @Entity() export class User { @UpdateDateColumn() updatedDate: Date } ``` #### `@DeleteDateColumn`[​](#deletedatecolumn "Direct link to deletedatecolumn") Special column that is automatically set to the entity's delete time each time you call the soft-delete method of the entity manager or repository. You don't need to set this column - it will be automatically set. TypeORM's own soft-delete functionality utilizes global scopes to only pull "non-deleted" entities from the database. If the `@DeleteDateColumn` is set, the default scope will be "non-deleted". ``` @Entity() export class User { @DeleteDateColumn() deletedDate: Date } ``` #### `@VersionColumn`[​](#versioncolumn "Direct link to versioncolumn") Special column that is automatically set to the entity's version (an incremental number) each time you call `save` from the entity manager or repository. You don't need to write a value into this column - it will be automatically set. This column is also automatically updated during `upsert` operations when an update occurs due to a conflict. ``` @Entity() export class User { @VersionColumn() version: number } ``` #### `@Generated`[​](#generated "Direct link to generated") Marks a column as a generated value. For example: ``` @Entity() export class User { @Column() @Generated("uuid") uuid: string } ``` The value will be generated only once, before inserting the entity into the database. #### `@VirtualColumn`[​](#virtualcolumn "Direct link to virtualcolumn") Special column that is never saved to the database and thus acts as a readonly property. Each time you call `find` or `findOne` from the entity manager, the value is recalculated based on the query function that was provided in the `@VirtualColumn` decorator. The alias argument passed to the query references the exact entity alias of the generated query behind the scenes. 
``` @Entity({ name: "companies", alias: "COMP" }) export class Company extends BaseEntity { @PrimaryColumn("varchar", { length: 50 }) name: string @VirtualColumn({ query: (alias) => `SELECT COUNT("name") FROM "employees" WHERE "companyName" = ${alias}.name`, }) totalEmployeesCount: number @OneToMany((type) => Employee, (employee) => employee.company) employees: Employee[] } @Entity({ name: "employees" }) export class Employee extends BaseEntity { @PrimaryColumn("varchar", { length: 50 }) name: string @ManyToOne((type) => Company, (company) => company.employees) company: Company } ``` ## Relation decorators[​](#relation-decorators "Direct link to Relation decorators") #### `@OneToOne`[​](#onetoone "Direct link to onetoone") One-to-one is a relation where A contains only one instance of B, and B contains only one instance of A. Let's take for example `User` and `Profile` entities. User can have only a single profile, and a single profile is owned by only a single user. Example: ``` import { Entity, OneToOne, JoinColumn } from "typeorm" import { Profile } from "./Profile" @Entity() export class User { @OneToOne((type) => Profile, (profile) => profile.user) @JoinColumn() profile: Profile } ``` Learn more about [one-to-one relations](https://typeorm.io/docs/relations/one-to-one-relations.md). #### `@ManyToOne`[​](#manytoone "Direct link to manytoone") Many-to-one / one-to-many is a relation where A contains multiple instances of B, but B contains only one instance of A. Let's take for example `User` and `Photo` entities. User can have multiple photos, but each photo is owned by only a single user. 
Example: ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToOne } from "typeorm" import { User } from "./User" @Entity() export class Photo { @PrimaryGeneratedColumn() id: number @Column() url: string @ManyToOne((type) => User, (user) => user.photos) user: User } ``` Learn more about [many-to-one / one-to-many relations](https://typeorm.io/docs/relations/many-to-one-one-to-many-relations.md). #### `@OneToMany`[​](#onetomany "Direct link to onetomany") Many-to-one / one-to-many is a relation where A contains multiple instances of B, but B contains only one instance of A. Let's take for example `User` and `Photo` entities. User can have multiple photos, but each photo is owned by only a single user. Example: ``` import { Entity, PrimaryGeneratedColumn, Column, OneToMany } from "typeorm" import { Photo } from "./Photo" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() name: string @OneToMany((type) => Photo, (photo) => photo.user) photos: Photo[] } ``` Learn more about [many-to-one / one-to-many relations](https://typeorm.io/docs/relations/many-to-one-one-to-many-relations.md). #### `@ManyToMany`[​](#manytomany "Direct link to manytomany") Many-to-many is a relation where A contains multiple instances of B, and B contains multiple instances of A. Let's take for example `Question` and `Category` entities. Question can have multiple categories, and each category can have multiple questions. Example: ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToMany, JoinTable, } from "typeorm" import { Category } from "./Category" @Entity() export class Question { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() text: string @ManyToMany((type) => Category) @JoinTable() categories: Category[] } ``` Learn more about [many-to-many relations](https://typeorm.io/docs/relations/many-to-many-relations.md). 
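Conceptually, TypeORM resolves a many-to-many relation through a junction table that pairs the ids of both sides. The lookup can be sketched in plain TypeScript with hypothetical in-memory rows (no database or TypeORM API involved; the names `questions`, `categories`, and `questionCategories` are illustrative only):

```typescript
// Hypothetical in-memory rows mirroring a question/category schema.
interface Row {
    id: number
    name: string
}
const categories: Row[] = [
    { id: 10, name: "databases" },
    { id: 11, name: "performance" },
]

// The junction table holds one row per (question, category) pair.
const questionCategories = [
    { questionId: 1, categoryId: 10 },
    { questionId: 1, categoryId: 11 },
]

// Resolving `question.categories` is a join through the junction rows.
function categoriesOf(questionId: number): Row[] {
    const ids = questionCategories
        .filter((jc) => jc.questionId === questionId)
        .map((jc) => jc.categoryId)
    return categories.filter((c) => ids.includes(c.id))
}
```

In the real relation, TypeORM generates and queries this junction table for you; the sketch only shows the shape of the data it manages.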
#### `@JoinColumn`[​](#joincolumn "Direct link to joincolumn") Defines which side of the relation contains the join column with a foreign key and allows you to customize the join column name, referenced column name and foreign key name. Example: ``` @Entity() export class Post { @ManyToOne((type) => Category) @JoinColumn({ name: "cat_id", referencedColumnName: "name", foreignKeyConstraintName: "fk_cat_id", }) category: Category } ``` #### `@JoinTable`[​](#jointable "Direct link to jointable") Used for `many-to-many` relations and describes join columns of the "junction" table. A junction table is a special, separate table created automatically by TypeORM, with columns that reference the related entities. You can change the name of the generated "junction" table, the column names inside the junction table and the columns they reference with the `joinColumn` and `inverseJoinColumn` options, and the names of the created foreign keys. You can also set the `synchronize` parameter to `false` to skip schema updates (the same way as in `@Entity`). Example: ``` @Entity() export class Post { @ManyToMany((type) => Category) @JoinTable({ name: "question_categories", joinColumn: { name: "question", referencedColumnName: "id", foreignKeyConstraintName: "fk_question_categories_questionId", }, inverseJoinColumn: { name: "category", referencedColumnName: "id", foreignKeyConstraintName: "fk_question_categories_categoryId", }, synchronize: false, }) categories: Category[] } ``` If the destination table has composite primary keys, then an array of properties must be sent to the `@JoinTable` decorator. #### `@RelationId`[​](#relationid "Direct link to relationid") Loads the id (or ids) of specific relations into properties. For example, if you have a many-to-one `category` in your `Post` entity, you can load the category's id into a new property marked with `@RelationId`. 
Example: ``` @Entity() export class Post { @ManyToOne((type) => Category) category: Category @RelationId((post: Post) => post.category) // you need to specify the target relation categoryId: number } ``` This functionality works for all kinds of relations, including `many-to-many`: ``` @Entity() export class Post { @ManyToMany((type) => Category) categories: Category[] @RelationId((post: Post) => post.categories) categoryIds: number[] } ``` The relation id is used only for representation. The underlying relation is not added/removed/changed when changing the value. ## Subscriber and listener decorators[​](#subscriber-and-listener-decorators "Direct link to Subscriber and listener decorators") #### `@AfterLoad`[​](#afterload "Direct link to afterload") You can define a method with any name in the entity and mark it with `@AfterLoad` and TypeORM will call it each time the entity is loaded using `QueryBuilder` or repository/manager find methods. Example: ``` @Entity() export class Post { @AfterLoad() updateCounters() { if (this.likesCount === undefined) this.likesCount = 0 } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@BeforeInsert`[​](#beforeinsert "Direct link to beforeinsert") You can define a method with any name in the entity and mark it with `@BeforeInsert` and TypeORM will call it before the entity is inserted using repository/manager `save`. Example: ``` @Entity() export class Post { @BeforeInsert() updateDates() { this.createdDate = new Date() } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@AfterInsert`[​](#afterinsert "Direct link to afterinsert") You can define a method with any name in the entity and mark it with `@AfterInsert` and TypeORM will call it after the entity is inserted using repository/manager `save`. 
Example: ``` @Entity() export class Post { @AfterInsert() resetCounters() { this.counters = 0 } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@BeforeUpdate`[​](#beforeupdate "Direct link to beforeupdate") You can define a method with any name in the entity and mark it with `@BeforeUpdate` and TypeORM will call it before an existing entity is updated using repository/manager `save`. Example: ``` @Entity() export class Post { @BeforeUpdate() updateDates() { this.updatedDate = new Date() } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@AfterUpdate`[​](#afterupdate "Direct link to afterupdate") You can define a method with any name in the entity and mark it with `@AfterUpdate` and TypeORM will call it after an existing entity is updated using repository/manager `save`. Example: ``` @Entity() export class Post { @AfterUpdate() updateCounters() { this.counter = 0 } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@BeforeRemove`[​](#beforeremove "Direct link to beforeremove") You can define a method with any name in the entity and mark it with `@BeforeRemove` and TypeORM will call it before an entity is removed using repository/manager `remove`. Example: ``` @Entity() export class Post { @BeforeRemove() updateStatus() { this.status = "removed" } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@AfterRemove`[​](#afterremove "Direct link to afterremove") You can define a method with any name in the entity and mark it with `@AfterRemove` and TypeORM will call it after the entity is removed using repository/manager `remove`. 
Example: ``` @Entity() export class Post { @AfterRemove() updateStatus() { this.status = "removed" } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@BeforeSoftRemove`[​](#beforesoftremove "Direct link to beforesoftremove") You can define a method with any name in the entity and mark it with `@BeforeSoftRemove` and TypeORM will call it before an entity is soft removed using repository/manager `softRemove`. Example: ``` @Entity() export class Post { @BeforeSoftRemove() updateStatus() { this.status = "soft-removed" } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@AfterSoftRemove`[​](#aftersoftremove "Direct link to aftersoftremove") You can define a method with any name in the entity and mark it with `@AfterSoftRemove` and TypeORM will call it after the entity is soft removed using repository/manager `softRemove`. Example: ``` @Entity() export class Post { @AfterSoftRemove() updateStatus() { this.status = "soft-removed" } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@BeforeRecover`[​](#beforerecover "Direct link to beforerecover") You can define a method with any name in the entity and mark it with `@BeforeRecover` and TypeORM will call it before an entity is recovered using repository/manager `recover`. Example: ``` @Entity() export class Post { @BeforeRecover() updateStatus() { this.status = "recovered" } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@AfterRecover`[​](#afterrecover "Direct link to afterrecover") You can define a method with any name in the entity and mark it with `@AfterRecover` and TypeORM will call it after the entity is recovered using repository/manager `recover`. 
Example: ``` @Entity() export class Post { @AfterRecover() updateStatus() { this.status = "recovered" } } ``` Learn more about [listeners](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). #### `@EventSubscriber`[​](#eventsubscriber "Direct link to eventsubscriber") Marks a class as an event subscriber which can listen to specific entity events or any entity's events. Events are fired using `QueryBuilder` and repository/manager methods. Example: ``` @EventSubscriber() export class PostSubscriber implements EntitySubscriberInterface<Post> { /** * Indicates that this subscriber only listens to Post events. */ listenTo() { return Post } /** * Called before post insertion. */ beforeInsert(event: InsertEvent<Post>) { console.log(`BEFORE POST INSERTED: `, event.entity) } } ``` You can implement any method from `EntitySubscriberInterface`. To listen to any entity, you just omit the `listenTo` method and use `any`: ``` @EventSubscriber() export class PostSubscriber implements EntitySubscriberInterface { /** * Called before entity insertion. */ beforeInsert(event: InsertEvent<any>) { console.log(`BEFORE ENTITY INSERTED: `, event.entity) } } ``` Learn more about [subscribers](https://typeorm.io/docs/advanced-topics/listeners-and-subscribers.md). ## Other decorators[​](#other-decorators "Direct link to Other decorators") #### `@Index`[​](#index "Direct link to index") This decorator allows you to create a database index for a specific column or columns. It also allows you to mark a column or columns as unique. This decorator can be applied to columns or an entity itself. Use it on a column when an index on a single column is needed and use it on the entity when a single index on multiple columns is required. 
Examples: ``` @Entity() export class User { @Index() @Column() firstName: string @Index({ unique: true }) @Column() lastName: string } ``` ``` @Entity() @Index(["firstName", "lastName"]) @Index(["lastName", "middleName"]) @Index(["firstName", "lastName", "middleName"], { unique: true }) export class User { @Column() firstName: string @Column() lastName: string @Column() middleName: string } ``` Learn more about [indices](https://typeorm.io/docs/advanced-topics/indices.md). #### `@Unique`[​](#unique "Direct link to unique") This decorator allows you to create a database unique constraint for a specific column or columns. This decorator can be applied only to an entity itself. You must specify the entity field names (not database column names) as arguments. Examples: ``` @Entity() @Unique(["firstName"]) @Unique(["lastName", "middleName"]) @Unique("UQ_NAMES", ["firstName", "lastName", "middleName"]) export class User { @Column({ name: "first_name" }) firstName: string @Column({ name: "last_name" }) lastName: string @Column({ name: "middle_name" }) middleName: string } ``` > Note: MySQL stores unique constraints as unique indices #### `@Check`[​](#check "Direct link to check") This decorator allows you to create a database check constraint for a specific column or columns. This decorator can be applied only to an entity itself. Examples: ``` @Entity() @Check(`"firstName" <> 'John' AND "lastName" <> 'Doe'`) @Check(`"age" > 18`) export class User { @Column() firstName: string @Column() lastName: string @Column() age: number } ``` > Note: MySQL does not support check constraints. #### `@Exclusion`[​](#exclusion "Direct link to exclusion") This decorator allows you to create a database exclusion constraint for a specific column or columns. This decorator can be applied only to an entity itself. 
Examples: ``` @Entity() @Exclusion(`USING gist ("room" WITH =, tsrange("from", "to") WITH &&)`) export class RoomBooking { @Column() room: string @Column() from: Date @Column() to: Date } ``` > Note: Only PostgreSQL supports exclusion constraints. #### `@ForeignKey`[​](#foreignkey "Direct link to foreignkey") This decorator allows you to create a database foreign key for a specific column or columns. This decorator can be applied to columns or an entity itself. Use it on a column when a foreign key on a single column is needed and use it on the entity when a single foreign key on multiple columns is required. > Note: **Do not use this decorator with relations.** Foreign keys are created automatically for relations which you define using [Relation decorators](#relation-decorators) (`@ManyToOne`, `@OneToOne`, etc). The `@ForeignKey` decorator should only be used to create foreign keys in the database when you don't want to define an equivalent entity relationship. Examples: ``` @Entity("orders") @ForeignKey(() => City, ["cityId", "countryCode"], ["id", "countryCode"]) export class Order { @PrimaryColumn() id: number @Column("uuid", { name: "user_uuid" }) @ForeignKey("User", "uuid", { name: "FK_user_uuid" }) userUuid: string @Column({ length: 2 }) @ForeignKey(() => Country, "code") countryCode: string @Column() @ForeignKey("cities") cityId: number @Column() dispatchCountryCode: string @ManyToOne(() => Country) dispatchCountry: Country @Column() dispatchCityId: number @ManyToOne(() => City) dispatchCity: City } ``` ``` @Entity("cities") @Unique(["id", "countryCode"]) export class City { @PrimaryColumn() id: number @Column({ length: 2 }) @ForeignKey("countries", { onDelete: "CASCADE", onUpdate: "CASCADE" }) countryCode: string @Column() name: string } ``` ``` @Entity("countries") export class Country { @PrimaryColumn({ length: 2 }) code: string @Column() name: string } ``` ``` @Entity("users") export class User { @PrimaryColumn({ name: "ref" }) id: number 
@Column("uuid", { unique: true }) uuid: string } ``` --- # FAQ ## How do I update a database schema?[​](#how-do-i-update-a-database-schema "Direct link to How do I update a database schema?") One of the main responsibilities of TypeORM is to keep your database tables in sync with your entities. There are two ways that help you achieve this: * Use `synchronize: true` in data source options: ``` import { DataSource } from "typeorm" const myDataSource = new DataSource({ // ... synchronize: true, }) ``` This option automatically syncs your database tables with the given entities each time you run this code. It is perfect during development, but you may not want it enabled in production. * Use command line tools and run schema sync manually in the command line: ``` typeorm schema:sync ``` This command will execute schema synchronization. Schema sync is extremely fast. If you are considering disabling the `synchronize` option during development because of performance issues, first check how fast it actually is. ## How do I change a column name in the database?[​](#how-do-i-change-a-column-name-in-the-database "Direct link to How do I change a column name in the database?") By default, column names are generated from property names. You can simply change it by specifying a `name` column option: ``` @Column({ name: "is_active" }) isActive: boolean; ``` ## How can I set the default value to some function, for example `NOW()`?[​](#how-can-i-set-the-default-value-to-some-function-for-example-now "Direct link to how-can-i-set-the-default-value-to-some-function-for-example-now") The `default` column option supports a function. If you are passing a function which returns a string, it will use that string as a default value without escaping it. 
For example: ``` @Column({ default: () => "NOW()" }) date: Date; ``` ## How to do validation?[​](#how-to-do-validation "Direct link to How to do validation?") Validation is not part of TypeORM because validation is a separate process not really related to what TypeORM does. If you want to use validation, use [class-validator](https://github.com/pleerock/class-validator) - it works perfectly with TypeORM. ## What does "owner side" in a relation mean, or why do we need to use `@JoinColumn` and `@JoinTable`?[​](#what-does-owner-side-in-a-relations-mean-or-why-we-need-to-use-joincolumn-and-jointable "Direct link to what-does-owner-side-in-a-relations-mean-or-why-we-need-to-use-joincolumn-and-jointable") Let's start with the `one-to-one` relation. Let's say we have two entities: `User` and `Photo`: ``` @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() name: string @OneToOne() photo: Photo } ``` ``` @Entity() export class Photo { @PrimaryGeneratedColumn() id: number @Column() url: string @OneToOne() user: User } ``` This example does not have a `@JoinColumn`, which is incorrect. Why? Because to make a real relation, we need to create a column in the database. We need to create a column `userId` in `photo` or `photoId` in `user`. But which column should be created - `userId` or `photoId`? TypeORM cannot decide for you. To make a decision, you must use `@JoinColumn` on one of the sides. If you put `@JoinColumn` in `Photo` then a column called `userId` will be created in the `photo` table. If you put `@JoinColumn` in `User` then a column called `photoId` will be created in the `user` table. The side with `@JoinColumn` will be called the "owner side of the relationship". The other side of the relation, without `@JoinColumn`, is called the "inverse (non-owner) side of the relationship". It is the same in a `@ManyToMany` relation. You use `@JoinTable` to show the owner side of the relation. 
In `@ManyToOne` or `@OneToMany` relations, `@JoinColumn` is not necessary because the two sides are unambiguous: the table of the entity where you put the `@ManyToOne` decorator will have the relational column. `@JoinColumn` and `@JoinTable` decorators can also be used to specify additional join column / junction table settings, like the join column name or junction table name. ## How do I add extra columns into many-to-many (junction) table?[​](#how-do-i-add-extra-columns-into-many-to-many-junction-table "Direct link to How do I add extra columns into many-to-many (junction) table?") It's not possible to add extra columns into a table created by a many-to-many relation. You'll need to create a separate entity and bind it using two many-to-one relations with the target entities (the effect will be the same as creating a many-to-many table), and add the extra columns there. You can read more about this in [Many-to-Many relations](https://typeorm.io/docs/relations/many-to-many-relations.md#many-to-many-relations-with-custom-properties). ## How to handle outDir TypeScript compiler option?[​](#how-to-handle-outdir-typescript-compiler-option "Direct link to How to handle outDir TypeScript compiler option?") When you are using the `outDir` compiler option, don't forget to copy the assets and resources your app is using into the output directory. Otherwise, make sure to set up correct paths to those assets. One important thing to know is that when you remove or move entities, the old entities are left untouched inside the output directory. For example, if you create a `Post` entity and rename it to `Blog`, you no longer have `Post.ts` in your project. However, `Post.js` is left inside the output directory. Now, when TypeORM reads entities from your output directory, it sees two entities - `Post` and `Blog`. This may be a source of bugs. That's why when you remove or move entities with `outDir` enabled, it's strongly recommended to remove your output directory and recompile the project. 
## How to use TypeORM with ts-node?[​](#how-to-use-typeorm-with-ts-node "Direct link to How to use TypeORM with ts-node?") You can avoid compiling files on each run by using [ts-node](https://github.com/TypeStrong/ts-node). If you are using ts-node, you can specify `ts` entities inside data source options: ``` { entities: ["src/entity/*.ts"], subscribers: ["src/subscriber/*.ts"] } ``` Also, if you are compiling js files into the same folder where your typescript files are, make sure to use the `outDir` compiler option to prevent [this issue](https://github.com/TypeStrong/ts-node/issues/432). Also, if you want to use the ts-node CLI, you can execute TypeORM the following way: ``` npx typeorm-ts-node-commonjs schema:sync ``` For ESM projects use this instead: ``` npx typeorm-ts-node-esm schema:sync ``` ## How to use Webpack for the backend?[​](#how-to-use-webpack-for-the-backend "Direct link to How to use Webpack for the backend?") Webpack produces warnings due to what it views as missing require statements -- require statements for all drivers supported by TypeORM. To suppress these warnings for unused drivers, you will need to edit your webpack config file. ``` const FilterWarningsPlugin = require('webpack-filter-warnings-plugin'); module.exports = { ... plugins: [ //ignore the drivers you don't want. This is the complete list of all drivers -- remove the suppressions for drivers you want to use. new FilterWarningsPlugin({ exclude: [/mongodb/, /mssql/, /mysql/, /mysql2/, /oracledb/, /pg/, /pg-native/, /pg-query-stream/, /react-native-sqlite-storage/, /redis/, /sqlite3/, /sql.js/, /typeorm-aurora-data-api-driver/] }) ] }; ``` ### Bundling Migration Files[​](#bundling-migration-files "Direct link to Bundling Migration Files") By default, Webpack tries to bundle everything into one file. This can be problematic when your project has migration files which are meant to be executed after the bundled code is deployed to production. 
To make sure all your [migrations](https://typeorm.io/docs/migrations/why.md) can be recognized and executed by TypeORM, you may need to use "Object Syntax" for the `entry` configuration for the migration files only. ``` const glob = require("glob") const path = require("path") module.exports = { // ... your webpack configurations here... // Dynamically generate a `{ [name]: sourceFileName }` map for the `entry` option // change `src/db/migrations` to the relative path to your migration folder entry: glob .sync(path.resolve("src/db/migrations/*.ts")) .reduce((entries, filename) => { const migrationName = path.basename(filename, ".ts") return Object.assign({}, entries, { [migrationName]: filename, }) }, {}), resolve: { // assuming all your migration files are written in TypeScript extensions: [".ts"], }, output: { // change `path` to where you want to put transpiled migration files. path: __dirname + "/dist/db/migrations", // this is important - we want UMD (Universal Module Definition) for migration files. libraryTarget: "umd", filename: "[name].js", }, } ``` Also, since Webpack 4, when using `mode: 'production'`, files are optimized by default, which includes mangling your code in order to minimize file sizes. This breaks the [migrations](https://typeorm.io/docs/migrations/why.md) because TypeORM relies on their names to determine which have already been executed. You may disable minimization completely by adding: ``` module.exports = { // ... other Webpack configurations here optimization: { minimize: false, }, } ``` Alternatively, if you are using the `UglifyJsPlugin`, you can tell it to not change class or function names like so: ``` const UglifyJsPlugin = require("uglifyjs-webpack-plugin") module.exports = { // ... 
other Webpack configurations here optimization: { minimizer: [ new UglifyJsPlugin({ uglifyOptions: { keep_classnames: true, keep_fnames: true, }, }), ], }, } ``` Lastly, make sure the transpiled migration files are included in your data source options: ``` // TypeORM Configurations module.exports = { // ... migrations: [ // this is the relative path to the transpiled migration files in production "db/migrations/**/*.js", // your source migration files, used in development mode "src/db/migrations/**/*.ts", ], } ``` ## How to use TypeORM in ESM projects?[​](#how-to-use-typeorm-in-esm-projects "Direct link to How to use TypeORM in ESM projects?") Make sure to add `"type": "module"` in the `package.json` of your project so TypeORM will know to use `import( ... )` on files. To avoid circular dependency import issues use the `Relation` wrapper type for relation type definitions in entities: ``` @Entity() export class User { @OneToOne(() => Profile, (profile) => profile.user) profile: Relation<Profile> } ``` Doing this prevents the type of the property from being saved in the transpiled code in the property metadata, preventing circular dependency issues. Since the type of the column is already defined using the `@OneToOne` decorator, there's no need for the additional type metadata saved by TypeScript. > Important: Do not use `Relation` on non-relation column types --- # Support ## Found a bug or want to propose a new feature?[​](#found-a-bug-or-want-to-propose-a-new-feature "Direct link to Found a bug or want to propose a new feature?") If you found a bug or an issue, or you just want to propose a new feature, create [an issue on GitHub](https://github.com/typeorm/typeorm/issues). ## Have a question?[​](#have-a-question "Direct link to Have a question?") If you have a question, you can ask it on [StackOverflow](https://stackoverflow.com/questions/tagged/typeorm) or other community support channels. 
## Want community support?[​](#want-community-support "Direct link to Want community support?") If you want community support, or simply want to chat with friendly TypeORM enthusiasts and users, you can do it on [Discord](https://discord.gg/cC9hkmUgNa). ## Want professional commercial support?[​](#want-professional-commercial-support "Direct link to Want professional commercial support?") The TypeORM core team is always ready to provide professional commercial support. We are ready to work with any team in any part of the world. Feel free to [contact us](mailto:support@typeorm.io). --- # Supported platforms ## NodeJS[​](#nodejs "Direct link to NodeJS") TypeORM is compatible with Node.js 16+ and currently each commit is tested on Node.js 18 and 20. ## Browser[​](#browser "Direct link to Browser") You can use [sql.js](https://sql.js.org) in the browser. ### Webpack configuration[​](#webpack-configuration "Direct link to Webpack configuration") In the `browser` folder the package also includes a version compiled as an ES2015 module. If you want to use a different loader, this is the place to start. Prior to TypeORM 0.1.7, the package was set up in a way that loaders like webpack would automatically use the `browser` folder. With 0.1.7 this was dropped to support Webpack usage in Node.js projects. This means that the `NormalModuleReplacementPlugin` has to be used to ensure that the correct version is loaded for browser projects. The configuration for this plugin in your webpack config file looks like this: ``` plugins: [ ..., // any existing plugins that you already have new webpack.NormalModuleReplacementPlugin(/typeorm$/, function (result) { result.request = result.request.replace(/typeorm/, "typeorm/browser"); }), new webpack.ProvidePlugin({ 'window.SQL': 'sql.js/dist/sql-wasm.js' }) ] ``` and make sure the [sql-wasm.wasm file](https://github.com/sql-js/sql.js/blob/master/README.md#downloadingusing) exists in your public path. 
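The `NormalModuleReplacementPlugin` callback above only rewrites the module request string. Its effect can be illustrated in plain TypeScript, without webpack (a conceptual sketch; `toBrowserRequest` is a hypothetical name, not a webpack or TypeORM API):

```typescript
// Mirrors the plugin configuration above: only requests ending in "typeorm"
// (the plugin's /typeorm$/ test) are redirected to the browser build.
function toBrowserRequest(request: string): string {
    return /typeorm$/.test(request)
        ? request.replace(/typeorm/, "typeorm/browser")
        : request
}
```

So an `import ... from "typeorm"` in browser code resolves to `typeorm/browser`, while unrelated requests are left untouched.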
### Example of configuration[​](#example-of-configuration "Direct link to Example of configuration") ``` new DataSource({ type: "sqljs", entities: [Photo], synchronize: true, }) ``` ### Don't forget to include reflect-metadata[​](#dont-forget-to-include-reflect-metadata "Direct link to Don't forget to include reflect-metadata") In your main html page, you need to include reflect-metadata: ``` <script src="https://unpkg.com/reflect-metadata/Reflect.js"></script> ``` ## Capacitor[​](#capacitor "Direct link to Capacitor") See [Using TypeORM with the Capacitor driver type](https://github.com/capacitor-community/sqlite/blob/master/docs/TypeORM-Usage-From-5.6.0.md) in the official Capacitor docs. ## Cordova / Ionic apps[​](#cordova--ionic-apps "Direct link to Cordova / Ionic apps") TypeORM is able to run on Cordova/Ionic apps using the [cordova-sqlite-storage](https://github.com/litehelpers/Cordova-sqlite-storage) plugin. You have the option to choose between module loaders just as in the browser package. For an example of how to use TypeORM in Cordova, see [typeorm/cordova-example](https://github.com/typeorm/cordova-example), and for Ionic, see [typeorm/ionic-example](https://github.com/typeorm/ionic-example). **Important**: For use with Ionic, a custom webpack config file is needed! Please check out the example to see the needed changes. Note that there is currently no support for transactions when using the [cordova-sqlite-storage](https://github.com/litehelpers/Cordova-sqlite-storage) plugin. See [Cordova SQLite limitations](https://github.com/storesafe/cordova-sqlite-storage#other-limitations) for more information. ## Expo[​](#expo "Direct link to Expo") TypeORM is able to run on Expo apps using the [Expo SQLite API](https://docs.expo.io/versions/latest/sdk/sqlite/). For an example of how to use TypeORM in Expo, see [typeorm/expo-example](https://github.com/typeorm/expo-example). ## NativeScript[​](#nativescript "Direct link to NativeScript") 1. `tns install webpack` (read below why webpack is required) 2. `tns plugin add nativescript-sqlite` 3. 
Create a DataSource in your app's entry point:

```
import driver from "nativescript-sqlite"

const dataSource = new DataSource({
    database: "test.db",
    type: "nativescript",
    driver,
    entities: [
        Todo, // ... whatever entities you have
    ],
    logging: true,
})
```

Note: this works only with NativeScript 4.x and above.

*When using TypeORM with NativeScript, **using webpack is compulsory**. The `typeorm/browser` package is raw ES7 code with `import`/`export`, which will **NOT** run as it is. It has to be bundled. Please use the `tns run --bundle` method.*

Check out the example [here](https://github.com/championswimmer/nativescript-vue-typeorm-sample)!

## React Native

TypeORM is able to run in React Native apps using the [react-native-sqlite-storage](https://github.com/andpor/react-native-sqlite-storage) plugin. For an example see [typeorm/react-native-example](https://github.com/typeorm/react-native-example).

---

# Query Runner API

To change a database schema through an API, you can use `QueryRunner`.
```
import {
    MigrationInterface,
    QueryRunner,
    Table,
    TableIndex,
    TableColumn,
    TableForeignKey,
} from "typeorm"

export class QuestionRefactoringTIMESTAMP implements MigrationInterface {
    async up(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.createTable(
            new Table({
                name: "question",
                columns: [
                    { name: "id", type: "int", isPrimary: true },
                    { name: "name", type: "varchar" },
                ],
            }),
            true,
        )

        await queryRunner.createIndex(
            "question",
            new TableIndex({
                name: "IDX_QUESTION_NAME",
                columnNames: ["name"],
            }),
        )

        await queryRunner.createTable(
            new Table({
                name: "answer",
                columns: [
                    { name: "id", type: "int", isPrimary: true },
                    { name: "name", type: "varchar" },
                    { name: "created_at", type: "timestamp", default: "now()" },
                ],
            }),
            true,
        )

        await queryRunner.addColumn(
            "answer",
            new TableColumn({ name: "questionId", type: "int" }),
        )

        await queryRunner.createForeignKey(
            "answer",
            new TableForeignKey({
                columnNames: ["questionId"],
                referencedColumnNames: ["id"],
                referencedTableName: "question",
                onDelete: "CASCADE",
            }),
        )
    }

    async down(queryRunner: QueryRunner): Promise<void> {
        const table = await queryRunner.getTable("answer")
        const foreignKey = table.foreignKeys.find(
            (fk) => fk.columnNames.indexOf("questionId") !== -1,
        )
        await queryRunner.dropForeignKey("answer", foreignKey)
        await queryRunner.dropColumn("answer", "questionId")
        await queryRunner.dropTable("answer")
        await queryRunner.dropIndex("question", "IDX_QUESTION_NAME")
        await queryRunner.dropTable("question")
    }
}
```

***

```
getDatabases(): Promise<string[]>
```

Returns all available database names, including system databases.

***

```
getSchemas(database?: string): Promise<string[]>
```

* `database` - if the database parameter is specified, returns the schemas of that database

Returns all available schema names, including system schemas. Useful for SQLServer and Postgres only.

***

```
getTable(tableName: string): Promise<Table | undefined>
```

* `tableName` - name of the table to be loaded

Loads a table by a given name from the database.
***

```
getTables(tableNames: string[]): Promise<Table[]>
```

* `tableNames` - names of the tables to be loaded

Loads tables by the given names from the database.

***

```
hasDatabase(database: string): Promise<boolean>
```

* `database` - name of the database to be checked

Checks if a database with the given name exists.

***

```
hasSchema(schema: string): Promise<boolean>
```

* `schema` - name of the schema to be checked

Checks if a schema with the given name exists. Used only for SqlServer and Postgres.

***

```
hasTable(table: Table | string): Promise<boolean>
```

* `table` - Table object or name

Checks if a table exists.

***

```
hasColumn(table: Table | string, columnName: string): Promise<boolean>
```

* `table` - Table object or name
* `columnName` - name of the column to be checked

Checks if a column exists in the table.

***

```
createDatabase(database: string, ifNotExist?: boolean): Promise<void>
```

* `database` - database name
* `ifNotExist` - skips creation if `true`, otherwise throws an error if the database already exists

Creates a new database.

***

```
dropDatabase(database: string, ifExist?: boolean): Promise<void>
```

* `database` - database name
* `ifExist` - skips deletion if `true`, otherwise throws an error if the database was not found

Drops a database.

***

```
createSchema(schemaPath: string, ifNotExist?: boolean): Promise<void>
```

* `schemaPath` - schema name. For SqlServer it can accept a schema path (e.g. 'dbName.schemaName') as a parameter. If a schema path is passed, it will create the schema in the specified database
* `ifNotExist` - skips creation if `true`, otherwise throws an error if the schema already exists

Creates a new table schema.

***

```
dropSchema(schemaPath: string, ifExist?: boolean, isCascade?: boolean): Promise<void>
```

* `schemaPath` - schema name. For SqlServer it can accept a schema path (e.g. 'dbName.schemaName') as a parameter. If a schema path is passed, it will drop the schema in the specified database
* `ifExist` - skips deletion if `true`, otherwise throws an error if the schema was not found
* `isCascade` - if `true`, automatically drops objects (tables, functions, etc.)
that are contained in the schema. Used only in Postgres.

Drops a table schema.

***

```
createTable(table: Table, ifNotExist?: boolean, createForeignKeys?: boolean, createIndices?: boolean): Promise<void>
```

* `table` - Table object
* `ifNotExist` - skips creation if `true`, otherwise throws an error if the table already exists. Default `false`
* `createForeignKeys` - indicates whether foreign keys will be created on table creation. Default `true`
* `createIndices` - indicates whether indices will be created on table creation. Default `true`

Creates a new table.

***

```
dropTable(table: Table | string, ifExist?: boolean, dropForeignKeys?: boolean, dropIndices?: boolean): Promise<void>
```

* `table` - Table object or table name to be dropped
* `ifExist` - skips dropping if `true`, otherwise throws an error if the table does not exist
* `dropForeignKeys` - indicates whether foreign keys will be dropped on table deletion. Default `true`
* `dropIndices` - indicates whether indices will be dropped on table deletion. Default `true`

Drops a table.

***

```
renameTable(oldTableOrName: Table | string, newTableName: string): Promise<void>
```

* `oldTableOrName` - old Table object or name to be renamed
* `newTableName` - new table name

Renames a table.

***

```
addColumn(table: Table | string, column: TableColumn): Promise<void>
```

* `table` - Table object or name
* `column` - new column

Adds a new column.

***

```
addColumns(table: Table | string, columns: TableColumn[]): Promise<void>
```

* `table` - Table object or name
* `columns` - new columns

Adds new columns.

***

```
renameColumn(table: Table | string, oldColumnOrName: TableColumn | string, newColumnOrName: TableColumn | string): Promise<void>
```

* `table` - Table object or name
* `oldColumnOrName` - old column. Accepts a TableColumn object or a column name
* `newColumnOrName` - new column. Accepts a TableColumn object or a column name

Renames a column.
***

```
changeColumn(table: Table | string, oldColumn: TableColumn | string, newColumn: TableColumn): Promise<void>
```

* `table` - Table object or name
* `oldColumn` - old column. Accepts a TableColumn object or a column name
* `newColumn` - new column. Accepts a TableColumn object

Changes a column in the table.

***

```
changeColumns(table: Table | string, changedColumns: { oldColumn: TableColumn, newColumn: TableColumn }[]): Promise<void>
```

* `table` - Table object or name
* `changedColumns` - array of changed columns:
  * `oldColumn` - old TableColumn object
  * `newColumn` - new TableColumn object

Changes columns in the table.

***

```
dropColumn(table: Table | string, column: TableColumn | string): Promise<void>
```

* `table` - Table object or name
* `column` - TableColumn object or column name to be dropped

Drops a column in the table.

***

```
dropColumns(table: Table | string, columns: TableColumn[] | string[]): Promise<void>
```

* `table` - Table object or name
* `columns` - array of TableColumn objects or column names to be dropped

Drops columns in the table.

***

```
createPrimaryKey(table: Table | string, columnNames: string[]): Promise<void>
```

* `table` - Table object or name
* `columnNames` - array of column names which will be primary

Creates a new primary key.

***

```
updatePrimaryKeys(table: Table | string, columns: TableColumn[]): Promise<void>
```

* `table` - Table object or name
* `columns` - array of TableColumn objects which will be updated

Updates composite primary keys.

***

```
dropPrimaryKey(table: Table | string): Promise<void>
```

* `table` - Table object or name

Drops a primary key.

***

```
createUniqueConstraint(table: Table | string, uniqueConstraint: TableUnique): Promise<void>
```

* `table` - Table object or name
* `uniqueConstraint` - TableUnique object to be created

Creates a new unique constraint.

> Note: does not work for MySQL, because MySQL stores unique constraints as unique indices. Use the `createIndex()` method instead.
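Because MySQL models unique constraints as unique indices, driver-aware migration code has to branch on the driver type. A small sketch of that decision (the `uniqueMethodFor` helper and driver strings are illustrative, not TypeORM API):

```typescript
// Hypothetical helper: pick which QueryRunner method to call for a
// unique constraint, per the MySQL note above.
function uniqueMethodFor(driverType: string): "createIndex" | "createUniqueConstraint" {
    // MySQL/MariaDB store unique constraints as unique indices,
    // so a unique TableIndex must be created instead.
    return driverType === "mysql" || driverType === "mariadb"
        ? "createIndex"
        : "createUniqueConstraint"
}
```

The same branching applies to the plural and `drop*` variants described below.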
***

```
createUniqueConstraints(table: Table | string, uniqueConstraints: TableUnique[]): Promise<void>
```

* `table` - Table object or name
* `uniqueConstraints` - array of TableUnique objects to be created

Creates new unique constraints.

> Note: does not work for MySQL, because MySQL stores unique constraints as unique indices. Use the `createIndices()` method instead.

***

```
dropUniqueConstraint(table: Table | string, uniqueOrName: TableUnique | string): Promise<void>
```

* `table` - Table object or name
* `uniqueOrName` - TableUnique object or unique constraint name to be dropped

Drops a unique constraint.

> Note: does not work for MySQL, because MySQL stores unique constraints as unique indices. Use the `dropIndex()` method instead.

***

```
dropUniqueConstraints(table: Table | string, uniqueConstraints: TableUnique[]): Promise<void>
```

* `table` - Table object or name
* `uniqueConstraints` - array of TableUnique objects to be dropped

Drops unique constraints.

> Note: does not work for MySQL, because MySQL stores unique constraints as unique indices. Use the `dropIndices()` method instead.

***

```
createCheckConstraint(table: Table | string, checkConstraint: TableCheck): Promise<void>
```

* `table` - Table object or name
* `checkConstraint` - TableCheck object

Creates a new check constraint.

> Note: MySQL does not support check constraints.

***

```
createCheckConstraints(table: Table | string, checkConstraints: TableCheck[]): Promise<void>
```

* `table` - Table object or name
* `checkConstraints` - array of TableCheck objects

Creates new check constraints.

> Note: MySQL does not support check constraints.

***

```
dropCheckConstraint(table: Table | string, checkOrName: TableCheck | string): Promise<void>
```

* `table` - Table object or name
* `checkOrName` - TableCheck object or check constraint name

Drops a check constraint.

> Note: MySQL does not support check constraints.
***

```
dropCheckConstraints(table: Table | string, checkConstraints: TableCheck[]): Promise<void>
```

* `table` - Table object or name
* `checkConstraints` - array of TableCheck objects

Drops check constraints.

> Note: MySQL does not support check constraints.

***

```
createForeignKey(table: Table | string, foreignKey: TableForeignKey): Promise<void>
```

* `table` - Table object or name
* `foreignKey` - TableForeignKey object

Creates a new foreign key.

***

```
createForeignKeys(table: Table | string, foreignKeys: TableForeignKey[]): Promise<void>
```

* `table` - Table object or name
* `foreignKeys` - array of TableForeignKey objects

Creates new foreign keys.

***

```
dropForeignKey(table: Table | string, foreignKeyOrName: TableForeignKey | string): Promise<void>
```

* `table` - Table object or name
* `foreignKeyOrName` - TableForeignKey object or foreign key name

Drops a foreign key.

***

```
dropForeignKeys(table: Table | string, foreignKeys: TableForeignKey[]): Promise<void>
```

* `table` - Table object or name
* `foreignKeys` - array of TableForeignKey objects

Drops foreign keys.

***

```
createIndex(table: Table | string, index: TableIndex): Promise<void>
```

* `table` - Table object or name
* `index` - TableIndex object

Creates a new index.

***

```
createIndices(table: Table | string, indices: TableIndex[]): Promise<void>
```

* `table` - Table object or name
* `indices` - array of TableIndex objects

Creates new indices.

***

```
dropIndex(table: Table | string, index: TableIndex | string): Promise<void>
```

* `table` - Table object or name
* `index` - TableIndex object or index name

Drops an index.

***

```
dropIndices(table: Table | string, indices: TableIndex[]): Promise<void>
```

* `table` - Table object or name
* `indices` - array of TableIndex objects

Drops indices.

***

```
clearTable(tableName: string): Promise<void>
```

* `tableName` - table name

Clears all table contents.

> Note: this operation uses SQL's TRUNCATE query, which cannot be reverted in transactions.
***

```
enableSqlMemory(): void
```

Enables a special query runner mode in which SQL queries won't be executed; instead they will be memorized into a special variable inside the query runner. You can get the memorized SQL using the `getMemorySql()` method.

***

```
disableSqlMemory(): void
```

Disables the special query runner mode in which SQL queries won't be executed. Previously memorized SQL will be flushed.

***

```
clearSqlMemory(): void
```

Flushes all memorized SQL statements.

***

```
getMemorySql(): SqlInMemory
```

* returns a `SqlInMemory` object with arrays of `upQueries` and `downQueries` SQL statements

Gets the SQL stored in memory. Parameters in the SQL are already replaced.

***

```
executeMemoryUpSql(): Promise<void>
```

Executes memorized up SQL queries.

***

```
executeMemoryDownSql(): Promise<void>
```

Executes memorized down SQL queries.

***

---

# Creating manually

You can create a new migration using the CLI by specifying the name and location of the migration:

```
npx typeorm migration:create <path-to-migrations-dir>/<migration-name>
```

For example:

```
npx typeorm migration:create src/db/migrations/post-refactoring
```

After you run the command you will see a new file generated in the `src/db/migrations` directory named `{TIMESTAMP}-post-refactoring.ts`, where `{TIMESTAMP}` is the current timestamp when the migration was generated. Now you can open the file and add your migration SQL queries there. You should see the following content inside your migration:

```
import { MigrationInterface, QueryRunner } from "typeorm"

export class PostRefactoringTIMESTAMP implements MigrationInterface {
    async up(queryRunner: QueryRunner): Promise<void> {}

    async down(queryRunner: QueryRunner): Promise<void> {}
}
```

There are two methods you must fill with your migration code: `up` and `down`. `up` has to contain the code needed to perform the migration. `down` has to revert whatever `up` changed; it is used to revert the last migration. Inside both `up` and `down` you have a `QueryRunner` object.
All database operations are executed using this object. Learn more about the [query runner](https://typeorm.io/docs/query-runner.md).

Let's see what the migration looks like with our `Post` changes:

```
import { MigrationInterface, QueryRunner } from "typeorm"

export class PostRefactoringTIMESTAMP implements MigrationInterface {
    async up(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(
            `ALTER TABLE "post" RENAME COLUMN "title" TO "name"`,
        )
    }

    async down(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(
            `ALTER TABLE "post" RENAME COLUMN "name" TO "title"`,
        ) // reverts things made in the "up" method
    }
}
```

---

# Executing and reverting

Once you have a migration to run on production, you can run it using a CLI command:

```
typeorm migration:run -- -d path-to-datasource-config
```

**`typeorm migration:create` and `typeorm migration:generate` will create `.ts` files, unless you use the `o` flag (see more in [Generating migrations](https://typeorm.io/docs/migrations/generating.md)). The `migration:run` and `migration:revert` commands only work on `.js` files. Thus the TypeScript files need to be compiled before running the commands.**

Alternatively, you can use `ts-node` with `typeorm` to run `.ts` migration files.

Example with `ts-node`:

```
npx typeorm-ts-node-commonjs migration:run -- -d path-to-datasource-config
```

Example with `ts-node` in ESM projects:

```
npx typeorm-ts-node-esm migration:run -- -d path-to-datasource-config
```

Similarly, migrations can be generated with `ts-node` in ESM projects:

```
npx typeorm-ts-node-esm migration:generate ./src/migrations/update-post-table -d ./src/data-source.ts
```

The `migration:run` command will execute all pending migrations and run them in a sequence ordered by their timestamps. This means all SQL queries written in the `up` methods of your created migrations will be executed. That's all! Now you have your database schema up-to-date.
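The rename migration shown above is its own mirror image: `down` issues the same statement with the column names swapped. That symmetry can be sketched in isolation (the `renameColumnSql` helper is illustrative, not TypeORM API):

```typescript
// Illustrative helper generating the rename statement used in the
// migration above (PostgreSQL dialect).
function renameColumnSql(table: string, from: string, to: string): string {
    return `ALTER TABLE "${table}" RENAME COLUMN "${from}" TO "${to}"`
}

// up() renames title -> name; down() is the exact inverse.
const upSql = renameColumnSql("post", "title", "name")
const downSql = renameColumnSql("post", "name", "title")
```

Writing `down` as the argument-swapped form of `up` is a simple way to keep the two methods in sync for rename-style migrations.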
---

# Extra options

## Timestamp

If you need to specify a timestamp for the migration name, use the `-t` flag (alias for `--timestamp`) and pass the timestamp (it should be a non-negative number):

```
typeorm -t <specific-timestamp> migration:{create|generate}
```

You can get a timestamp from:

```
Date.now() /* OR */ new Date().getTime()
```

---

# Faking Migrations and Rollbacks

You can also fake-run a migration using the `--fake` flag (`-f` for short). This will add the migration to the migrations table without running it. This is useful for migrations created after manual changes have already been made to the database, or when migrations have been run externally (e.g. by another tool or application), and you would still like to keep a consistent migration history.

```
typeorm migration:run -d path-to-datasource-config --fake
```

This is also possible with rollbacks.

```
typeorm migration:revert -d path-to-datasource-config --fake
```

### Transaction modes

By default, TypeORM will run all your migrations within a single wrapping transaction. This corresponds to the `--transaction all` flag. If you require more fine-grained transaction control, you can use the `--transaction each` flag to wrap every migration individually, or the `--transaction none` flag to opt out of wrapping the migrations in transactions altogether.

In addition to these flags, you can also override the transaction behavior on a per-migration basis by setting the `transaction` property on the `MigrationInterface` to `true` or `false`. This only works in the `each` or `none` transaction mode.
```
import { MigrationInterface, QueryRunner } from "typeorm"

export class AddIndexTIMESTAMP implements MigrationInterface {
    transaction = false

    async up(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(
            `CREATE INDEX CONCURRENTLY post_names_idx ON post(name)`,
        )
    }

    async down(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(`DROP INDEX CONCURRENTLY post_names_idx`)
    }
}
```

---

# Generating

TypeORM is able to automatically generate migration files based on the changes you made to your entities, comparing them with the existing database schema on the server. Automatic migration generation creates a new migration file and writes all the SQL queries that must be executed to update the database. If no changes are detected, the command will exit with code `1`.

Let's say you have a `Post` entity with a `title` column, and you have changed the name `title` to `name`. You can generate a migration with the following command:

```
typeorm migration:generate -d <path-to-datasource> <migration-name>
```

The `-d` argument value should specify the path where your [DataSource](https://typeorm.io/docs/data-source/data-source.md) instance is defined.

Alternatively, you can also specify the name with the `--name` param:

```
typeorm migration:generate -- -d <path-to-datasource> --name=<migration-name>
```

or use a full path:

```
typeorm migration:generate -d <path-to-datasource> <path-to-migrations-dir>/<migration-name>
```

Assuming you used `post-refactoring` as the name, it will generate a new file called `{TIMESTAMP}-post-refactoring.ts` with the following content:

```
import { MigrationInterface, QueryRunner } from "typeorm"

export class PostRefactoringTIMESTAMP implements MigrationInterface {
    async up(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(
            `ALTER TABLE "post" ALTER COLUMN "title" RENAME TO "name"`,
        )
    }

    async down(queryRunner: QueryRunner): Promise<void> {
        await queryRunner.query(
            `ALTER TABLE "post" ALTER COLUMN "name" RENAME TO "title"`,
        )
    }
}
```

Alternatively, you can also output your migrations as JavaScript files using the `o` (alias for `--outputJs`) flag.
This is useful for JavaScript-only projects in which additional TypeScript packages are not installed. This command will generate a new migration file `{TIMESTAMP}-PostRefactoring.js` with the following content:

```
/**
 * @typedef {import('typeorm').MigrationInterface} MigrationInterface
 * @typedef {import('typeorm').QueryRunner} QueryRunner
 */

/**
 * @class
 * @implements {MigrationInterface}
 */
module.exports = class PostRefactoringTIMESTAMP {
    /**
     * @param {QueryRunner} queryRunner
     */
    async up(queryRunner) {
        await queryRunner.query(
            `ALTER TABLE "post" ALTER COLUMN "title" RENAME TO "name"`,
        )
    }

    /**
     * @param {QueryRunner} queryRunner
     */
    async down(queryRunner) {
        await queryRunner.query(
            `ALTER TABLE "post" ALTER COLUMN "name" RENAME TO "title"`,
        )
    }
}
```

By default, the `o` (alias for `--outputJs`) flag generates CommonJS JavaScript code, but you can also generate ESM code with the `esm` flag. This is useful for JavaScript projects that use ESM:

```
/**
 * @typedef {import('typeorm').MigrationInterface} MigrationInterface
 * @typedef {import('typeorm').QueryRunner} QueryRunner
 */

/**
 * @class
 * @implements {MigrationInterface}
 */
export class PostRefactoringTIMESTAMP {
    /**
     * @param {QueryRunner} queryRunner
     */
    async up(queryRunner) {
        await queryRunner.query(
            `ALTER TABLE "post" ALTER COLUMN "title" RENAME TO "name"`,
        )
    }

    /**
     * @param {QueryRunner} queryRunner
     */
    async down(queryRunner) {
        await queryRunner.query(
            `ALTER TABLE "post" ALTER COLUMN "name" RENAME TO "title"`,
        )
    }
}
```

As you can see, you don't need to write the queries on your own. The rule of thumb for generating migrations is to generate them after **each** change you make to your models. To apply multi-line formatting to your generated migration queries, use the `p` (alias for `--pretty`) flag.
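Both the `{TIMESTAMP}` in the file name and the numeric suffix on the class name come from the creation time, which is why minifiers that mangle class names break migration discovery. The naming scheme can be sketched like this (the `migrationNames` helper is illustrative, not TypeORM API):

```typescript
// Illustrative: derive the generated file and class names from a
// migration name and a JavaScript timestamp (milliseconds).
function migrationNames(name: string, timestamp: number) {
    // "post-refactoring" -> "PostRefactoring"
    const className = name
        .split("-")
        .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
        .join("")
    return {
        fileName: `${timestamp}-${name}.ts`,
        className: `${className}${timestamp}`,
    }
}
```

For example, `migrationNames("post-refactoring", 1617669445000)` yields the file name `1617669445000-post-refactoring.ts` and the class name `PostRefactoring1617669445000`.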
---

# Reverting

If for some reason you want to revert the changes, you can run:

```
typeorm migration:revert -- -d path-to-datasource-config
```

This command will execute `down` in the most recently executed migration. If you need to revert multiple migrations, you must call this command multiple times.

---

# Setup

Before working with migrations you need to set up your [DataSource](https://typeorm.io/docs/data-source/data-source.md) options properly:

```
export default new DataSource({
    // basic setup
    synchronize: false,
    migrations: [
        /*...*/
    ],
    // optional
    migrationsRun: false,
    migrationsTableName: "migrations",
    migrationsTransactionMode: "all",
    // other options...
})
```

## `synchronize`

Turning off automatic schema synchronization is essential for working with migrations; otherwise migrations would make no sense.

## `migrations`

Defines the list of migrations that need to be loaded by TypeORM. It accepts both migration classes and directories from which to load them. The easiest option is to specify the directory where your migration files are located (glob patterns are supported):

```
migrations: [__dirname + '/migration/**/*{.js,.ts}']
```

Defining both the `.js` and `.ts` extensions allows you to run migrations in development as well as from code compiled to JavaScript in production (e.g. from a Docker image).

Alternatively, you could also specify the exact classes to get more fine-grained control:

```
import FirstMigration from 'migrations/TIMESTAMP-first-migration'
import SecondMigration from 'migrations/TIMESTAMP-second-migration'

export default new DataSource({
    migrations: [FirstMigration, SecondMigration]
})
```

but this requires more manual work and can be error-prone.
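The effect of listing both extensions in the glob can be approximated with a plain filter (a simplified stand-in for the glob matching TypeORM performs, not its actual implementation):

```typescript
// Simplified stand-in for '/migration/**/*{.js,.ts}': accept migration
// files whether they are TypeScript sources (development) or compiled
// JavaScript (production builds).
function isMigrationFile(path: string): boolean {
    return /\/migration\/.*\.(js|ts)$/.test(path)
}
```

In development, `src/migration/*.ts` files match; in a compiled production image, the same migrations match as `dist/migration/*.js`.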
## Optional settings

### `migrationsRun`

Indicates if migrations should be auto-run on every application launch. Default: `false`

### `migrationsTableName`

You might want to specify the name of the table that stores information about executed migrations. By default it is called `'migrations'`.

```
migrationsTableName: 'some_custom_migrations_table'
```

### `migrationsTransactionMode`

Controls the transaction mode when running migrations. Possible options are:

* `all` (*default*) - wraps the whole migration run in a single transaction
* `none`
* `each`

---

# Status

To show all migrations and whether they've been run or not, use the following command:

```
typeorm migration:show -- -d path-to-datasource-config
```

\[X] = migration has been run

\[ ] = migration is pending/unapplied

---

# Vite

Using TypeORM in a [Vite](https://vite.dev) project is pretty straightforward. However, when you use [migrations](https://typeorm.io/docs/migrations/why.md), you will run into "...migration name is wrong. Migration class name should have a JavaScript timestamp appended." errors when running the production build. On production builds, files are [optimized by default](https://vite.dev/config/build-options#build-minify), which includes mangling your code in order to minimize file sizes. You have 3 options to mitigate this.
The 3 options are shown below as diffs to this basic `vite.config.ts`:

```
import legacy from "@vitejs/plugin-legacy"
import vue from "@vitejs/plugin-vue"
import path from "path"
import { defineConfig } from "vite"

// https://vitejs.dev/config/
export default defineConfig({
    build: {
        sourcemap: true,
    },
    plugins: [vue(), legacy()],
    resolve: {
        alias: {
            "@": path.resolve(__dirname, "./src"),
        },
    },
})
```

## Option 1: Disable minify

This is the most crude option and will result in significantly larger files. Add `build.minify = false` to your config.

```
--- basic vite.config.ts
+++ disable minify vite.config.ts
@@ -7,6 +7,7 @@
 export default defineConfig({
     build: {
         sourcemap: true,
+        minify: false,
     },
     plugins: [vue(), legacy()],
     resolve: {
```

## Option 2: Disable esbuild minify identifiers

Vite uses esbuild as the default minifier. You can disable mangling of identifiers by adding `esbuild.minifyIdentifiers = false` to your config. This will result in smaller file sizes, but depending on your code base you will get diminishing returns, as all identifiers will be kept at full length.

```
--- basic vite.config.ts
+++ disable esbuild minify identifiers vite.config.ts
@@ -8,6 +8,7 @@
     build: {
         sourcemap: true,
     },
+    esbuild: { minifyIdentifiers: false },
     plugins: [vue(), legacy()],
     resolve: {
```

## Option 3: Use terser as minifier while keeping only the migration class names

Vite supports using terser as the minifier. Terser is slower than esbuild, but offers more fine-grained control over what to minify.
Add `minify: 'terser'` together with `terserOptions.mangle.keep_classnames: /^Migrations\d+$/` and `terserOptions.compress.keep_classnames: /^Migrations\d+$/` to your config. These options make sure that class names starting with "Migrations" and ending in numbers are not renamed during minification. Make sure terser is available as a dev dependency in your project: `npm add -D terser`.

```
--- basic vite.config.ts
+++ terser keep migration class names vite.config.ts
@@ -7,6 +7,11 @@
 export default defineConfig({
     build: {
         sourcemap: true,
+        minify: 'terser',
+        terserOptions: {
+            mangle: { keep_classnames: /^Migrations\d+$/ },
+            compress: { keep_classnames: /^Migrations\d+$/ },
+        },
     },
     plugins: [vue(), legacy()],
     resolve: {
```

---

# How migrations work

Once you get into production, you'll need to synchronize model changes into the database. Typically, it is unsafe to use `synchronize: true` for schema synchronization in production once you have data in your database. Here is where migrations come to help.

A migration is just a single file with SQL queries to update a database schema and apply new changes to an existing database.

Let's say you already have a database and a `Post` entity:

```
import { Entity, Column, PrimaryGeneratedColumn } from "typeorm"

@Entity()
export class Post {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    title: string

    @Column()
    text: string
}
```

Your entity has worked in production for months without any changes, and you have thousands of posts in your database. Now you need to make a new release and rename `title` to `name`. What would you do?

You need to create a new migration with the following SQL query (PostgreSQL dialect):

```
ALTER TABLE "post" RENAME COLUMN "title" TO "name";
```

Once you run this SQL query, your database schema is ready to work with your new codebase. TypeORM provides a place where you can write such SQL queries and run them when needed. This place is called "migrations".
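Conceptually, automatic migration generation diffs your entity metadata against the live schema and emits SQL for the difference. A toy version of that diff for plain column lists (purely illustrative; the real comparison also covers types, defaults, indices, foreign keys, and much more):

```typescript
// Toy schema diff: compare old and new column name lists for one table
// and emit SQL fragments for the difference. Column types are omitted
// for brevity, so the emitted ADD COLUMN lines are schematic.
function diffColumns(table: string, oldCols: string[], newCols: string[]): string[] {
    const statements: string[] = []
    for (const col of oldCols) {
        if (!newCols.includes(col)) {
            statements.push(`ALTER TABLE "${table}" DROP COLUMN "${col}"`)
        }
    }
    for (const col of newCols) {
        if (!oldCols.includes(col)) {
            statements.push(`ALTER TABLE "${table}" ADD COLUMN "${col}"`)
        }
    }
    return statements
}
```

For the `title` to `name` change above, a naive diff like this sees a removed column and an added one; reviewing generated migrations before running them is therefore always a good idea.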
---

# Caching queries

You can cache results selected by these `QueryBuilder` methods: `getMany`, `getOne`, `getRawMany`, `getRawOne` and `getCount`. You can also cache results selected by the `find*` and `count*` methods of `Repository` and `EntityManager`.

To enable caching you need to explicitly enable it in your data source options:

```
{
    type: "mysql",
    host: "localhost",
    username: "test",
    ...
    cache: true
}
```

When you enable cache for the first time, you must synchronize your database schema (using the CLI, migrations or the `synchronize` data source option).

Then in `QueryBuilder` you can enable query cache for any query:

```
const users = await dataSource
    .createQueryBuilder(User, "user")
    .where("user.isAdmin = :isAdmin", { isAdmin: true })
    .cache(true)
    .getMany()
```

The equivalent `Repository` query:

```
const users = await dataSource.getRepository(User).find({
    where: { isAdmin: true },
    cache: true,
})
```

This will execute a query to fetch all admin users and cache the results. The next time you execute the same code, it will get all admin users from the cache. The default cache lifetime is `1000 ms`, i.e. 1 second. This means the cache will be invalid 1 second after the query builder code is called. In practice, this means that if users open the user page 150 times within 3 seconds, only three queries will be executed during this period. Any users inserted during the 1 second cache window won't be returned to the user.

You can change the cache time manually via `QueryBuilder`:

```
const users = await dataSource
    .createQueryBuilder(User, "user")
    .where("user.isAdmin = :isAdmin", { isAdmin: true })
    .cache(60000) // 1 minute
    .getMany()
```

Or via `Repository`:

```
const users = await dataSource.getRepository(User).find({
    where: { isAdmin: true },
    cache: 60000,
})
```

Or globally in data source options:

```
{
    type: "mysql",
    host: "localhost",
    username: "test",
    ...
    cache: {
        duration: 30000 // 30 seconds
    }
}
```

Also, you can set a "cache id" via `QueryBuilder`:

```
const users = await dataSource
    .createQueryBuilder(User, "user")
    .where("user.isAdmin = :isAdmin", { isAdmin: true })
    .cache("users_admins", 25000)
    .getMany()
```

Or with `Repository`:

```
const users = await dataSource.getRepository(User).find({
    where: { isAdmin: true },
    cache: {
        id: "users_admins",
        milliseconds: 25000,
    },
})
```

This gives you granular control of your cache, for example, clearing cached results when you insert a new user:

```
await dataSource.queryResultCache.remove(["users_admins"])
```

By default, TypeORM uses a separate table called `query-result-cache` and stores all queries and results there. The table name is configurable, so you can change it by specifying a different value in the `tableName` property. Example:

```
{
    type: "mysql",
    host: "localhost",
    username: "test",
    ...
    cache: {
        type: "database",
        tableName: "configurable-table-query-result-cache"
    }
}
```

If storing the cache in a single database table is not effective for you, you can change the cache type to "redis" or "ioredis" and TypeORM will store all cached records in Redis instead. Example:

```
{
    type: "mysql",
    host: "localhost",
    username: "test",
    ...
    cache: {
        type: "redis",
        options: {
            socket: {
                host: "localhost",
                port: 6379
            }
        }
    }
}
```

"options" can be [node\_redis specific options](https://github.com/redis/node-redis/blob/master/docs/client-configuration.md) or [ioredis specific options](https://github.com/luin/ioredis/blob/master/API.md#new-redisport-host-options) depending on which type you're using.
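The duration and cache-id semantics described above can be imitated with a small in-memory store (purely illustrative; TypeORM's real cache providers persist results to a table or to Redis):

```typescript
// Illustrative in-memory cache with the same observable behavior as
// the duration / id options: entries expire after `duration` ms and
// can be invalidated by id.
class TinyQueryCache {
    private store = new Map<string, { value: unknown; expiresAt: number }>()

    get(id: string, now: number): unknown | undefined {
        const entry = this.store.get(id)
        if (!entry || now >= entry.expiresAt) return undefined // miss or expired
        return entry.value
    }

    set(id: string, value: unknown, duration: number, now: number): void {
        this.store.set(id, { value, expiresAt: now + duration })
    }

    remove(ids: string[]): void {
        for (const id of ids) this.store.delete(id)
    }
}

// Simulate 150 page loads over 3 seconds with a 1000 ms cache:
// only three of them actually hit the database.
const cache = new TinyQueryCache()
let dbQueries = 0
for (let i = 0; i < 150; i++) {
    const now = i * 20 // one request every 20 ms
    if (cache.get("users_admins", now) === undefined) {
        dbQueries++ // cache miss: run the real query
        cache.set("users_admins", ["admin rows"], 1000, now)
    }
}
```

The simulation ends with `dbQueries` equal to 3, matching the "150 page loads, three queries" behavior described above, and `remove(["users_admins"])` forces the next lookup to miss, just like `queryResultCache.remove`.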
If you want to connect to a Redis cluster using IORedis's cluster functionality, you can do so as follows: ``` { type: "mysql", host: "localhost", username: "test", cache: { type: "ioredis/cluster", options: { startupNodes: [ { host: 'localhost', port: 7000, }, { host: 'localhost', port: 7001, }, { host: 'localhost', port: 7002, } ], options: { scaleReads: 'all', clusterRetryStrategy: function (times) { return null }, redisOptions: { maxRetriesPerRequest: 1 } } } } } ``` Note that you can still use options as the first argument of IORedis's cluster constructor. ``` { ... cache: { type: "ioredis/cluster", options: [ { host: 'localhost', port: 7000, }, { host: 'localhost', port: 7001, }, { host: 'localhost', port: 7002, } ] }, ... } ``` If none of the built-in cache providers satisfy your demands, you can also specify your own cache provider by using a `provider` factory function, which needs to return a new object that implements the `QueryResultCache` interface: ``` class CustomQueryResultCache implements QueryResultCache { constructor(private dataSource: DataSource) {} ... } ``` ``` { ... cache: { provider(dataSource) { return new CustomQueryResultCache(dataSource); } } } ``` If you wish to ignore cache errors and let queries pass through to the database when the cache fails, you can use the `ignoreErrors` option. Example: ``` { type: "mysql", host: "localhost", username: "test", ... cache: { type: "redis", options: { socket: { host: "localhost", port: 6379 } }, ignoreErrors: true } } ``` You can use `typeorm cache:clear` to clear everything stored in the cache. --- # Delete using Query Builder ## `Delete`[​](#delete "Direct link to delete") You can create `DELETE` queries using `QueryBuilder`. Examples: ``` await myDataSource .createQueryBuilder() .delete() .from(User) .where("id = :id", { id: 1 }) .execute() ``` This is the most efficient way, in terms of performance, to delete entities from your database. 
## `Soft-Delete`[​](#soft-delete "Direct link to soft-delete") Applying soft delete with `QueryBuilder`: ``` await dataSource.getRepository(Entity).createQueryBuilder().softDelete() ``` Examples: ``` await myDataSource .getRepository(User) .createQueryBuilder() .softDelete() .where("id = :id", { id: 1 }) .execute() ``` ## `Restore-Soft-Delete`[​](#restore-soft-delete "Direct link to restore-soft-delete") You can recover soft-deleted rows by using the `restore()` method: ``` await dataSource.getRepository(Entity).createQueryBuilder().restore() ``` Examples: ``` await myDataSource .getRepository(User) .createQueryBuilder() .restore() .where("id = :id", { id: 1 }) .execute() ``` --- # Insert using Query Builder You can create `INSERT` queries using `QueryBuilder`. Examples: ``` await dataSource .createQueryBuilder() .insert() .into(User) .values([ { firstName: "Timber", lastName: "Saw" }, { firstName: "Phantom", lastName: "Lancer" }, ]) .execute() ``` This is the most efficient way, in terms of performance, to insert rows into your database. You can also perform bulk insertions this way. ## Raw SQL support[​](#raw-sql-support "Direct link to Raw SQL support") In some cases, when you need to execute SQL expressions as values, you can use a function-style value: ``` await dataSource .createQueryBuilder() .insert() .into(User) .values({ firstName: "Timber", lastName: () => "CONCAT('S', 'A', 'W')", }) .execute() ``` > Warning: When using raw SQL, ensure that values are properly sanitized to prevent SQL injection. ## Update values ON CONFLICT[​](#update-values-on-conflict "Direct link to Update values ON CONFLICT") If the values you are trying to insert conflict with existing data, the `orUpdate` function can be used to update specific values on the conflicted target. 
``` await dataSource .createQueryBuilder() .insert() .into(User) .values({ firstName: "Timber", lastName: "Saw", externalId: "abc123", }) .orUpdate(["firstName", "lastName"], ["externalId"]) .execute() ``` ## Update values ON CONFLICT with condition (Postgres, Oracle, MSSQL, SAP HANA)[​](#update-values-on-conflict-with-condition-postgres-oracle-mssql-sap-hana "Direct link to Update values ON CONFLICT with condition (Postgres, Oracle, MSSQL, SAP HANA)") ``` await dataSource .createQueryBuilder() .insert() .into(User) .values({ firstName: "Timber", lastName: "Saw", externalId: "abc123", }) .orUpdate(["firstName", "lastName"], ["externalId"], { overwriteCondition: { where: { firstName: Equal("Phantom"), }, }, }) .execute() ``` ## IGNORE error (MySQL) or DO NOTHING (Postgres, Oracle, MSSQL, SAP HANA) during insert[​](#ignore-error-mysql-or-do-nothing-postgres-oracle-mssql-sap-hana-during-insert "Direct link to IGNORE error (MySQL) or DO NOTHING (Postgres, Oracle, MSSQL, SAP HANA) during insert") If the values you are trying to insert conflict due to existing data or containing invalid data, the `orIgnore` function can be used to suppress errors and insert only rows that contain valid data. 
``` await dataSource .createQueryBuilder() .insert() .into(User) .values({ firstName: "Timber", lastName: "Saw", externalId: "abc123", }) .orIgnore() .execute() ``` ## Skip data update if values have not changed (Postgres, Oracle, MSSQL, SAP HANA)[​](#skip-data-update-if-values-have-not-changed-postgres-oracle-mssql-sap-hana "Direct link to Skip data update if values have not changed (Postgres, Oracle, MSSQL, SAP HANA)") ``` await dataSource .createQueryBuilder() .insert() .into(User) .values({ firstName: "Timber", lastName: "Saw", externalId: "abc123", }) .orUpdate(["firstName", "lastName"], ["externalId"], { skipUpdateIfNoValuesChanged: true, }) .execute() ``` ## Use partial index (Postgres)[​](#use-partial-index-postgres "Direct link to Use partial index (Postgres)") ``` await dataSource .createQueryBuilder() .insert() .into(User) .values({ firstName: "Timber", lastName: "Saw", externalId: "abc123", }) .orUpdate(["firstName", "lastName"], ["externalId"], { skipUpdateIfNoValuesChanged: true, indexPredicate: "date > 2020-01-01", }) .execute() ``` --- # Working with Relations `RelationQueryBuilder` is a special type of `QueryBuilder` which allows you to work with your relations. Using it, you can bind entities to each other in the database without the need to load any entities, or you can load related entities easily. For example, we have a `Post` entity and it has a many-to-many relation to `Category` called `categories`. 
Let's add a new category to this many-to-many relation: ``` await dataSource .createQueryBuilder() .relation(Post, "categories") .of(post) .add(category) ``` This code is equivalent to doing this: ``` const postRepository = dataSource.manager.getRepository(Post) const post = await postRepository.findOne({ where: { id: 1, }, relations: { categories: true, }, }) post.categories.push(category) await postRepository.save(post) ``` But it is more efficient, because it performs a minimal number of operations and binds entities in the database, unlike a bulky `save` call. Another benefit of this approach is that you don't need to load every related entity before pushing into it. For example, if you have ten thousand categories inside a single post, adding new categories to this list may become problematic, because the standard way of doing this is to load the post with all ten thousand categories, push a new category, and save it. This results in very heavy performance costs and is basically inapplicable in production. However, using `RelationQueryBuilder` solves this problem. Also, there is no real need to use entities when you "bind" things, since you can use entity ids instead. 
For example, let's add a category with id = 3 into post with id = 1: ``` await dataSource.createQueryBuilder().relation(Post, "categories").of(1).add(3) ``` If you are using composite primary keys, you have to pass them as an id map, for example: ``` await dataSource .createQueryBuilder() .relation(Post, "categories") .of({ firstPostId: 1, secondPostId: 3 }) .add({ firstCategoryId: 2, secondCategoryId: 4 }) ``` You can remove entities the same way you add them: ``` // this code removes a category from a given post await dataSource .createQueryBuilder() .relation(Post, "categories") .of(post) // you can use just post id as well .remove(category) // you can use just category id as well ``` Adding and removing related entities works in `many-to-many` and `one-to-many` relations. For `one-to-one` and `many-to-one` relations use `set` instead: ``` // this code sets category of a given post await dataSource .createQueryBuilder() .relation(Post, "categories") .of(post) // you can use just post id as well .set(category) // you can use just category id as well ``` If you want to unset a relation (set it to null), simply pass `null` to a `set` method: ``` // this code unsets category of a given post await dataSource .createQueryBuilder() .relation(Post, "categories") .of(post) // you can use just post id as well .set(null) ``` Besides updating relations, the relational query builder also allows you to load relational entities. 
For example, let's say inside a `Post` entity we have a many-to-many `categories` relation and a many-to-one `user` relation. To load those relations, you can use the following code: ``` const post = await dataSource.manager.findOneBy(Post, { id: 1, }) post.categories = await dataSource .createQueryBuilder() .relation(Post, "categories") .of(post) // you can use just post id as well .loadMany() post.author = await dataSource .createQueryBuilder() .relation(Post, "user") .of(post) // you can use just post id as well .loadOne() ``` --- # Select using Query Builder ## What is a QueryBuilder?[​](#what-is-a-querybuilder "Direct link to What is a QueryBuilder?") `QueryBuilder` is one of the most powerful features of TypeORM - it allows you to build SQL queries using an elegant and convenient syntax, execute them, and get automatically transformed entities. A simple example of `QueryBuilder`: ``` const firstUser = await dataSource .getRepository(User) .createQueryBuilder("user") .where("user.id = :id", { id: 1 }) .getOne() ``` It builds the following SQL query: ``` SELECT user.id as userId, user.firstName as userFirstName, user.lastName as userLastName FROM users user WHERE user.id = 1 ``` and returns you an instance of `User`: ``` User { id: 1, firstName: "Timber", lastName: "Saw" } ``` ## Important note when using the `QueryBuilder`[​](#important-note-when-using-the-querybuilder "Direct link to important-note-when-using-the-querybuilder") When using the `QueryBuilder`, you need to provide unique parameters in your `WHERE` expressions. **This will not work**: ``` const result = await dataSource .getRepository(User) .createQueryBuilder("user") .leftJoinAndSelect("user.linkedSheep", "linkedSheep") .leftJoinAndSelect("user.linkedCow", "linkedCow") .where("user.linkedSheep = :id", { id: sheepId }) .andWhere("user.linkedCow = :id", { id: cowId }) ``` ... 
but this will: ``` const result = await dataSource .getRepository(User) .createQueryBuilder("user") .leftJoinAndSelect("user.linkedSheep", "linkedSheep") .leftJoinAndSelect("user.linkedCow", "linkedCow") .where("user.linkedSheep = :sheepId", { sheepId }) .andWhere("user.linkedCow = :cowId", { cowId }) ``` Note that we uniquely named `:sheepId` and `:cowId` instead of using `:id` twice for different parameters. ## How to create and use a QueryBuilder?[​](#how-to-create-and-use-a-querybuilder "Direct link to How to create and use a QueryBuilder?") There are several ways to create a `QueryBuilder`: * Using DataSource: ``` const user = await dataSource .createQueryBuilder() .select("user") .from(User, "user") .where("user.id = :id", { id: 1 }) .getOne() ``` * Using entity manager: ``` const user = await dataSource.manager .createQueryBuilder(User, "user") .where("user.id = :id", { id: 1 }) .getOne() ``` * Using repository: ``` const user = await dataSource .getRepository(User) .createQueryBuilder("user") .where("user.id = :id", { id: 1 }) .getOne() ``` There are 5 different `QueryBuilder` types available: * `SelectQueryBuilder` - used to build and execute `SELECT` queries. Example: ``` const user = await dataSource .createQueryBuilder() .select("user") .from(User, "user") .where("user.id = :id", { id: 1 }) .getOne() ``` * `InsertQueryBuilder` - used to build and execute `INSERT` queries. Example: ``` await dataSource .createQueryBuilder() .insert() .into(User) .values([ { firstName: "Timber", lastName: "Saw" }, { firstName: "Phantom", lastName: "Lancer" }, ]) .execute() ``` * `UpdateQueryBuilder` - used to build and execute `UPDATE` queries. Example: ``` await dataSource .createQueryBuilder() .update(User) .set({ firstName: "Timber", lastName: "Saw" }) .where("id = :id", { id: 1 }) .execute() ``` * `DeleteQueryBuilder` - used to build and execute `DELETE` queries. 
Example: ``` await dataSource .createQueryBuilder() .delete() .from(User) .where("id = :id", { id: 1 }) .execute() ``` * `RelationQueryBuilder` - used to build and execute relation-specific operations \[TBD]. Example: ``` await dataSource .createQueryBuilder() .relation(User, "photos") .of(id) .loadMany() ``` You can switch between different types of query builder within any of them; once you do, you will get a new instance of the query builder (unlike all other methods). ## Getting values using `QueryBuilder`[​](#getting-values-using-querybuilder "Direct link to getting-values-using-querybuilder") To get a single result from the database, for example to get a user by id or name, you must use `getOne`: ``` const timber = await dataSource .getRepository(User) .createQueryBuilder("user") .where("user.id = :id OR user.name = :name", { id: 1, name: "Timber" }) .getOne() ``` `getOneOrFail` will get a single result from the database, but if no result exists it will throw an `EntityNotFoundError`: ``` const timber = await dataSource .getRepository(User) .createQueryBuilder("user") .where("user.id = :id OR user.name = :name", { id: 1, name: "Timber" }) .getOneOrFail() ``` To get multiple results from the database, for example to get all users, use `getMany`: ``` const users = await dataSource .getRepository(User) .createQueryBuilder("user") .getMany() ``` There are two types of results you can get using the select query builder: **entities** or **raw results**. Most of the time, you need to select real entities from your database, for example, users. For this purpose, you use `getOne` and `getMany`. But sometimes you need to select some specific data, let's say the *sum of all user photos*. This data is not an entity, it's called raw data. To get raw data, you use `getRawOne` and `getRawMany`. 
Examples: ``` const { sum } = await dataSource .getRepository(User) .createQueryBuilder("user") .select("SUM(user.photosCount)", "sum") .where("user.id = :id", { id: 1 }) .getRawOne() ``` ``` const photosSums = await dataSource .getRepository(User) .createQueryBuilder("user") .select("user.id") .addSelect("SUM(user.photosCount)", "sum") .groupBy("user.id") .getRawMany() // result will be like this: [{ id: 1, sum: 25 }, { id: 2, sum: 13 }, ...] ``` ## Getting a count[​](#getting-a-count "Direct link to Getting a count") You can get the number of rows a query will return by using `getCount()`. This will return the count as a number rather than an entity result. ``` const count = await dataSource .getRepository(User) .createQueryBuilder("user") .where("user.name = :name", { name: "Timber" }) .getCount() ``` Which produces the following SQL query: ``` SELECT count(*) FROM users user WHERE user.name = 'Timber' ``` ## What are aliases for?[​](#what-are-aliases-for "Direct link to What are aliases for?") We used `createQueryBuilder("user")`. But what is "user"? It's just a regular SQL alias. We use aliases everywhere, except when we work with selected data. `createQueryBuilder("user")` is equivalent to: ``` createQueryBuilder().select("user").from(User, "user") ``` Which will result in the following SQL query: ``` SELECT ... FROM users user ``` In this SQL query, `users` is the table name, and `user` is an alias we assign to this table. Later we use this alias to access the table: ``` createQueryBuilder() .select("user") .from(User, "user") .where("user.name = :name", { name: "Timber" }) ``` Which produces the following SQL query: ``` SELECT ... FROM users user WHERE user.name = 'Timber' ``` See, we used the `users` table via the `user` alias we assigned when we created the query builder. One query builder is not limited to one alias; it can have multiple aliases. 
Each select can have its own alias, you can select from multiple tables each with its own alias, and you can join multiple tables each with its own alias. You can use those aliases to access the tables you are selecting (or the data you are selecting). ## Using parameters to escape data[​](#using-parameters-to-escape-data "Direct link to Using parameters to escape data") We used `where("user.name = :name", { name: "Timber" })`. What does `{ name: "Timber" }` stand for? It's a parameter we used to prevent SQL injection. We could have written: `where("user.name = '" + name + "'")`, however this is not safe, as it opens the code to SQL injections. The safe way is to use this special syntax: `where("user.name = :name", { name: "Timber" })`, where `:name` is a parameter name and the value is specified in an object: `{ name: "Timber" }`. ``` .where("user.name = :name", { name: "Timber" }) ``` is a shortcut for: ``` .where("user.name = :name") .setParameter("name", "Timber") ``` Note: do not use the same parameter name for different values across the query builder. Values will be overridden if you set them multiple times. You can also supply an array of values, and have them transformed into a list of values in the SQL statement, by using the special expansion syntax: ``` .where("user.name IN (:...names)", { names: [ "Timber", "Crystal", "Lina" ] }) ``` Which becomes: ``` WHERE user.name IN ('Timber', 'Crystal', 'Lina') ``` ## Adding `WHERE` expression[​](#adding-where-expression "Direct link to adding-where-expression") Adding a `WHERE` expression is as easy as: ``` createQueryBuilder("user").where("user.name = :name", { name: "Timber" }) ``` Which will produce: ``` SELECT ... 
FROM users user WHERE user.name = 'Timber' ``` You can add `AND` into an existing `WHERE` expression: ``` createQueryBuilder("user") .where("user.firstName = :firstName", { firstName: "Timber" }) .andWhere("user.lastName = :lastName", { lastName: "Saw" }) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user WHERE user.firstName = 'Timber' AND user.lastName = 'Saw' ``` You can add `OR` into an existing `WHERE` expression: ``` createQueryBuilder("user") .where("user.firstName = :firstName", { firstName: "Timber" }) .orWhere("user.lastName = :lastName", { lastName: "Saw" }) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user WHERE user.firstName = 'Timber' OR user.lastName = 'Saw' ``` You can do an `IN` query with the `WHERE` expression: ``` createQueryBuilder("user").where("user.id IN (:...ids)", { ids: [1, 2, 3, 4] }) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user WHERE user.id IN (1, 2, 3, 4) ``` You can add a complex `WHERE` expression into an existing `WHERE` using `Brackets` ``` createQueryBuilder("user") .where("user.registered = :registered", { registered: true }) .andWhere( new Brackets((qb) => { qb.where("user.firstName = :firstName", { firstName: "Timber", }).orWhere("user.lastName = :lastName", { lastName: "Saw" }) }), ) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user WHERE user.registered = true AND (user.firstName = 'Timber' OR user.lastName = 'Saw') ``` You can add a negated complex `WHERE` expression into an existing `WHERE` using `NotBrackets` ``` createQueryBuilder("user") .where("user.registered = :registered", { registered: true }) .andWhere( new NotBrackets((qb) => { qb.where("user.firstName = :firstName", { firstName: "Timber", }).orWhere("user.lastName = :lastName", { lastName: "Saw" }) }), ) ``` Which will produce the following SQL query: ``` SELECT ... 
FROM users user WHERE user.registered = true AND NOT((user.firstName = 'Timber' OR user.lastName = 'Saw')) ``` You can combine as many `AND` and `OR` expressions as you need. If you use `.where` more than once you'll override all previous `WHERE` expressions. Note: be careful with `orWhere` - if you use complex expressions with both `AND` and `OR` expressions, keep in mind that they are stacked without any grouping parentheses. Sometimes you'll need to create a where string instead, and avoid using `orWhere`. ## Adding `HAVING` expression[​](#adding-having-expression "Direct link to adding-having-expression") Adding a `HAVING` expression is as easy as: ``` createQueryBuilder("user").having("user.name = :name", { name: "Timber" }) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user HAVING user.name = 'Timber' ``` You can add `AND` into an existing `HAVING` expression: ``` createQueryBuilder("user") .having("user.firstName = :firstName", { firstName: "Timber" }) .andHaving("user.lastName = :lastName", { lastName: "Saw" }) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user HAVING user.firstName = 'Timber' AND user.lastName = 'Saw' ``` You can add `OR` into an existing `HAVING` expression: ``` createQueryBuilder("user") .having("user.firstName = :firstName", { firstName: "Timber" }) .orHaving("user.lastName = :lastName", { lastName: "Saw" }) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user HAVING user.firstName = 'Timber' OR user.lastName = 'Saw' ``` You can combine as many `AND` and `OR` expressions as you need. If you use `.having` more than once you'll override all previous `HAVING` expressions. ## Adding `ORDER BY` expression[​](#adding-order-by-expression "Direct link to adding-order-by-expression") Adding an `ORDER BY` expression is as easy as: ``` createQueryBuilder("user").orderBy("user.id") ``` Which will produce: ``` SELECT ... 
FROM users user ORDER BY user.id ``` You can change the ordering direction from ascending to descending (or vice versa): ``` createQueryBuilder("user").orderBy("user.id", "DESC") createQueryBuilder("user").orderBy("user.id", "ASC") ``` You can add multiple order-by criteria: ``` createQueryBuilder("user").orderBy("user.name").addOrderBy("user.id") ``` You can also use a map of order-by fields: ``` createQueryBuilder("user").orderBy({ "user.name": "ASC", "user.id": "DESC", }) ``` If you use `.orderBy` more than once you'll override all previous `ORDER BY` expressions. ## Adding `DISTINCT ON` expression (Postgres only)[​](#adding-distinct-on-expression-postgres-only "Direct link to adding-distinct-on-expression-postgres-only") When using distinct-on with an order-by expression, the distinct-on expression must match the leftmost order-by. The distinct-on expressions are interpreted using the same rules as order-by. Please note that using distinct-on without an order-by expression means that the first row of each set is unpredictable. Adding a `DISTINCT ON` expression is as easy as: ``` createQueryBuilder("user").distinctOn(["user.id"]).orderBy("user.id") ``` Which will produce: ``` SELECT DISTINCT ON (user.id) ... FROM users user ORDER BY user.id ``` ## Adding `GROUP BY` expression[​](#adding-group-by-expression "Direct link to adding-group-by-expression") Adding a `GROUP BY` expression is as easy as: ``` createQueryBuilder("user").groupBy("user.id") ``` Which will produce the following SQL query: ``` SELECT ... FROM users user GROUP BY user.id ``` To add more group-by criteria use `addGroupBy`: ``` createQueryBuilder("user").groupBy("user.name").addGroupBy("user.id") ``` If you use `.groupBy` more than once you'll override all previous `GROUP BY` expressions. 
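What `GROUP BY` with an aggregate computes, as in the earlier `getRawMany` sum example, is one output row per group key. A plain-TypeScript sketch of that behavior (a conceptual model, not TypeORM API):

```typescript
// Plain-TypeScript sketch (not TypeORM API) of GROUP BY with SUM:
// one output row per distinct key, summing a numeric field per group.
// The output shape mirrors the getRawMany example: [{ id, sum }, ...].
function groupBySum<T>(
    rows: T[],
    key: (row: T) => number,
    value: (row: T) => number,
): { id: number; sum: number }[] {
    const sums = new Map<number, number>()
    for (const row of rows) {
        const k = key(row)
        sums.set(k, (sums.get(k) ?? 0) + value(row))
    }
    return [...sums.entries()].map(([id, sum]) => ({ id, sum }))
}
```

Feeding it rows with `userId` and `photosCount` fields reproduces the `[{ id: 1, sum: 25 }, { id: 2, sum: 13 }]` shape the database returns for the grouped query.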
## Adding `LIMIT` expression[​](#adding-limit-expression "Direct link to adding-limit-expression") Adding a `LIMIT` expression is as easy as: ``` createQueryBuilder("user").limit(10) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user LIMIT 10 ``` The resulting SQL query depends on the type of database (SQL Server, MySQL, Postgres, etc.). Note: LIMIT may not work as you expect if you are using complex queries with joins or subqueries. If you are using pagination, it's recommended to use `take` instead. ## Adding `OFFSET` expression[​](#adding-offset-expression "Direct link to adding-offset-expression") Adding an SQL `OFFSET` expression is as easy as: ``` createQueryBuilder("user").offset(10) ``` Which will produce the following SQL query: ``` SELECT ... FROM users user OFFSET 10 ``` The resulting SQL query depends on the type of database (SQL Server, MySQL, Postgres, etc.). Note: OFFSET may not work as you expect if you are using complex queries with joins or subqueries. If you are using pagination, it's recommended to use `skip` instead. 
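For pagination, the skip/offset value is usually derived from a 1-based page number. A small sketch of that arithmetic (the helper name is illustrative, not a TypeORM API):

```typescript
// Sketch: derive skip/take (or offset/limit) values from a 1-based page
// number and a page size. Page 1 skips 0 rows, page 2 skips pageSize, etc.
function pageToSkipTake(
    page: number,
    pageSize: number,
): { skip: number; take: number } {
    if (page < 1 || pageSize < 1) {
        throw new Error("page and pageSize must be >= 1")
    }
    return { skip: (page - 1) * pageSize, take: pageSize }
}
```

The result feeds straight into `.skip(skip).take(take)` (or `.offset(skip).limit(take)` for plain queries) on a query builder.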
## Joining relations[​](#joining-relations "Direct link to Joining relations") Let's say you have the following entities: ``` import { Entity, PrimaryGeneratedColumn, Column, OneToMany } from "typeorm" import { Photo } from "./Photo" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() name: string @OneToMany((type) => Photo, (photo) => photo.user) photos: Photo[] } ``` ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToOne } from "typeorm" import { User } from "./User" @Entity() export class Photo { @PrimaryGeneratedColumn() id: number @Column() url: string @ManyToOne((type) => User, (user) => user.photos) user: User } ``` Now let's say you want to load user "Timber" with all of his photos: ``` const user = await createQueryBuilder("user") .leftJoinAndSelect("user.photos", "photo") .where("user.name = :name", { name: "Timber" }) .getOne() ``` You'll get the following result: ``` { id: 1, name: "Timber", photos: [{ id: 1, url: "me-with-chakram.jpg" }, { id: 2, url: "me-with-trees.jpg" }] } ``` As you can see `leftJoinAndSelect` automatically loaded all of Timber's photos. The first argument is the relation you want to load and the second argument is an alias you assign to this relation's table. You can use this alias anywhere in query builder. For example, let's take all Timber's photos which aren't removed. 
``` const user = await createQueryBuilder("user") .leftJoinAndSelect("user.photos", "photo") .where("user.name = :name", { name: "Timber" }) .andWhere("photo.isRemoved = :isRemoved", { isRemoved: false }) .getOne() ``` This will generate the following SQL query: ``` SELECT user.*, photo.* FROM users user LEFT JOIN photos photo ON photo.user = user.id WHERE user.name = 'Timber' AND photo.isRemoved = FALSE ``` You can also add conditions to the join expression instead of using "where": ``` const user = await createQueryBuilder("user") .leftJoinAndSelect("user.photos", "photo", "photo.isRemoved = :isRemoved", { isRemoved: false, }) .where("user.name = :name", { name: "Timber" }) .getOne() ``` This will generate the following SQL query: ``` SELECT user.*, photo.* FROM users user LEFT JOIN photos photo ON photo.user = user.id AND photo.isRemoved = FALSE WHERE user.name = 'Timber' ``` ## Inner and left joins[​](#inner-and-left-joins "Direct link to Inner and left joins") If you want to use `INNER JOIN` instead of `LEFT JOIN`, just use `innerJoinAndSelect`: ``` const user = await createQueryBuilder("user") .innerJoinAndSelect( "user.photos", "photo", "photo.isRemoved = :isRemoved", { isRemoved: false }, ) .where("user.name = :name", { name: "Timber" }) .getOne() ``` This will generate: ``` SELECT user.*, photo.* FROM users user INNER JOIN photos photo ON photo.user = user.id AND photo.isRemoved = FALSE WHERE user.name = 'Timber' ``` The difference between `LEFT JOIN` and `INNER JOIN` is that `INNER JOIN` won't return a user if it does not have any photos, while `LEFT JOIN` will return the user even if it doesn't have photos. To learn more about different join types, refer to the [SQL documentation](https://msdn.microsoft.com/en-us/library/zt8wzxy4.aspx). ## Join without selection[​](#join-without-selection "Direct link to Join without selection") You can join data without selecting it. 
To do that, use `leftJoin` or `innerJoin`: ``` const user = await createQueryBuilder("user") .innerJoin("user.photos", "photo") .where("user.name = :name", { name: "Timber" }) .getOne() ``` This will generate: ``` SELECT user.* FROM users user INNER JOIN photos photo ON photo.user = user.id WHERE user.name = 'Timber' ``` This will select Timber if he has photos, but won't return his photos. ## Joining any entity or table[​](#joining-any-entity-or-table "Direct link to Joining any entity or table") You can join not only relations, but also other unrelated entities or tables. Examples: ``` const user = await createQueryBuilder("user") .leftJoinAndSelect(Photo, "photo", "photo.userId = user.id") .getMany() ``` ``` const user = await createQueryBuilder("user") .leftJoinAndSelect("photos", "photo", "photo.userId = user.id") .getMany() ``` ## Joining and mapping functionality[​](#joining-and-mapping-functionality "Direct link to Joining and mapping functionality") Add `profilePhoto` to `User` entity, and you can map any data into that property using `QueryBuilder`: ``` export class User { /// ... profilePhoto: Photo } ``` ``` const user = await createQueryBuilder("user") .leftJoinAndMapOne( "user.profilePhoto", "user.photos", "photo", "photo.isForProfile = TRUE", ) .where("user.name = :name", { name: "Timber" }) .getOne() ``` This will load Timber's profile photo and set it to `user.profilePhoto`. If you want to load and map a single entity use `leftJoinAndMapOne`. If you want to load and map multiple entities use `leftJoinAndMapMany`. ## Getting the generated query[​](#getting-the-generated-query "Direct link to Getting the generated query") Sometimes you may want to get the SQL query generated by `QueryBuilder`. 
To do so, use `getSql`: ``` const sql = createQueryBuilder("user") .where("user.firstName = :firstName", { firstName: "Timber" }) .orWhere("user.lastName = :lastName", { lastName: "Saw" }) .getSql() ``` For debugging purposes you can use `printSql`: ``` const users = await createQueryBuilder("user") .where("user.firstName = :firstName", { firstName: "Timber" }) .orWhere("user.lastName = :lastName", { lastName: "Saw" }) .printSql() .getMany() ``` This query will return users and print the used sql statement to the console. ## Getting raw results[​](#getting-raw-results "Direct link to Getting raw results") There are two types of results you can get using select query builder: **entities** and **raw results**. Most of the time, you need to select real entities from your database, for example, users. For this purpose, you use `getOne` and `getMany`. However, sometimes you need to select specific data, like the *sum of all user photos*. Such data is not an entity, it's called raw data. To get raw data, you use `getRawOne` and `getRawMany`. Examples: ``` const { sum } = await dataSource .getRepository(User) .createQueryBuilder("user") .select("SUM(user.photosCount)", "sum") .where("user.id = :id", { id: 1 }) .getRawOne() ``` ``` const photosSums = await dataSource .getRepository(User) .createQueryBuilder("user") .select("user.id") .addSelect("SUM(user.photosCount)", "sum") .groupBy("user.id") .getRawMany() // result will be like this: [{ id: 1, sum: 25 }, { id: 2, sum: 13 }, ...] ``` ## Streaming result data[​](#streaming-result-data "Direct link to Streaming result data") You can use `stream` which returns you a stream. 
Streaming returns you raw data, and you must handle entity transformation manually: ``` const stream = await dataSource .getRepository(User) .createQueryBuilder("user") .where("user.id = :id", { id: 1 }) .stream() ``` ## Using pagination[​](#using-pagination "Direct link to Using pagination") Most of the time when you develop an application, you need pagination functionality. This is used if you have pagination, page slider, or infinite scroll components in your application. ``` const users = await dataSource .getRepository(User) .createQueryBuilder("user") .leftJoinAndSelect("user.photos", "photo") .take(10) .getMany() ``` This will give you the first 10 users with their photos. ``` const users = await dataSource .getRepository(User) .createQueryBuilder("user") .leftJoinAndSelect("user.photos", "photo") .skip(10) .getMany() ``` This will give you all except the first 10 users with their photos. You can combine those methods: ``` const users = await dataSource .getRepository(User) .createQueryBuilder("user") .leftJoinAndSelect("user.photos", "photo") .skip(5) .take(10) .getMany() ``` This will skip the first 5 users and take 10 users after them. `take` and `skip` may look like we are using `limit` and `offset`, but they aren't. `limit` and `offset` may not work as you expect once you have more complicated queries with joins or subqueries. Using `take` and `skip` will prevent those issues. ## Set locking[​](#set-locking "Direct link to Set locking") QueryBuilder supports both optimistic and pessimistic locking. ### Lock modes[​](#lock-modes "Direct link to Lock modes") Support of lock modes, and SQL statements they translate to, are listed in the table below (blank cell denotes unsupported). When specified lock mode is not supported, a `LockNotSupportedOnGivenDriverError` error will be thrown. 
|               | pessimistic_read                  | pessimistic_write       | dirty_read    | pessimistic_partial_write (deprecated, use onLocked instead) | pessimistic_write_or_fail (deprecated, use onLocked instead) | for_no_key_update | for_key_share |
| ------------- | --------------------------------- | ----------------------- | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- | ------------- |
| MySQL         | FOR SHARE (8+)/LOCK IN SHARE MODE | FOR UPDATE              | (nothing)     | FOR UPDATE SKIP LOCKED                                        | FOR UPDATE NOWAIT                                             |                   |               |
| Postgres      | FOR SHARE                         | FOR UPDATE              | (nothing)     | FOR UPDATE SKIP LOCKED                                        | FOR UPDATE NOWAIT                                             | FOR NO KEY UPDATE | FOR KEY SHARE |
| Oracle        | FOR UPDATE                        | FOR UPDATE              | (nothing)     |                                                               |                                                               |                   |               |
| SQL Server    | WITH (HOLDLOCK, ROWLOCK)          | WITH (UPDLOCK, ROWLOCK) | WITH (NOLOCK) |                                                               |                                                               |                   |               |
| AuroraDataApi | LOCK IN SHARE MODE                | FOR UPDATE              | (nothing)     |                                                               |                                                               |                   |               |
| CockroachDB   |                                   | FOR UPDATE              | (nothing)     |                                                               | FOR UPDATE NOWAIT                                             | FOR NO KEY UPDATE |               |

To use pessimistic read locking, use the following method:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .setLock("pessimistic_read")
    .getMany()
```

To use pessimistic write locking, use the following method:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .setLock("pessimistic_write")
    .getMany()
```

To use dirty read locking, use the following method:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .setLock("dirty_read")
    .getMany()
```

To use optimistic locking, use the following method:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .setLock("optimistic", existUser.version)
    .getMany()
```

Optimistic locking works in conjunction with both the `@VersionColumn` and `@UpdateDateColumn` decorators.
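When an optimistic lock check fails (another transaction updated the row and bumped its version), TypeORM throws `OptimisticLockVersionMismatchError`, and the usual recovery is to re-read the row and retry. As a rough sketch, such retry logic might be wrapped in a small helper; the `attemptUpdate` callback and attempt count below are illustrative, not part of TypeORM's API:

```typescript
// Hypothetical helper: retry an operation that may fail because another
// writer updated the row first (e.g. an optimistic-lock version mismatch).
async function withOptimisticRetry<T>(
    attemptUpdate: () => Promise<T>, // should re-read the row and re-apply changes
    maxAttempts = 3,
): Promise<T> {
    let lastError: unknown
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return await attemptUpdate()
        } catch (error) {
            lastError = error // another writer won; loop to re-read and retry
        }
    }
    throw lastError
}
```

In real code you would typically retry only on `OptimisticLockVersionMismatchError` and rethrow any other error immediately.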
### Lock tables

You can also lock tables using the following method:

```
const users = await dataSource
    .getRepository(Post)
    .createQueryBuilder("post")
    .leftJoin("post.author", "user")
    .setLock("pessimistic_write", undefined, ["post"])
    .getMany()
```

When the lock tables argument is provided, only the listed tables are named in the `FOR UPDATE OF` clause.

### setOnLocked

Allows you to control what happens when a row is locked. By default, the database will wait for the lock. You can change that behavior using `setOnLocked`.

To not wait:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .setLock("pessimistic_write")
    .setOnLocked("nowait")
    .getMany()
```

To skip the row:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .setLock("pessimistic_write")
    .setOnLocked("skip_locked")
    .getMany()
```

Database support for `setOnLocked` based on [lock mode](#lock-modes):

* Postgres: `pessimistic_read`, `pessimistic_write`, `for_no_key_update`, `for_key_share`
* MySQL 8+: `pessimistic_read`, `pessimistic_write`
* MySQL < 8, MariaDB: `pessimistic_write`
* CockroachDB: `pessimistic_write` (`nowait` only)

## Use custom index

In some cases you can tell the database server which index to use. This feature is only supported in MySQL.

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .useIndex("my_index") // name of the index
    .getMany()
```

## Max execution time

You can abort slow queries to avoid overloading the server:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .maxExecutionTime(1000) // milliseconds
    .getMany()
```

## Partial selection

If you want to select only some entity properties, you can use the following syntax:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .select(["user.id", "user.name"])
    .getMany()
```

This will only select the `id` and `name` of `User`.

## Using subqueries

You can easily create subqueries. Subqueries are supported in `FROM`, `WHERE` and `JOIN` expressions. Example:

```
const qb = await dataSource.getRepository(Post).createQueryBuilder("post")
const posts = qb
    .where(
        "post.title IN " +
            qb
                .subQuery()
                .select("user.name")
                .from(User, "user")
                .where("user.registered = :registered")
                .getQuery(),
    )
    .setParameter("registered", true)
    .getMany()
```

A more elegant way to do the same:

```
const posts = await dataSource
    .getRepository(Post)
    .createQueryBuilder("post")
    .where((qb) => {
        const subQuery = qb
            .subQuery()
            .select("user.name")
            .from(User, "user")
            .where("user.registered = :registered")
            .getQuery()
        return "post.title IN " + subQuery
    })
    .setParameter("registered", true)
    .getMany()
```

Alternatively, you can create a separate query builder and use its generated SQL:

```
const userQb = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .select("user.name")
    .where("user.registered = :registered", { registered: true })

const posts = await dataSource
    .getRepository(Post)
    .createQueryBuilder("post")
    .where("post.title IN (" + userQb.getQuery() + ")")
    .setParameters(userQb.getParameters())
    .getMany()
```

You can create subqueries in `FROM` like this:

```
const userQb = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .select("user.name", "name")
    .where("user.registered = :registered", { registered: true })

const posts = await dataSource
    .createQueryBuilder()
    .select("user.name", "name")
    .from("(" + userQb.getQuery() + ")", "user")
    .setParameters(userQb.getParameters())
    .getRawMany()
```

or using a more elegant syntax:

```
const posts = await dataSource
    .createQueryBuilder()
    .select("user.name", "name")
    .from((subQuery) => {
        return subQuery
            .select("user.name", "name")
            .from(User, "user")
            .where("user.registered = :registered", { registered: true })
    }, "user")
    .getRawMany()
```

If you want to add a subselect as a "second from", use `addFrom`.

You can use subselects in `SELECT` statements as well:

```
const posts = await dataSource
    .createQueryBuilder()
    .select("post.id", "id")
    .addSelect((subQuery) => {
        return subQuery.select("user.name", "name").from(User, "user").limit(1)
    }, "name")
    .from(Post, "post")
    .getRawMany()
```

## Hidden Columns

If the model you are querying has a column marked with `select: false`, you must use the `addSelect` function in order to retrieve the information from the column.

Let's say you have the following entity:

```
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string

    @Column({ select: false })
    password: string
}
```

Using a standard `find` or query, you will not receive the `password` property for the model. However, if you do the following:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .select("user.id", "id")
    .addSelect("user.password")
    .getMany()
```

You will get the property `password` in your query.

## Querying Deleted rows

If the model you are querying has a column with the attribute `@DeleteDateColumn` set, the query builder will automatically exclude rows that are "soft deleted".
Let's say you have the following entity:

```
import {
    Entity,
    PrimaryGeneratedColumn,
    Column,
    DeleteDateColumn,
} from "typeorm"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string

    @DeleteDateColumn()
    deletedAt?: Date
}
```

Using a standard `find` or query, you will not receive rows that have a value in that column. However, if you do the following:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .select("user.id", "id")
    .withDeleted()
    .getMany()
```

You will get all the rows, including the ones which are deleted.

## Common table expressions

`QueryBuilder` instances support [common table expressions](https://en.wikipedia.org/wiki/Hierarchical_and_recursive_queries_in_SQL#Common_table_expression), provided your database version supports them. Common table expressions aren't supported for Oracle yet.

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .select("user.id", "id")
    .addCommonTableExpression(
        `
        SELECT "userId" FROM "post"
        `,
        "post_users_ids",
    )
    .where(`user.id IN (SELECT "userId" FROM "post_users_ids")`)
    .getMany()
```

Result values of an `InsertQueryBuilder` or `UpdateQueryBuilder` can be used in Postgres:

```
const insertQueryBuilder = dataSource
    .getRepository(User)
    .createQueryBuilder()
    .insert()
    .values({
        name: "John Smith",
    })
    .returning(["id"])

const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .addCommonTableExpression(insertQueryBuilder, "insert_results")
    .where(`user.id IN (SELECT "id" FROM "insert_results")`)
    .getMany()
```

## Time Travel Queries

[Time Travel Queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) are currently supported only by `CockroachDB`.
```
const repository = dataSource.getRepository(Account)

// create a new account
let account = new Account()
account.name = "John Smith"
account.balance = 100
await repository.save(account)

// imagine we update the account balance 1 hour after creation
account.balance = 200
await repository.save(account)

// outputs { name: "John Smith", balance: "200" }
console.log(account)

// load the account state from 1 hour ago
account = await repository
    .createQueryBuilder("account")
    .timeTravelQuery(`'-1h'`)
    .getOneOrFail()

// outputs { name: "John Smith", balance: "100" }
console.log(account)
```

By default, `timeTravelQuery()` uses the `follower_read_timestamp()` function when no argument is passed. For other supported timestamp arguments and additional information, please refer to the [CockroachDB docs](https://www.cockroachlabs.com/docs/stable/as-of-system-time.html).

## Debugging

You can get the generated SQL from the query builder by calling `getQuery()` or `getQueryAndParameters()`.

If you just want the query, use `getQuery()`:

```
const sql = dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .where("user.id = :id", { id: 1 })
    .getQuery()
```

Which results in:

```
SELECT `user`.`id` as `userId`, `user`.`firstName` as `userFirstName`, `user`.`lastName` as `userLastName` FROM `users` `user` WHERE `user`.`id` = ?
```

If you want the query and the parameters, `getQueryAndParameters()` returns them as an array:

```
const queryAndParams = dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .where("user.id = :id", { id: 1 })
    .getQueryAndParameters()
```

Which results in:

```
[
    "SELECT `user`.`id` as `userId`, `user`.`firstName` as `userFirstName`, `user`.`lastName` as `userLastName` FROM `users` `user` WHERE `user`.`id` = ?",
    [1],
]
```

---

# Update using Query Builder

You can create `UPDATE` queries using `QueryBuilder`.
Examples:

```
await dataSource
    .createQueryBuilder()
    .update(User)
    .set({ firstName: "Timber", lastName: "Saw" })
    .where("id = :id", { id: 1 })
    .execute()
```

This is the most performance-efficient way to update entities in your database.

## Raw SQL support

In some cases, when you need to execute raw SQL in an update, you can use a function-style value:

```
await dataSource
    .createQueryBuilder()
    .update(User)
    .set({
        firstName: "Timber",
        lastName: "Saw",
        age: () => "age + 1",
    })
    .where("id = :id", { id: 1 })
    .execute()
```

> Warning: When using raw SQL, ensure that values are properly sanitized to prevent SQL injection.

---

# Query Runner

## What is a QueryRunner?

Each new `QueryRunner` instance takes a single connection from the connection pool, if the RDBMS supports connection pooling. For databases that do not support connection pools, it uses the same connection across the entire data source.

## Creating a new `QueryRunner` instance

Use the `createQueryRunner` method to create a new `QueryRunner`:

```
const queryRunner = dataSource.createQueryRunner()
```

## Using `QueryRunner`

After you create a new instance of `QueryRunner`, use the `connect` method to get a connection from the connection pool:

```
const queryRunner = dataSource.createQueryRunner()
await queryRunner.connect()
```

**Important**: Make sure to release the query runner when it is no longer needed, to make its connection available to the pool again:

```
await queryRunner.release()
```

After the connection is released, you cannot use the query runner's methods.
`QueryRunner` has a number of methods you can use. It also has its own `EntityManager` instance, available through the `manager` property, which lets you run `EntityManager` methods on the particular database connection used by the `QueryRunner` instance:

```
const queryRunner = dataSource.createQueryRunner()

// take a connection from the connection pool
await queryRunner.connect()

// use this particular connection to execute queries
const users = await queryRunner.manager.find(User)

// remember to release the connection after you are done using it
await queryRunner.release()
```

---

# Eager and Lazy Relations

## Eager relations

Eager relations are loaded automatically each time you load entities from the database. For example:

```
import { Entity, PrimaryGeneratedColumn, Column, ManyToMany } from "typeorm"
import { Question } from "./Question"

@Entity()
export class Category {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string

    @ManyToMany((type) => Question, (question) => question.categories)
    questions: Question[]
}
```

```
import {
    Entity,
    PrimaryGeneratedColumn,
    Column,
    ManyToMany,
    JoinTable,
} from "typeorm"
import { Category } from "./Category"

@Entity()
export class Question {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    title: string

    @Column()
    text: string

    @ManyToMany((type) => Category, (category) => category.questions, {
        eager: true,
    })
    @JoinTable()
    categories: Category[]
}
```

Now when you load questions you don't need to join or specify the relations you want to load. They will be loaded automatically:

```
const questionRepository = dataSource.getRepository(Question)

// questions will be loaded with their categories
const questions = await questionRepository.find()
```

Eager relations only work when you use `find*` methods. If you use `QueryBuilder`, eager relations are disabled and you have to use `leftJoinAndSelect` to load the relation.
Eager relations can only be used on one side of the relationship; using `eager: true` on both sides of a relationship is disallowed.

## Lazy relations

Entities in lazy relations are loaded once you access them. Such relations must have `Promise` as their type: you store your value in a promise, and when you load it, a promise is returned as well. Example:

```
import { Entity, PrimaryGeneratedColumn, Column, ManyToMany } from "typeorm"
import { Question } from "./Question"

@Entity()
export class Category {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string

    @ManyToMany((type) => Question, (question) => question.categories)
    questions: Promise<Question[]>
}
```

```
import {
    Entity,
    PrimaryGeneratedColumn,
    Column,
    ManyToMany,
    JoinTable,
} from "typeorm"
import { Category } from "./Category"

@Entity()
export class Question {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    title: string

    @Column()
    text: string

    @ManyToMany((type) => Category, (category) => category.questions)
    @JoinTable()
    categories: Promise<Category[]>
}
```

`categories` is a `Promise`. That means it is lazy, and it can only store a promise with a value inside. Example of how to save such a relation:

```
const category1 = new Category()
category1.name = "animals"
await dataSource.manager.save(category1)

const category2 = new Category()
category2.name = "zoo"
await dataSource.manager.save(category2)

const question = new Question()
question.categories = Promise.resolve([category1, category2])
await dataSource.manager.save(question)
```

Example of how to load objects inside lazy relations:

```
const [question] = await dataSource.getRepository(Question).find()
const categories = await question.categories

// you'll have all of the question's categories inside the "categories" variable now
```

Note: if you come from other languages (Java, PHP, etc.) and are used to using lazy relations everywhere, be careful.
Those languages aren't asynchronous, and lazy loading is achieved in a different way, without the use of promises. In JavaScript and Node.js, you have to use promises if you want lazy-loaded relations. This is a non-standard technique and is considered experimental in TypeORM.

---

# Many-to-many relations

## What are many-to-many relations?

Many-to-many is a relation where A contains multiple instances of B, and B contains multiple instances of A. Let's take for example `Question` and `Category` entities. A question can have multiple categories, and each category can have multiple questions.

```
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class Category {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string
}
```

```
import {
    Entity,
    PrimaryGeneratedColumn,
    Column,
    ManyToMany,
    JoinTable,
} from "typeorm"
import { Category } from "./Category"

@Entity()
export class Question {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    title: string

    @Column()
    text: string

    @ManyToMany(() => Category)
    @JoinTable()
    categories: Category[]
}
```

`@JoinTable()` is required for `@ManyToMany` relations. You must put `@JoinTable` on one (owning) side of the relation.
This example will produce the following tables:

```
+-------------+--------------+----------------------------+
|                         category                        |
+-------------+--------------+----------------------------+
| id          | int          | PRIMARY KEY AUTO_INCREMENT |
| name        | varchar(255) |                            |
+-------------+--------------+----------------------------+

+-------------+--------------+----------------------------+
|                         question                        |
+-------------+--------------+----------------------------+
| id          | int          | PRIMARY KEY AUTO_INCREMENT |
| title       | varchar(255) |                            |
| text        | varchar(255) |                            |
+-------------+--------------+----------------------------+

+-------------+--------------+----------------------------+
|               question_categories_category             |
+-------------+--------------+----------------------------+
| questionId  | int          | PRIMARY KEY FOREIGN KEY    |
| categoryId  | int          | PRIMARY KEY FOREIGN KEY    |
+-------------+--------------+----------------------------+
```

## Saving many-to-many relations

With [cascades](https://typeorm.io/docs/relations/relations.md#cascades) enabled, you can save this relation with only one `save` call.

```
const category1 = new Category()
category1.name = "animals"
await dataSource.manager.save(category1)

const category2 = new Category()
category2.name = "zoo"
await dataSource.manager.save(category2)

const question = new Question()
question.title = "dogs"
question.text = "who let the dogs out?"
question.categories = [category1, category2]
await dataSource.manager.save(question)
```

## Deleting many-to-many relations

With [cascades](https://typeorm.io/docs/relations/relations.md#cascades) enabled, you can delete this relation with only one `save` call. To delete a many-to-many relationship between two records, remove it from the corresponding field and save the record.
```
const question = await dataSource.getRepository(Question).findOne({
    relations: {
        categories: true,
    },
    where: { id: 1 },
})

question.categories = question.categories.filter((category) => {
    return category.id !== categoryToRemove.id
})

await dataSource.manager.save(question)
```

This will only remove the record in the join table. The `question` and `categoryToRemove` records will still exist.

## Soft Deleting a relationship with cascade

This example shows how the cascading soft delete behaves:

```
const category1 = new Category()
category1.name = "animals"

const category2 = new Category()
category2.name = "zoo"

const question = new Question()
question.categories = [category1, category2]
const newQuestion = await dataSource.manager.save(question)

await dataSource.manager.softRemove(newQuestion)
```

In this example we did not call `save` or `softRemove` for `category1` and `category2`; they are automatically saved and soft-deleted because the `cascade` relation option is set to `true`, like this:

```
import {
    Entity,
    PrimaryGeneratedColumn,
    Column,
    ManyToMany,
    JoinTable,
} from "typeorm"
import { Category } from "./Category"

@Entity()
export class Question {
    @PrimaryGeneratedColumn()
    id: number

    @ManyToMany(() => Category, (category) => category.questions, {
        cascade: true,
    })
    @JoinTable()
    categories: Category[]
}
```

## Loading many-to-many relations

To load questions with their categories, you must specify the relation in `FindOptions`:

```
const questionRepository = dataSource.getRepository(Question)

const questions = await questionRepository.find({
    relations: {
        categories: true,
    },
})
```

Or using `QueryBuilder` you can join them:

```
const questions = await dataSource
    .getRepository(Question)
    .createQueryBuilder("question")
    .leftJoinAndSelect("question.categories", "category")
    .getMany()
```

With eager loading enabled on a relation, you don't have to specify the relation in the find command, as it will always be loaded automatically. If you use `QueryBuilder`, eager relations are disabled and you have to use `leftJoinAndSelect` to load the relation.

## Bi-directional relations

Relations can be uni-directional and bi-directional. Uni-directional relations are relations with a relation decorator only on one side. Bi-directional relations are relations with decorators on both sides of a relation.

We just created a uni-directional relation. Let's make it bi-directional:

```
import { Entity, PrimaryGeneratedColumn, Column, ManyToMany } from "typeorm"
import { Question } from "./Question"

@Entity()
export class Category {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string

    @ManyToMany(() => Question, (question) => question.categories)
    questions: Question[]
}
```

```
import {
    Entity,
    PrimaryGeneratedColumn,
    Column,
    ManyToMany,
    JoinTable,
} from "typeorm"
import { Category } from "./Category"

@Entity()
export class Question {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    title: string

    @Column()
    text: string

    @ManyToMany(() => Category, (category) => category.questions)
    @JoinTable()
    categories: Category[]
}
```

We just made our relation bi-directional. Note that the inverse relation does not have a `@JoinTable`. `@JoinTable` must be on only one side of the relation.
Bi-directional relations allow you to join relations from both sides using `QueryBuilder`:

```
const categoriesWithQuestions = await dataSource
    .getRepository(Category)
    .createQueryBuilder("category")
    .leftJoinAndSelect("category.questions", "question")
    .getMany()
```

## Many-to-many relations with custom properties

In case you need to have additional properties in your many-to-many relationship, you have to create a new entity yourself. For example, if you would like entities `Question` and `Category` to have a many-to-many relationship with an additional `order` column, then you need to create an entity `QuestionToCategory` with two `ManyToOne` relations pointing in both directions and with the custom columns in it:

```
import { Entity, Column, ManyToOne, PrimaryGeneratedColumn } from "typeorm"
import { Question } from "./question"
import { Category } from "./category"

@Entity()
export class QuestionToCategory {
    @PrimaryGeneratedColumn()
    public questionToCategoryId: number

    @Column()
    public questionId: number

    @Column()
    public categoryId: number

    @Column()
    public order: number

    @ManyToOne(() => Question, (question) => question.questionToCategories)
    public question: Question

    @ManyToOne(() => Category, (category) => category.questionToCategories)
    public category: Category
}
```

Additionally, you will have to add a relationship like the following to `Question` and `Category`:

```
// category.ts
...
@OneToMany(() => QuestionToCategory, (questionToCategory) => questionToCategory.category)
public questionToCategories: QuestionToCategory[];

// question.ts
...
@OneToMany(() => QuestionToCategory, (questionToCategory) => questionToCategory.question)
public questionToCategories: QuestionToCategory[];
```

---

# Many-to-one / one-to-many relations

Many-to-one / one-to-many is a relation where A contains multiple instances of B, but B contains only one instance of A.
Let's take for example `User` and `Photo` entities. A user can have multiple photos, but each photo is owned by only one user.

```
import { Entity, PrimaryGeneratedColumn, Column, ManyToOne } from "typeorm"
import { User } from "./User"

@Entity()
export class Photo {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    url: string

    @ManyToOne(() => User, (user) => user.photos)
    user: User
}
```

```
import { Entity, PrimaryGeneratedColumn, Column, OneToMany } from "typeorm"
import { Photo } from "./Photo"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string

    @OneToMany(() => Photo, (photo) => photo.user)
    photos: Photo[]
}
```

Here we added `@OneToMany` to the `photos` property and specified the target relation type to be `Photo`. You can omit `@JoinColumn` in a `@ManyToOne` / `@OneToMany` relation. `@OneToMany` cannot exist without `@ManyToOne`. If you want to use `@OneToMany`, `@ManyToOne` is required. However, the inverse is not required: if you only care about the `@ManyToOne` relationship, you can define it without having `@OneToMany` on the related entity. The table of the entity where you set `@ManyToOne` will contain the "relation id" and a foreign key.
This example will produce the following tables:

```
+-------------+--------------+----------------------------+
|                          photo                          |
+-------------+--------------+----------------------------+
| id          | int          | PRIMARY KEY AUTO_INCREMENT |
| url         | varchar(255) |                            |
| userId      | int          | FOREIGN KEY                |
+-------------+--------------+----------------------------+

+-------------+--------------+----------------------------+
|                          user                           |
+-------------+--------------+----------------------------+
| id          | int          | PRIMARY KEY AUTO_INCREMENT |
| name        | varchar(255) |                            |
+-------------+--------------+----------------------------+
```

Example of how to save such a relation:

```
const photo1 = new Photo()
photo1.url = "me.jpg"
await dataSource.manager.save(photo1)

const photo2 = new Photo()
photo2.url = "me-and-bears.jpg"
await dataSource.manager.save(photo2)

const user = new User()
user.name = "John"
user.photos = [photo1, photo2]
await dataSource.manager.save(user)
```

or, alternatively, you can do:

```
const user = new User()
user.name = "Leo"
await dataSource.manager.save(user)

const photo1 = new Photo()
photo1.url = "me.jpg"
photo1.user = user
await dataSource.manager.save(photo1)

const photo2 = new Photo()
photo2.url = "me-and-bears.jpg"
photo2.user = user
await dataSource.manager.save(photo2)
```

With [cascades](https://typeorm.io/docs/relations/relations.md#cascades) enabled, you can save this relation with only one `save` call.
To load a user with photos inside, you must specify the relation in `FindOptions`:

```
const userRepository = dataSource.getRepository(User)
const users = await userRepository.find({
    relations: {
        photos: true,
    },
})

// or from the inverse side

const photoRepository = dataSource.getRepository(Photo)
const photos = await photoRepository.find({
    relations: {
        user: true,
    },
})
```

Or using `QueryBuilder` you can join them:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .leftJoinAndSelect("user.photos", "photo")
    .getMany()

// or from the inverse side

const photos = await dataSource
    .getRepository(Photo)
    .createQueryBuilder("photo")
    .leftJoinAndSelect("photo.user", "user")
    .getMany()
```

With eager loading enabled on a relation, you don't have to specify the relation in the find command, as it will always be loaded automatically. If you use `QueryBuilder`, eager relations are disabled and you have to use `leftJoinAndSelect` to load the relation.

---

# One-to-one relations

One-to-one is a relation where A contains only one instance of B, and B contains only one instance of A. Let's take for example `User` and `Profile` entities. A user can have only a single profile, and a single profile is owned by only a single user.

```
import { Entity, PrimaryGeneratedColumn, Column } from "typeorm"

@Entity()
export class Profile {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    gender: string

    @Column()
    photo: string
}
```

```
import {
    Entity,
    PrimaryGeneratedColumn,
    Column,
    OneToOne,
    JoinColumn,
} from "typeorm"
import { Profile } from "./Profile"

@Entity()
export class User {
    @PrimaryGeneratedColumn()
    id: number

    @Column()
    name: string

    @OneToOne(() => Profile)
    @JoinColumn()
    profile: Profile
}
```

Here we added `@OneToOne` to the `profile` property and specified the target relation type to be `Profile`. We also added `@JoinColumn`, which is required and must be set only on one side of the relation.
The table of the side on which you set `@JoinColumn` will contain a "relation id" and a foreign key to the target entity table. This example will produce the following tables:

```
+-------------+--------------+----------------------------+
|                         profile                         |
+-------------+--------------+----------------------------+
| id          | int          | PRIMARY KEY AUTO_INCREMENT |
| gender      | varchar(255) |                            |
| photo       | varchar(255) |                            |
+-------------+--------------+----------------------------+

+-------------+--------------+----------------------------+
|                          user                           |
+-------------+--------------+----------------------------+
| id          | int          | PRIMARY KEY AUTO_INCREMENT |
| name        | varchar(255) |                            |
| profileId   | int          | FOREIGN KEY                |
+-------------+--------------+----------------------------+
```

Again, `@JoinColumn` must be set only on one side of the relation: the side that must have the foreign key in the database table.

Example of how to save such a relation:

```
const profile = new Profile()
profile.gender = "male"
profile.photo = "me.jpg"
await dataSource.manager.save(profile)

const user = new User()
user.name = "Joe Smith"
user.profile = profile
await dataSource.manager.save(user)
```

With [cascades](https://typeorm.io/docs/relations/relations.md#cascades) enabled, you can save this relation with only one `save` call.

To load a user with their profile, you must specify the relation in `FindOptions`:

```
const users = await dataSource.getRepository(User).find({
    relations: {
        profile: true,
    },
})
```

Or using `QueryBuilder` you can join them:

```
const users = await dataSource
    .getRepository(User)
    .createQueryBuilder("user")
    .leftJoinAndSelect("user.profile", "profile")
    .getMany()
```

With eager loading enabled on a relation, you don't have to specify the relation in the find command, as it will always be loaded automatically. If you use `QueryBuilder`, eager relations are disabled and you have to use `leftJoinAndSelect` to load the relation.

Relations can be uni-directional and bi-directional.
Uni-directional relations have a relation decorator only on one side; bi-directional relations have decorators on both sides of the relation. We just created a uni-directional relation. Let's make it bi-directional: ``` import { Entity, PrimaryGeneratedColumn, Column, OneToOne } from "typeorm" import { User } from "./User" @Entity() export class Profile { @PrimaryGeneratedColumn() id: number @Column() gender: string @Column() photo: string @OneToOne(() => User, (user) => user.profile) // specify inverse side as a second parameter user: User } ``` ``` import { Entity, PrimaryGeneratedColumn, Column, OneToOne, JoinColumn, } from "typeorm" import { Profile } from "./Profile" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() name: string @OneToOne(() => Profile, (profile) => profile.user) // specify inverse side as a second parameter @JoinColumn() profile: Profile } ``` We just made our relation bi-directional. Note that the inverse relation does not have a `@JoinColumn`. `@JoinColumn` must only be on one side of the relation - on the table that will own the foreign key. Bi-directional relations allow you to join relations from both sides using `QueryBuilder`: ``` const profiles = await dataSource .getRepository(Profile) .createQueryBuilder("profile") .leftJoinAndSelect("profile.user", "user") .getMany() ``` --- # Relations ## What are relations?[​](#what-are-relations "Direct link to What are relations?") Relations help you to work with related entities easily. 
There are several types of relations: * [one-to-one](https://typeorm.io/docs/relations/one-to-one-relations.md) using `@OneToOne` * [many-to-one](https://typeorm.io/docs/relations/many-to-one-one-to-many-relations.md) using `@ManyToOne` * [one-to-many](https://typeorm.io/docs/relations/many-to-one-one-to-many-relations.md) using `@OneToMany` * [many-to-many](https://typeorm.io/docs/relations/many-to-many-relations.md) using `@ManyToMany` ## Relation options[​](#relation-options "Direct link to Relation options") There are several options you can specify for relations: * `eager: boolean` (default: `false`) - If set to true, the relation will always be loaded with the main entity when using `find*` methods or `QueryBuilder` on this entity * `cascade: boolean | ("insert" | "update")[]` (default: `false`) - If set to true, the related object will be inserted and updated in the database. You can also specify an array of [cascade options](#cascade-options). * `onDelete: "RESTRICT"|"CASCADE"|"SET NULL"` (default: `RESTRICT`) - specifies how the foreign key should behave when the referenced object is deleted * `nullable: boolean` (default: `true`) - Indicates whether this relation's column is nullable or not. By default it is nullable. * `orphanedRowAction: "nullify" | "delete" | "soft-delete" | "disable"` (default: `disable`) - When a parent is saved (with cascading enabled) without a child/children that still exist in the database, this controls what happens to them. * *delete* will remove these children from the database. * *soft-delete* will mark the children as soft-deleted. * *nullify* will remove the relation key. * *disable* will keep the relation intact; to delete the children, you have to use their own repository. 
## Cascades[​](#cascades "Direct link to Cascades") Cascades example: ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToMany } from "typeorm" import { Question } from "./Question" @Entity() export class Category { @PrimaryGeneratedColumn() id: number @Column() name: string @ManyToMany((type) => Question, (question) => question.categories) questions: Question[] } ``` ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToMany, JoinTable, } from "typeorm" import { Category } from "./Category" @Entity() export class Question { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() text: string @ManyToMany((type) => Category, (category) => category.questions, { cascade: true, }) @JoinTable() categories: Category[] } ``` ``` const category1 = new Category() category1.name = "ORMs" const category2 = new Category() category2.name = "Programming" const question = new Question() question.title = "How to ask questions?" question.text = "Where can I ask TypeORM-related questions?" question.categories = [category1, category2] await dataSource.manager.save(question) ``` As you can see in this example we did not call `save` for `category1` and `category2`. They will be automatically inserted, because we set `cascade` to true. Keep in mind - great power comes with great responsibility. Cascades may seem like a good and easy way to work with relations, but they may also bring bugs and security issues when some undesired object is being saved into the database. Also, they provide a less explicit way of saving new objects into the database. ### Cascade Options[​](#cascade-options "Direct link to Cascade Options") The `cascade` option can be set as a `boolean` or an array of cascade options `("insert" | "update" | "remove" | "soft-remove" | "recover")[]`. It will default to `false`, meaning no cascades. Setting `cascade: true` will enable full cascades. You can also specify options by providing an array. 
For example: ``` @Entity() export class Post { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() text: string // Full cascades on categories. @ManyToMany((type) => PostCategory, { cascade: true, }) @JoinTable() categories: PostCategory[] // Cascade insert here means if there is a new PostDetails instance set // on this relation, it will be inserted automatically to the db when you save this Post entity @ManyToMany((type) => PostDetails, (details) => details.posts, { cascade: ["insert"], }) @JoinTable() details: PostDetails[] // Cascade update here means if there are changes to an existing PostImage, it // will be updated automatically to the db when you save this Post entity @ManyToMany((type) => PostImage, (image) => image.posts, { cascade: ["update"], }) @JoinTable() images: PostImage[] // Cascade insert & update here means if there are new PostInformation instances // or an update to an existing one, they will be automatically inserted or updated // when you save this Post entity @ManyToMany((type) => PostInformation, (information) => information.posts, { cascade: ["insert", "update"], }) @JoinTable() informations: PostInformation[] } ``` ## `@JoinColumn` options[​](#joincolumn-options "Direct link to joincolumn-options") `@JoinColumn` not only defines which side of the relation contains the join column with a foreign key, but also allows you to customize the join column name and the referenced column name. When we set `@JoinColumn`, it automatically creates a column in the database named `propertyName + referencedColumnName`. For example: ``` @ManyToOne(type => Category) @JoinColumn() // this decorator is optional for @ManyToOne, but required for @OneToOne category: Category; ``` This code will create a `categoryId` column in the database. 
If you want to change this name in the database you can specify a custom join column name: ``` @ManyToOne(type => Category) @JoinColumn({ name: "cat_id" }) category: Category; ``` Join columns are always a reference to some other columns (using a foreign key). By default your relation always refers to the primary column of the related entity. If you want to create a relation with other columns of the related entity - you can specify them in `@JoinColumn` as well: ``` @ManyToOne(type => Category) @JoinColumn({ referencedColumnName: "name" }) category: Category; ``` The relation now refers to `name` of the `Category` entity, instead of `id`. The column name for that relation will become `categoryName`. You can also join multiple columns. Note that they do not reference the primary column of the related entity by default: you must provide the referenced column name. ``` @ManyToOne(type => Category) @JoinColumn([ { name: "category_id", referencedColumnName: "id" }, { name: "locale_id", referencedColumnName: "locale_id" } ]) category: Category; ``` ## `@JoinTable` options[​](#jointable-options "Direct link to jointable-options") `@JoinTable` is used for `many-to-many` relations and describes the join columns of the "junction" table. A junction table is a special separate table created automatically by TypeORM with columns that refer to the related entities. You can change the name of the generated "junction" table, as well as the column names inside it and their referenced columns, with `joinColumn` and `inverseJoinColumn`: ``` @ManyToMany(type => Category) @JoinTable({ name: "question_categories", // table name for the junction table of this relation joinColumn: { name: "question", referencedColumnName: "id" }, inverseJoinColumn: { name: "category", referencedColumnName: "id" } }) categories: Category[]; ``` If the destination table has composite primary keys, then an array of properties must be sent to `@JoinTable`. 
--- # Relations FAQ ## How to create self referencing relation?[​](#how-to-create-self-referencing-relation "Direct link to How to create self referencing relation?") Self-referencing relations are relations which have a relation to themselves. This is useful when you are storing entities in a tree-like structure. The "adjacency list" pattern is also implemented using self-referencing relations. For example, you may want to create a category tree in your application. Categories can nest categories, nested categories can nest other categories, etc. Self-referencing relations come in handy here. Basically, self-referencing relations are just regular relations that target the entity itself. Example: ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToOne, OneToMany, } from "typeorm" @Entity() export class Category { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() text: string @ManyToOne((type) => Category, (category) => category.childCategories) parentCategory: Category @OneToMany((type) => Category, (category) => category.parentCategory) childCategories: Category[] } ``` ## How to use relation id without joining relation?[​](#how-to-use-relation-id-without-joining-relation "Direct link to How to use relation id without joining relation?") Sometimes you want to have, in your object, the id of the related object without loading it. 
For example: ``` import { Entity, PrimaryGeneratedColumn, Column } from "typeorm" @Entity() export class Profile { @PrimaryGeneratedColumn() id: number @Column() gender: string @Column() photo: string } ``` ``` import { Entity, PrimaryGeneratedColumn, Column, OneToOne, JoinColumn, } from "typeorm" import { Profile } from "./Profile" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() name: string @OneToOne((type) => Profile) @JoinColumn() profile: Profile } ``` When you load a user without `profile` joined you won't have any information about profile in your user object, even profile id: ``` User { id: 1, name: "Umed" } ``` But sometimes you want to know what is the "profile id" of this user without loading the whole profile for this user. To do this you just need to add another property to your entity with `@Column` named exactly as the column created by your relation. Example: ``` import { Entity, PrimaryGeneratedColumn, Column, OneToOne, JoinColumn, } from "typeorm" import { Profile } from "./Profile" @Entity() export class User { @PrimaryGeneratedColumn() id: number @Column() name: string @Column({ nullable: true }) profileId: number @OneToOne((type) => Profile) @JoinColumn() profile: Profile } ``` That's all. 
Next time you load a user object it will contain the profile id: ``` User { id: 1, name: "Umed", profileId: 1 } ``` ## How to load relations in entities?[​](#how-to-load-relations-in-entities "Direct link to How to load relations in entities?") The easiest way to load your entity relations is to use the `relations` option in `FindOptions`: ``` const users = await dataSource.getRepository(User).find({ relations: { profile: true, photos: true, videos: true, }, }) ``` An alternative and more flexible way is to use `QueryBuilder`: ``` const user = await dataSource .getRepository(User) .createQueryBuilder("user") .leftJoinAndSelect("user.profile", "profile") .leftJoinAndSelect("user.photos", "photo") .leftJoinAndSelect("user.videos", "video") .getMany() ``` Using `QueryBuilder` you can do `innerJoinAndSelect` instead of `leftJoinAndSelect` (to learn the difference between `LEFT JOIN` and `INNER JOIN`, refer to your SQL documentation), join relation data by a condition, apply ordering, etc. Learn more about [`QueryBuilder`](https://typeorm.io/docs/query-builder/select-query-builder.md). ## Avoid relation property initializers[​](#avoid-relation-property-initializers "Direct link to Avoid relation property initializers") Sometimes it is useful to initialize your relation properties, for example: ``` import { Entity, PrimaryGeneratedColumn, Column, ManyToMany, JoinTable, } from "typeorm" import { Category } from "./Category" @Entity() export class Question { @PrimaryGeneratedColumn() id: number @Column() title: string @Column() text: string @ManyToMany((type) => Category, (category) => category.questions) @JoinTable() categories: Category[] = [] // see = [] initialization here } ``` However, in TypeORM entities it may cause problems. To understand the problem, let's first try to load a Question entity WITHOUT the initializer set. When you load a question it will return an object like this: ``` Question { id: 1, title: "Question about ..." 
} ``` Now when you save this object, the `categories` inside it won't be touched - because it is unset. But if you have an initializer, the loaded object will look as follows: ``` Question { id: 1, title: "Question about ...", categories: [] } ``` When you save the object, TypeORM will check if there are any categories in the database bound to the question - and it will detach all of them. Why? Because a relation equal to `[]` is treated as if its items were removed; there is no other way to check whether an object was removed from the entity or not. Therefore, saving an object like this will bring you problems - it will remove all previously set categories. How to avoid this behaviour? Simply don't initialize relation arrays in your entities. The same rule applies to constructors - don't initialize them there either. ## Avoid foreign key constraint creation[​](#avoid-foreign-key-constraint-creation "Direct link to Avoid foreign key constraint creation") Sometimes for performance reasons you might want to have a relation between entities, but without a foreign key constraint. You can control whether a foreign key constraint is created with the `createForeignKeyConstraints` option (default: `true`). ``` import { Entity, PrimaryColumn, Column, ManyToOne } from "typeorm" import { Person } from "./Person" @Entity() export class ActionLog { @PrimaryColumn() id: number @Column() date: Date @Column() action: string @ManyToOne((type) => Person, { createForeignKeyConstraints: false, }) person: Person } ``` ## Avoid circular import errors[​](#avoid-circular-import-errors "Direct link to Avoid circular import errors") Here is an example of how to define your entities so that circular imports don't cause errors in some environments. In this situation we have Action.ts and Person.ts importing each other for a many-to-many relationship. We use `import type` so that we can use the type information without any JavaScript code being generated. 
``` import { Entity, PrimaryColumn, Column, ManyToMany } from "typeorm" import type { Person } from "./Person" @Entity() export class ActionLog { @PrimaryColumn() id: number @Column() date: Date @Column() action: string @ManyToMany("Person", (person: Person) => person.id) person: Person } ``` ``` import { Entity, PrimaryColumn, ManyToMany } from "typeorm" import type { ActionLog } from "./Action" @Entity() export class Person { @PrimaryColumn() id: number @ManyToMany("ActionLog", (actionLog: ActionLog) => actionLog.id) log: ActionLog } ``` --- # Custom repositories You can create a custom repository which should contain methods to work with your database. For example, let's say we want to have a method called `findByName(firstName: string, lastName: string)` which will search for users by given first and last names. The best place for this method is a `Repository`, so that we could call it like `userRepository.findByName(...)`. You can achieve this using custom repositories. There are several ways in which custom repositories can be created. 
* [How to create custom repository](#how-to-create-custom-repository) * [Using custom repositories in transactions](#using-custom-repositories-in-transactions) ## How to create custom repository?[​](#how-to-create-custom-repository "Direct link to How to create custom repository?") It's common practice to assign a repository instance to a globally exported variable and use this variable across your app, for example: ``` // user.repository.ts export const UserRepository = dataSource.getRepository(User) // user.controller.ts export class UserController { users() { return UserRepository.find() } } ``` In order to extend `UserRepository` functionality you can use the `.extend` method of the `Repository` class: ``` // user.repository.ts export const UserRepository = dataSource.getRepository(User).extend({ findByName(firstName: string, lastName: string) { return this.createQueryBuilder("user") .where("user.firstName = :firstName", { firstName }) .andWhere("user.lastName = :lastName", { lastName }) .getMany() }, }) // user.controller.ts export class UserController { users() { return UserRepository.findByName("Timber", "Saw") } } ``` ## Using custom repositories in transactions[​](#using-custom-repositories-in-transactions "Direct link to Using custom repositories in transactions") Transactions have their own scope of execution: they have their own query runner, entity manager and repository instances. That's why using the global (data source's) entity manager and repositories won't work in transactions. In order to execute queries properly in the scope of a transaction you **must** use the provided entity manager and its `getRepository` method. 
In order to use custom repositories within transaction, you must use `withRepository` method of the provided entity manager instance: ``` await connection.transaction(async (manager) => { // in transactions you MUST use manager instance provided by a transaction, // you cannot use global entity managers or repositories, // because this manager is exclusive and transactional const userRepository = manager.withRepository(UserRepository) await userRepository.createAndSave("Timber", "Saw") const timber = await userRepository.findByName("Timber", "Saw") }) ``` --- # `EntityManager` API * `dataSource` - The DataSource used by `EntityManager`. ``` const dataSource = manager.dataSource ``` * `queryRunner` - The query runner used by `EntityManager`. Used only in transactional instances of EntityManager. ``` const queryRunner = manager.queryRunner ``` * `transaction` - Provides a transaction where multiple database requests will be executed in a single database transaction. Learn more [Transactions](https://typeorm.io/docs/advanced-topics/transactions.md). ``` await manager.transaction(async (manager) => { // NOTE: you must perform all database operations using the given manager instance // it's a special instance of EntityManager working with this transaction // and don't forget to await things here }) ``` * `query` - Executes a raw SQL query. ``` const rawData = await manager.query(`SELECT * FROM USERS`) // You can also use parameters to avoid SQL injection // The syntax differs between the drivers // aurora-mysql, better-sqlite3, capacitor, cordova, // expo, mariadb, mysql, nativescript, react-native, // sap, sqlite, sqljs const rawData = await manager.query( "SELECT * FROM USERS WHERE name = ? 
and age = ?", ["John", 24], ) // aurora-postgres, cockroachdb, postgres const rawData = await manager.query( "SELECT * FROM USERS WHERE name = $1 and age = $2", ["John", 24], ) // oracle const rawData = await manager.query( "SELECT * FROM USERS WHERE name = :1 and age = :2", ["John", 24], ) // spanner const rawData = await manager.query( "SELECT * FROM USERS WHERE name = @param0 and age = @param1", ["John", 24], ) // mssql const rawData = await manager.query( "SELECT * FROM USERS WHERE name = @0 and age = @1", ["John", 24], ) ``` * `sql` - Executes a raw SQL query using template literals. ``` const rawData = await manager.sql`SELECT * FROM USERS WHERE name = ${"John"} and age = ${24}` ``` Learn more about using the [SQL Tag syntax](https://typeorm.io/docs/guides/sql-tag.md). * `createQueryBuilder` - Creates a query builder used to build SQL queries. Learn more about [QueryBuilder](https://typeorm.io/docs/query-builder/select-query-builder.md). ``` const users = await manager .createQueryBuilder() .select() .from(User, "user") .where("user.name = :name", { name: "John" }) .getMany() ``` * `hasId` - Checks if the given entity has its primary column property defined. ``` if (manager.hasId(user)) { // ... do something } ``` * `getId` - Gets the given entity's primary column property value. If the entity has composite primary keys then the returned value will be an object with the names and values of the primary columns. ``` const userId = manager.getId(user) // userId === 1 ``` * `create` - Creates a new instance of `User`. Optionally accepts an object literal with user properties which will be written into the newly created user object. ``` const user = manager.create(User) // same as const user = new User(); const user = manager.create(User, { id: 1, firstName: "Timber", lastName: "Saw", }) // same as const user = new User(); user.firstName = "Timber"; user.lastName = "Saw"; ``` * `merge` - Merges multiple entities into a single entity. 
``` const user = new User() manager.merge(User, user, { firstName: "Timber" }, { lastName: "Saw" }) // same as user.firstName = "Timber"; user.lastName = "Saw"; ``` * `preload` - Creates a new entity from the given plain JavaScript object. If the entity already exists in the database, then it loads it (and everything related to it), replaces all values with the new ones from the given object, and returns the new entity. The new entity is actually loaded from the database entity with all properties replaced from the new object. ``` const partialUser = { id: 1, firstName: "Rizzrak", profile: { id: 1, }, } const user = await manager.preload(User, partialUser) // user will contain all missing data from partialUser with partialUser property values: // { id: 1, firstName: "Rizzrak", lastName: "Saw", profile: { id: 1, ... } } ``` * `save` - Saves a given entity or array of entities. If the entity already exists in the database, then it's updated. If the entity does not exist in the database yet, it's inserted. It saves all given entities in a single transaction (when the entity manager is not transactional). Also supports partial updating since all undefined properties are skipped. In order to make a value `NULL`, you must manually set the property to equal `null`. ``` await manager.save(user) await manager.save([category1, category2, category3]) ``` * `remove` - Removes a given entity or array of entities. It removes all given entities in a single transaction (when the entity manager is not transactional). ``` await manager.remove(user) await manager.remove([category1, category2, category3]) ``` * `insert` - Inserts a new entity, or array of entities. ``` await manager.insert(User, { firstName: "Timber", lastName: "Timber", }) await manager.insert(User, [ { firstName: "Foo", lastName: "Bar", }, { firstName: "Rizz", lastName: "Rak", }, ]) ``` * `update` - Updates entities by entity id, ids or given conditions. Sets fields from the supplied partial entity. 
``` await manager.update(User, { age: 18 }, { category: "ADULT" }) // executes UPDATE user SET category = ADULT WHERE age = 18 await manager.update(User, 1, { firstName: "Rizzrak" }) // executes UPDATE user SET firstName = Rizzrak WHERE id = 1 ``` * `updateAll` - Updates *all* entities of target type (without WHERE clause). Sets fields from the supplied partial entity. ``` await manager.updateAll(User, { category: "ADULT" }) // executes UPDATE user SET category = ADULT ``` * `upsert` - Inserts a new entity or array of entities unless they already exist, in which case they are updated instead. Supported by AuroraDataApi, Cockroach, Mysql, Postgres, and Sqlite database drivers. When an upsert operation results in an update (due to a conflict), special columns like `@UpdateDateColumn` and `@VersionColumn` are automatically updated to their current values. ``` await manager.upsert( User, [ { externalId: "abc123", firstName: "Rizzrak" }, { externalId: "bca321", firstName: "Karzzir" }, ], ["externalId"], ) /** executes * INSERT INTO user * VALUES * (externalId = abc123, firstName = Rizzrak), * (externalId = bca321, firstName = Karzzir) * ON CONFLICT (externalId) DO UPDATE SET firstName = EXCLUDED.firstName **/ ``` * `delete` - Deletes entities by entity id, ids or given conditions. ``` await manager.delete(User, 1) await manager.delete(User, [1, 2, 3]) await manager.delete(User, { firstName: "Timber" }) ``` * `deleteAll` - Deletes *all* entities of target type (without WHERE clause). ``` await manager.deleteAll(User) // executes DELETE FROM user ``` Refer also to the `clear` method, which performs a database `TRUNCATE TABLE` operation instead. * `increment` - Increments some column by the provided value of entities that match the given options. ``` await manager.increment(User, { firstName: "Timber" }, "age", 3) ``` * `decrement` - Decrements some column by the provided value of entities that match the given options. 
``` await manager.decrement(User, { firstName: "Timber" }, "age", 3) ``` * `exists` - Checks whether any entity exists that matches `FindOptions`. ``` const exists = await manager.exists(User, { where: { firstName: "Timber", }, }) ``` * `existsBy` - Checks whether any entity exists that matches `FindOptionsWhere`. ``` const exists = await manager.existsBy(User, { firstName: "Timber" }) ``` * `count` - Counts entities that match `FindOptions`. Useful for pagination. ``` const count = await manager.count(User, { where: { firstName: "Timber", }, }) ``` * `countBy` - Counts entities that match `FindOptionsWhere`. Useful for pagination. ``` const count = await manager.countBy(User, { firstName: "Timber" }) ``` * `find` - Finds entities that match given `FindOptions`. ``` const timbers = await manager.find(User, { where: { firstName: "Timber", }, }) ``` * `findBy` - Finds entities that match given `FindOptionsWhere`. ``` const timbers = await manager.findBy(User, { firstName: "Timber", }) ``` * `findAndCount` - Finds entities that match given `FindOptions`. Also counts all entities that match the given conditions, but ignores pagination settings (the `skip` and `take` options). ``` const [timbers, timbersCount] = await manager.findAndCount(User, { where: { firstName: "Timber", }, }) ``` * `findAndCountBy` - Finds entities that match given `FindOptionsWhere`. Also counts all entities that match the given conditions, but ignores pagination settings (the `skip` and `take` options). ``` const [timbers, timbersCount] = await manager.findAndCountBy(User, { firstName: "Timber", }) ``` * `findOne` - Finds the first entity that matches given `FindOptions`. ``` const timber = await manager.findOne(User, { where: { firstName: "Timber", }, }) ``` * `findOneBy` - Finds the first entity that matches given `FindOptionsWhere`. ``` const timber = await manager.findOneBy(User, { firstName: "Timber" }) ``` * `findOneOrFail` - Finds the first entity that matches some id or find options. 
Rejects the returned promise if nothing matches. ``` const timber = await manager.findOneOrFail(User, { where: { firstName: "Timber", }, }) ``` * `findOneByOrFail` - Finds the first entity that matches given `FindOptionsWhere`. Rejects the returned promise if nothing matches. ``` const timber = await manager.findOneByOrFail(User, { firstName: "Timber" }) ``` * `clear` - Clears all the data from the given table (truncates/drops it). ``` await manager.clear(User) ``` * `getRepository` - Gets `Repository` to perform operations on a specific entity. Learn more about [Repositories](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md). ``` const userRepository = manager.getRepository(User) ``` * `getTreeRepository` - Gets `TreeRepository` to perform operations on a specific entity. Learn more about [Repositories](https://typeorm.io/docs/working-with-entity-manager/working-with-repository.md). ``` const categoryRepository = manager.getTreeRepository(Category) ``` * `getMongoRepository` - Gets `MongoRepository` to perform operations on a specific entity. Learn more about [MongoDB](https://typeorm.io/docs/drivers/mongodb.md). ``` const userRepository = manager.getMongoRepository(User) ``` * `withRepository` - Gets a custom repository instance used in a transaction. Learn more about [Custom repositories](https://typeorm.io/docs/working-with-entity-manager/custom-repository.md). ``` const myUserRepository = manager.withRepository(UserRepository) ``` * `release` - Releases the query runner of an entity manager. Used only when the query runner was created and managed manually. 
``` await manager.release() ``` --- # Find Options ## Basic options[​](#basic-options "Direct link to Basic options") All repository and manager `.find*` methods accept special options you can use to query the data you need without using `QueryBuilder`: * `select` - indicates which properties of the main object must be selected ``` userRepository.find({ select: { firstName: true, lastName: true, }, }) ``` will execute following query: ``` SELECT "firstName", "lastName" FROM "user" ``` * `relations` - relations that need to be loaded with the main entity. Sub-relations can also be loaded (shorthand for `join` and `leftJoinAndSelect`) ``` userRepository.find({ relations: { profile: true, photos: true, videos: true, }, }) userRepository.find({ relations: { profile: true, photos: true, videos: { videoAttributes: true, }, }, }) ``` will execute following queries: ``` SELECT * FROM "user" LEFT JOIN "profile" ON "profile"."id" = "user"."profileId" LEFT JOIN "photos" ON "photos"."id" = "user"."photoId" LEFT JOIN "videos" ON "videos"."id" = "user"."videoId" SELECT * FROM "user" LEFT JOIN "profile" ON "profile"."id" = "user"."profileId" LEFT JOIN "photos" ON "photos"."id" = "user"."photoId" LEFT JOIN "videos" ON "videos"."id" = "user"."videoId" LEFT JOIN "video_attributes" ON "video_attributes"."id" = "videos"."video_attributesId" ``` * `where` - simple conditions by which the entity should be queried. ``` userRepository.find({ where: { firstName: "Timber", lastName: "Saw", }, }) ``` will execute following query: ``` SELECT * FROM "user" WHERE "firstName" = 'Timber' AND "lastName" = 'Saw' ``` Querying a column from an embedded entity should be done with respect to the hierarchy in which it was defined. 
Example: ``` userRepository.find({ relations: { project: true, }, where: { project: { name: "TypeORM", initials: "TORM", }, }, }) ``` will execute following query: ``` SELECT * FROM "user" LEFT JOIN "project" ON "project"."id" = "user"."projectId" WHERE "project"."name" = 'TypeORM' AND "project"."initials" = 'TORM' ``` Querying with OR operator: ``` userRepository.find({ where: [ { firstName: "Timber", lastName: "Saw" }, { firstName: "Stan", lastName: "Lee" }, ], }) ``` will execute following query: ``` SELECT * FROM "user" WHERE ("firstName" = 'Timber' AND "lastName" = 'Saw') OR ("firstName" = 'Stan' AND "lastName" = 'Lee') ``` * `order` - selection order. ``` userRepository.find({ order: { name: "ASC", id: "DESC", }, }) ``` will execute following query: ``` SELECT * FROM "user" ORDER BY "name" ASC, "id" DESC ``` * `withDeleted` - include entities which have been soft deleted with `softDelete` or `softRemove`, e.g. have their `@DeleteDateColumn` column set. By default, soft deleted entities are not included. ``` userRepository.find({ withDeleted: true, }) ``` `find*` methods which return multiple entities (`find`, `findBy`, `findAndCount`, `findAndCountBy`) also accept following options: * `skip` - offset (paginated) from where entities should be taken. ``` userRepository.find({ skip: 5, }) ``` ``` SELECT * FROM "user" OFFSET 5 ``` * `take` - limit (paginated) - max number of entities that should be taken. 
``` userRepository.find({ take: 10, }) ``` will execute following query: ``` SELECT * FROM "user" LIMIT 10 ``` **`skip` and `take` should be used together** If you are using TypeORM with MSSQL, and want to use `take` or `limit`, you need to use `order` as well or you will receive the following error: `'Invalid usage of the option NEXT in the FETCH statement.'` ``` userRepository.find({ order: { columnName: "ASC", }, skip: 0, take: 10, }) ``` will execute following query: ``` SELECT * FROM "user" ORDER BY "columnName" ASC LIMIT 10 OFFSET 0 ``` * `cache` - Enables or disables query result caching. See [caching](https://typeorm.io/docs/query-builder/caching.md) for more information and options. ``` userRepository.find({ cache: true, }) ``` * `lock` - Enables a locking mechanism for the query. Can be used only in `findOne` and `findOneBy` methods. `lock` is an object which can be defined as: ``` { mode: "optimistic", version: number | Date } ``` or ``` { mode: "pessimistic_read" | "pessimistic_write" | "dirty_read" | /* "pessimistic_partial_write" and "pessimistic_write_or_fail" are deprecated and will be removed in a future version. Use onLocked instead.
*/ "pessimistic_partial_write" | "pessimistic_write_or_fail" | "for_no_key_update" | "for_key_share", tables: string[], onLocked: "nowait" | "skip_locked" } ``` for example: ``` userRepository.findOne({ where: { id: 1, }, lock: { mode: "optimistic", version: 1 }, }) ``` See [lock modes](https://typeorm.io/docs/query-builder/select-query-builder.md#lock-modes) for more information Complete example of find options: ``` userRepository.find({ select: { firstName: true, lastName: true, }, relations: { profile: true, photos: true, videos: true, }, where: { firstName: "Timber", lastName: "Saw", profile: { userName: "tshaw", }, }, order: { name: "ASC", id: "DESC", }, skip: 5, take: 10, cache: true, }) ``` Find without arguments: ``` userRepository.find() ``` will execute following query: ``` SELECT * FROM "user" ``` ## Advanced options[​](#advanced-options "Direct link to Advanced options") TypeORM provides a lot of built-in operators that can be used to create more complex comparisons: * `Not` ``` import { Not } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: Not("About #1"), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" != 'About #1' ``` * `LessThan` ``` import { LessThan } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ likes: LessThan(10), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "likes" < 10 ``` * `LessThanOrEqual` ``` import { LessThanOrEqual } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ likes: LessThanOrEqual(10), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "likes" <= 10 ``` * `MoreThan` ``` import { MoreThan } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ likes: MoreThan(10), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "likes" > 10 ``` * `MoreThanOrEqual` ``` import { MoreThanOrEqual } from "typeorm" const loadedPosts = 
await dataSource.getRepository(Post).findBy({ likes: MoreThanOrEqual(10), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "likes" >= 10 ``` * `Equal` ``` import { Equal } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: Equal("About #2"), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" = 'About #2' ``` * `Like` ``` import { Like } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: Like("%out #%"), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" LIKE '%out #%' ``` * `ILike` ``` import { ILike } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: ILike("%out #%"), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" ILIKE '%out #%' ``` * `Between` ``` import { Between } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ likes: Between(1, 10), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "likes" BETWEEN 1 AND 10 ``` * `In` ``` import { In } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: In(["About #2", "About #3"]), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" IN ('About #2','About #3') ``` * `Any` ``` import { Any } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: Any(["About #2", "About #3"]), }) ``` will execute following query (Postgres notation): ``` SELECT * FROM "post" WHERE "title" = ANY(['About #2','About #3']) ``` * `IsNull` ``` import { IsNull } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: IsNull(), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" IS NULL ``` * `ArrayContains` ``` import { ArrayContains } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ categories: 
ArrayContains(["TypeScript"]), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "categories" @> '{TypeScript}' ``` * `ArrayContainedBy` ``` import { ArrayContainedBy } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ categories: ArrayContainedBy(["TypeScript"]), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "categories" <@ '{TypeScript}' ``` * `ArrayOverlap` ``` import { ArrayOverlap } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ categories: ArrayOverlap(["TypeScript"]), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "categories" && '{TypeScript}' ``` * `Raw` ``` import { Raw } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ likes: Raw("dislikes - 4"), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "likes" = "dislikes" - 4 ``` In the simplest case, a raw query is inserted immediately after the equal symbol. But you can also completely rewrite the comparison logic using the function. ``` import { Raw } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ currentDate: Raw((alias) => `${alias} > NOW()`), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "currentDate" > NOW() ``` If you need to provide user input, you should not include the user input directly in your query as this may create a SQL injection vulnerability. Instead, you can use the second argument of the `Raw` function to provide a list of parameters to bind to the query. 
``` import { Raw } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ currentDate: Raw((alias) => `${alias} > :date`, { date: "2020-10-06" }), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "currentDate" > '2020-10-06' ``` If you need to provide user input that is an array, you can bind them as a list of values in the SQL statement by using the special expression syntax: ``` import { Raw } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: Raw((alias) => `${alias} IN (:...titles)`, { titles: [ "Go To Statement Considered Harmful", "Structured Programming", ], }), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" IN ('Go To Statement Considered Harmful', 'Structured Programming') ``` ## Combining Advanced Options[​](#combining-advanced-options "Direct link to Combining Advanced Options") Also you can combine these operators with below: * `Not` ``` import { Not, MoreThan, Equal } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ likes: Not(MoreThan(10)), title: Not(Equal("About #2")), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE NOT("likes" > 10) AND NOT("title" = 'About #2') ``` * `Or` ``` import { Or, Equal, ILike } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: Or(Equal("About #2"), ILike("About%")), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE "title" = 'About #2' OR "title" ILIKE 'About%' ``` * `And` ``` import { And, Not, Equal, ILike } from "typeorm" const loadedPosts = await dataSource.getRepository(Post).findBy({ title: And(Not(Equal("About #2")), ILike("%About%")), }) ``` will execute following query: ``` SELECT * FROM "post" WHERE NOT("title" = 'About #2') AND "title" ILIKE '%About%' ``` --- # Repository APIs ## `Repository` API[​](#repository-api "Direct link to repository-api") * `manager` - The `EntityManager` used by this 
repository. ``` const manager = repository.manager ``` * `metadata` - The `EntityMetadata` of the entity managed by this repository. ``` const metadata = repository.metadata ``` * `queryRunner` - The query runner used by `EntityManager`. Used only in transactional instances of EntityManager. ``` const queryRunner = repository.queryRunner ``` * `target` - The target entity class managed by this repository. Used only in transactional instances of EntityManager. ``` const target = repository.target ``` * `createQueryBuilder` - Creates a query builder used to build SQL queries. Learn more about [QueryBuilder](https://typeorm.io/docs/query-builder/select-query-builder.md). ``` const users = await repository .createQueryBuilder("user") .where("user.name = :name", { name: "John" }) .getMany() ``` * `hasId` - Checks if the given entity's primary column property is defined. ``` if (repository.hasId(user)) { // ... do something } ``` * `getId` - Gets the primary column property values of the given entity. If the entity has composite primary keys then the returned value will be an object with names and values of primary columns. ``` const userId = repository.getId(user) // userId === 1 ``` * `create` - Creates a new instance of `User`. Optionally accepts an object literal with user properties which will be written into the newly created user object. ``` const user = repository.create() // same as const user = new User(); const user = repository.create({ id: 1, firstName: "Timber", lastName: "Saw", }) // same as const user = new User(); user.firstName = "Timber"; user.lastName = "Saw"; ``` * `merge` - Merges multiple entities into a single entity. ``` const user = new User() repository.merge(user, { firstName: "Timber" }, { lastName: "Saw" }) // same as user.firstName = "Timber"; user.lastName = "Saw"; ``` * `preload` - Creates a new entity from the given plain JavaScript object.
If the entity already exists in the database, then it loads it (and everything related to it), replaces all values with the new ones from the given object, and returns the new entity. The new entity is actually an entity loaded from the database with all properties replaced from the new object. > Note that the given entity-like object must have an entity id / primary key to find the entity by. Returns undefined if an entity with the given id was not found. ``` const partialUser = { id: 1, firstName: "Rizzrak", profile: { id: 1, }, } const user = await repository.preload(partialUser) // user will contain all missing data from partialUser with partialUser property values: // { id: 1, firstName: "Rizzrak", lastName: "Saw", profile: { id: 1, ... } } ``` * `save` - Saves a given entity or array of entities. If the entity already exists in the database, it is updated. If the entity does not exist in the database, it is inserted. It saves all given entities in a single transaction (when the entity manager is not transactional). Also supports partial updating since all undefined properties are skipped. Returns the saved entity/entities. ``` await repository.save(user) await repository.save([category1, category2, category3]) ``` * `remove` - Removes a given entity or array of entities. It removes all given entities in a single transaction (when the entity manager is not transactional). Returns the removed entity/entities. ``` await repository.remove(user) await repository.remove([category1, category2, category3]) ``` * `insert` - Inserts a new entity, or array of entities. ``` await repository.insert({ firstName: "Timber", lastName: "Timber", }) await repository.insert([ { firstName: "Foo", lastName: "Bar", }, { firstName: "Rizz", lastName: "Rak", }, ]) ``` * `update` - Updates entities by entity id, ids or given conditions. Sets fields from supplied partial entity.
``` await repository.update({ age: 18 }, { category: "ADULT" }) // executes UPDATE user SET category = ADULT WHERE age = 18 await repository.update(1, { firstName: "Rizzrak" }) // executes UPDATE user SET firstName = Rizzrak WHERE id = 1 // optionally request RETURNING / OUTPUT values (supported drivers only) const result = await repository.update( 1, { firstName: "Rizzrak" }, { returning: ["id", "firstName"] }, ) console.log(result.raw) // [{ id: 1, firstName: "Rizzrak" }] ``` * `updateAll` - Updates *all* entities of target type (without WHERE clause). Sets fields from supplied partial entity. ``` await repository.updateAll({ category: "ADULT" }) // executes UPDATE user SET category = ADULT await repository.updateAll( { category: "ADULT" }, { returning: "*" }, // limited to drivers that support returning clauses ) ``` * `upsert` - Inserts a new entity or array of entities unless they already exist in which case they are updated instead. Supported by AuroraDataApi, Cockroach, Mysql, Postgres, and Sqlite database drivers. When an upsert operation results in an update (due to a conflict), special columns like `@UpdateDateColumn` and `@VersionColumn` are automatically updated to their current values. 
``` await repository.upsert( [ { externalId: "abc123", firstName: "Rizzrak" }, { externalId: "bca321", firstName: "Karzzir" }, ], ["externalId"], ) /** executes * INSERT INTO user * VALUES * (externalId = abc123, firstName = Rizzrak), * (externalId = cba321, firstName = Karzzir), * ON CONFLICT (externalId) DO UPDATE * SET firstName = EXCLUDED.firstName, * updatedDate = CURRENT_TIMESTAMP, * version = version + 1 **/ ``` You can also request values to be returned from an upsert (supported on drivers with RETURNING / OUTPUT support): ``` const { raw } = await repository.upsert( { externalId: "abc123", firstName: "Rizzrak" }, { conflictPaths: ["externalId"], returning: ["externalId", "firstName"], }, ) console.log(raw) // [{ externalId: "abc123", firstName: "Rizzrak" }] ``` ``` await repository.upsert( [ { externalId: "abc123", firstName: "Rizzrak" }, { externalId: "bca321", firstName: "Karzzir" }, ], { conflictPaths: ["externalId"], skipUpdateIfNoValuesChanged: true, // supported by postgres, skips update if it would not change row values upsertType: "upsert", // "on-conflict-do-update" | "on-duplicate-key-update" | "upsert" - optionally provide an UpsertType - 'upsert' is currently only supported by CockroachDB }, ) /** executes * INSERT INTO user * VALUES * (externalId = abc123, firstName = Rizzrak), * (externalId = cba321, firstName = Karzzir), * ON CONFLICT (externalId) DO UPDATE * SET firstName = EXCLUDED.firstName * WHERE user.firstName IS DISTINCT FROM EXCLUDED.firstName **/ ``` ``` await repository.upsert( [ { externalId: "abc123", firstName: "Rizzrak", dateAdded: "2020-01-01" }, { externalId: "bca321", firstName: "Karzzir", dateAdded: "2022-01-01" }, ], { conflictPaths: ["externalId"], skipUpdateIfNoValuesChanged: true, // supported by postgres, skips update if it would not change row values indexPredicate: "dateAdded > 2020-01-01", // supported by postgres, allows for partial indexes }, ) /** executes * INSERT INTO user * VALUES * (externalId = abc123, 
firstName = Rizzrak, dateAdded = 2020-01-01), * (externalId = cba321, firstName = Karzzir, dateAdded = 2022-01-01), * ON CONFLICT (externalId) WHERE ( dateAdded > 2020-01-01 ) DO UPDATE * SET firstName = EXCLUDED.firstName, * dateAdded = EXCLUDED.dateAdded * WHERE user.firstName IS DISTINCT FROM EXCLUDED.firstName OR user.dateAdded IS DISTINCT FROM EXCLUDED.dateAdded **/ ``` * `delete` - Deletes entities by entity id, ids or given conditions: ``` await repository.delete(1) await repository.delete([1, 2, 3]) await repository.delete({ firstName: "Timber" }) ``` * `deleteAll` - Deletes *all* entities of target type (without WHERE clause). ``` await repository.deleteAll() // executes DELETE FROM user ``` Refer also to the `clear` method, which performs a database `TRUNCATE TABLE` operation instead. * `softDelete` and `restore` - Soft deleting and restoring a row by id, ids, or given conditions: ``` const repository = dataSource.getRepository(Entity) // Soft delete an entity await repository.softDelete(1) // And you can restore it using restore; await repository.restore(1) // Soft delete multiple entities await repository.softDelete([1, 2, 3]) // Or soft delete by other attribute await repository.softDelete({ firstName: "Jake" }) ``` * `softRemove` and `recover` - This is an alternative to `softDelete` and `restore`. ``` // You can soft-delete them using softRemove const entities = await repository.find() const entitiesAfterSoftRemove = await repository.softRemove(entities) // And you can recover them using recover; await repository.recover(entitiesAfterSoftRemove) ``` * `increment` - Increments some column by provided value of entities that match given options. ``` await repository.increment({ firstName: "Timber" }, "age", 3) ``` * `decrement` - Decrements some column by provided value of entities that match given options. ``` await repository.decrement({ firstName: "Timber" }, "age", 3) ``` * `exists` - Checks whether any entity exists that matches `FindOptions`.
``` const exists = await repository.exists({ where: { firstName: "Timber", }, }) ``` * `existsBy` - Checks whether any entity exists that matches `FindOptionsWhere`. ``` const exists = await repository.existsBy({ firstName: "Timber" }) ``` * `count` - Counts entities that match `FindOptions`. Useful for pagination. ``` const count = await repository.count({ where: { firstName: "Timber", }, }) ``` * `countBy` - Counts entities that match `FindOptionsWhere`. Useful for pagination. ``` const count = await repository.countBy({ firstName: "Timber" }) ``` * `sum` - Returns the sum of a numeric field for all entities that match `FindOptionsWhere`. ``` const sum = await repository.sum("age", { firstName: "Timber" }) ``` * `average` - Returns the average of a numeric field for all entities that match `FindOptionsWhere`. ``` const average = await repository.average("age", { firstName: "Timber" }) ``` * `minimum` - Returns the minimum of a numeric field for all entities that match `FindOptionsWhere`. ``` const minimum = await repository.minimum("age", { firstName: "Timber" }) ``` * `maximum` - Returns the maximum of a numeric field for all entities that match `FindOptionsWhere`. ``` const maximum = await repository.maximum("age", { firstName: "Timber" }) ``` * `find` - Finds entities that match given `FindOptions`. ``` const timbers = await repository.find({ where: { firstName: "Timber", }, }) ``` * `findBy` - Finds entities that match given `FindOptionsWhere`. ``` const timbers = await repository.findBy({ firstName: "Timber", }) ``` * `findAndCount` - Finds entities that match given `FindOptions`. Also counts all entities that match given conditions, but ignores pagination settings (skip and take options). ``` const [timbers, timbersCount] = await repository.findAndCount({ where: { firstName: "Timber", }, }) ``` * `findAndCountBy` - Finds entities that match given `FindOptionsWhere`.
Also counts all entities that match given conditions, but ignores pagination settings (skip and take options). ``` const [timbers, timbersCount] = await repository.findAndCountBy({ firstName: "Timber", }) ``` * `findOne` - Finds the first entity that matches given `FindOptions`. ``` const timber = await repository.findOne({ where: { firstName: "Timber", }, }) ``` * `findOneBy` - Finds the first entity that matches given `FindOptionsWhere`. ``` const timber = await repository.findOneBy({ firstName: "Timber" }) ``` * `findOneOrFail` - Finds the first entity that matches some id or find options. Rejects the returned promise if nothing matches. ``` const timber = await repository.findOneOrFail({ where: { firstName: "Timber", }, }) ``` * `findOneByOrFail` - Finds the first entity that matches given `FindOptionsWhere`. Rejects the returned promise if nothing matches. ``` const timber = await repository.findOneByOrFail({ firstName: "Timber" }) ``` * `query` - Executes a raw SQL query. ``` const rawData = await repository.query(`SELECT * FROM USERS`) // You can also use parameters to avoid SQL injection // The syntax differs between the drivers // aurora-mysql, better-sqlite3, capacitor, cordova, // expo, mariadb, mysql, nativescript, react-native, // sap, sqlite, sqljs const rawData = await repository.query( "SELECT * FROM USERS WHERE name = ? and age = ?", ["John", 24], ) // aurora-postgres, cockroachdb, postgres const rawData = await repository.query( "SELECT * FROM USERS WHERE name = $1 and age = $2", ["John", 24], ) // oracle const rawData = await repository.query( "SELECT * FROM USERS WHERE name = :1 and age = :2", ["John", 24], ) // spanner const rawData = await repository.query( "SELECT * FROM USERS WHERE name = @param0 and age = @param1", ["John", 24], ) // mssql const rawData = await repository.query( "SELECT * FROM USERS WHERE name = @0 and age = @1", ["John", 24], ) ``` * `clear` - Clears all the data from the given table (truncates/drops it).
``` await repository.clear() ``` ### Additional Options[​](#additional-options "Direct link to Additional Options") Optional `SaveOptions` can be passed as a parameter for `save`. * `data` - Additional data to be passed with the persist method. This data can then be used in subscribers. * `listeners`: boolean - Indicates if listeners and subscribers are called for this operation. By default they are enabled, you can disable them by setting `{ listeners: false }` in save/remove options. * `transaction`: boolean - By default transactions are enabled and all queries in the persistence operation are wrapped into the transaction. You can disable this behaviour by setting `{ transaction: false }` in the persistence options. * `chunk`: number - Breaks save execution into multiple groups of chunks. For example, if you want to save 100,000 objects but you have issues with saving them, you can break them into 10 groups of 10,000 objects (by setting `{ chunk: 10000 }`) and save each group separately. This option is needed to perform very big insertions when you have issues with the underlying driver's parameter number limitation. * `reload`: boolean - Flag to determine whether the entity that is being persisted should be reloaded during the persistence operation. It will work only on databases which do not support RETURNING / OUTPUT statement. Enabled by default. Example: ``` userRepository.save(users, { chunk: 1000 }) ``` Optional `RemoveOptions` can be passed as a parameter for `remove` and `delete`. * `data` - Additional data to be passed with the remove method. This data can then be used in subscribers. * `listeners`: boolean - Indicates if listeners and subscribers are called for this operation. By default they are enabled, you can disable them by setting `{ listeners: false }` in save/remove options. * `transaction`: boolean - By default transactions are enabled and all queries in the persistence operation are wrapped into the transaction.
You can disable this behaviour by setting `{ transaction: false }` in the persistence options. * `chunk`: number - Breaks removal execution into multiple groups of chunks. For example, if you want to remove 100,000 objects but you have issues doing so, you can break them into 10 groups of 10,000 objects, by setting `{ chunk: 10000 }`, and remove each group separately. This option is needed to perform very big deletions when you have issues with the underlying driver's parameter number limitation. Example: ``` userRepository.remove(users, { chunk: 1000 }) ``` ## `TreeRepository` API[​](#treerepository-api "Direct link to treerepository-api") For `TreeRepository` API refer to [the Tree Entities documentation](https://typeorm.io/docs/entity/tree-entities.md#working-with-tree-entities). ## `MongoRepository` API[​](#mongorepository-api "Direct link to mongorepository-api") For `MongoRepository` API refer to [the MongoDB documentation](https://typeorm.io/docs/drivers/mongodb.md). --- # EntityManager Using `EntityManager` you can manage (insert, update, delete, load, etc.) any entity. EntityManager is just like a collection of all entity repositories in a single place. You can access the entity manager via DataSource. An example of how to use it: ``` import { DataSource } from "typeorm" import { User } from "./entity/User" const myDataSource = new DataSource(/*...*/) const user = await myDataSource.manager.findOneBy(User, { id: 1, }) user.name = "Umed" await myDataSource.manager.save(user) ``` --- # Repository `Repository` is just like `EntityManager` but its operations are limited to a concrete entity. You can access the repository via EntityManager. Example: ``` import { User } from "./entity/User" const userRepository = dataSource.getRepository(User) const user = await userRepository.findOneBy({ id: 1, }) user.name = "Umed" await userRepository.save(user) ``` There are 3 types of repositories: * `Repository` - Regular repository for any entity.
* `TreeRepository` - An extension of `Repository` used for tree entities (entities marked with the `@Tree` decorator). Has special methods to work with tree structures. * `MongoRepository` - Repository with special functions used only with MongoDB. ---