Mapping Table Columns — SQLAlchemy 2.0.0b1 documentation
Mapping Table Columns
The default behavior of _orm.mapper() is to assemble all the columns in the mapped _schema.Table into mapped object attributes, each of which is named according to the name of the column itself (specifically, the key attribute of _schema.Column). This behavior can be modified in several ways.
Naming Columns Distinctly from Attribute Names
By default, a mapped attribute shares the same name as the _schema.Column it maps - specifically, it matches the _schema.Column.key attribute on _schema.Column, which by default is the same as the _schema.Column.name.
The name assigned to the Python attribute which maps to a _schema.Column can be different from either _schema.Column.name or _schema.Column.key just by assigning it that way, as we illustrate here in a Declarative mapping:
class User(Base):
    __tablename__ = 'user'

    id = Column('user_id', Integer, primary_key=True)
    name = Column('user_name', String(50))
Where above, User.id resolves to a column named user_id and User.name resolves to a column named user_name.
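To make the distinction concrete, the following self-contained sketch (assuming SQLAlchemy 1.4 / 2.0 is installed) shows that Python code refers to the attribute names while the rendered SQL uses the underlying column names:

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column('user_id', Integer, primary_key=True)
    name = Column('user_name', String(50))

# Python code uses the attribute names "id" and "name"...
stmt = select(User.id, User.name)

# ...while the rendered SQL refers to "user_id" and "user_name"
sql = str(stmt)
```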
When mapping to an existing table, the _schema.Column object can be referenced directly:
class User(Base):
    __table__ = user_table

    id = user_table.c.user_id
    name = user_table.c.user_name
The corresponding technique for an imperative mapping is to place the desired _schema.Column under the desired key in the :paramref:`_orm.mapper.properties` dictionary:
mapper_registry.map_imperatively(User, user_table, properties={
    'id': user_table.c.user_id,
    'name': user_table.c.user_name,
})
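As a self-contained illustration (the registry and table names here are our own), the imperative form can be exercised end-to-end like this:

```python
from sqlalchemy import Column, Integer, String, Table
from sqlalchemy.orm import registry

mapper_registry = registry()

user_table = Table(
    "user",
    mapper_registry.metadata,
    Column("user_id", Integer, primary_key=True),
    Column("user_name", String(50)),
)

class User:
    pass

# map the columns under the attribute names "id" and "name"
mapper_registry.map_imperatively(User, user_table, properties={
    'id': user_table.c.user_id,
    'name': user_table.c.user_name,
})

has_mapped_names = hasattr(User, "id") and hasattr(User, "name")
```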
In the next section we’ll examine the usage of .key more closely.
Automating Column Naming Schemes from Reflected Tables
In the previous section Naming Columns Distinctly from Attribute Names, we showed how a _schema.Column explicitly mapped to a class can have a different attribute name than the column. But what if we aren’t listing out _schema.Column objects explicitly, and instead are automating the production of _schema.Table objects using reflection (e.g. as described in Reflecting Database Objects)? In this case we can make use of the _events.DDLEvents.column_reflect() event to intercept the production of _schema.Column objects and provide them with the _schema.Column.key of our choice. The event is most easily associated with the _schema.MetaData object that’s in use, such as below, where we use the one linked to the _orm.declarative_base instance:
@event.listens_for(Base.metadata, "column_reflect")
def column_reflect(inspector, table, column_info):
    # set column.key = "attr_<lower_case_name>"
    column_info['key'] = "attr_%s" % column_info['name'].lower()
With the above event, the reflection of _schema.Column objects will be intercepted with our event that adds a new “.key” element, such as in a mapping as below:
class MyClass(Base):
    __table__ = Table("some_table", Base.metadata,
                      autoload_with=some_engine)
The approach also works with the Automap extension. See the section automap_intercepting_columns for background.
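As a runnable sketch of the event in action (using an in-memory SQLite database and an illustrative table of our own), the rewritten .key values appear on the reflected _schema.Table:

```python
from sqlalchemy import MetaData, Table, create_engine, event, text

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text(
        "CREATE TABLE some_table (SomeID INTEGER PRIMARY KEY, SomeName TEXT)"
    ))

metadata = MetaData()

@event.listens_for(metadata, "column_reflect")
def column_reflect(inspector, table, column_info):
    # set column.key = "attr_<lower_case_name>"
    column_info['key'] = "attr_%s" % column_info['name'].lower()

# the reflected columns now carry the rewritten .key values
some_table = Table("some_table", metadata, autoload_with=engine)
keys = list(some_table.c.keys())
```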
See also
_events.DDLEvents.column_reflect()
automap_intercepting_columns - in the Automap documentation
Naming All Columns with a Prefix
A quick approach to prefix column names, typically when mapping to an existing _schema.Table object, is to use column_prefix:
class User(Base):
    __table__ = user_table
    __mapper_args__ = {'column_prefix': '_'}
The above will place attribute names such as _user_id, _user_name, _password etc. on the mapped User class.
This approach is uncommon in modern usage. For dealing with reflected tables, a more flexible approach is to use that described in Automating Column Naming Schemes from Reflected Tables.
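A small self-contained check (with an illustrative user_table of our own) confirms the prefixed attribute names:

```python
from sqlalchemy import Column, Integer, String, Table
from sqlalchemy.orm import declarative_base

Base = declarative_base()

user_table = Table(
    "user",
    Base.metadata,
    Column("user_id", Integer, primary_key=True),
    Column("user_name", String(50)),
    Column("password", String(50)),
)

class User(Base):
    __table__ = user_table
    __mapper_args__ = {'column_prefix': '_'}

# each column is mapped under an underscore-prefixed attribute name
prefixed = [n for n in ("_user_id", "_user_name", "_password") if hasattr(User, n)]
```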
Using column_property for column level options
Options can be specified when mapping a _schema.Column using the column_property() function. This function explicitly creates the ColumnProperty used by the mapper() to keep track of the _schema.Column; normally, the mapper() creates this automatically. Using column_property(), we can pass additional arguments about how we’d like the _schema.Column to be mapped. Below, we pass an option active_history, which specifies that a change to this column’s value should result in the former value being loaded first:
from sqlalchemy.orm import column_property

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = column_property(Column(String(50)), active_history=True)
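To see what active_history buys us, the following sketch (the engine and data here are illustrative) inspects the attribute history after a change - the former value is present in the history's "deleted" collection:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, column_property, declarative_base
from sqlalchemy.orm.attributes import get_history

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    name = column_property(Column(String(50)), active_history=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(id=1, name="ed"))
    session.commit()

    user = session.get(User, 1)
    # the assignment loads the former value "ed" before replacing it
    user.name = "edward"
    history = get_history(user, "name")
```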
column_property() is also used to map a single attribute to multiple columns. This use case arises when mapping to a _expression.join() that contains columns which are equated to each other:
class User(Base):
    __table__ = user_table.join(address_table)

    # assign "user.id", "address.user_id" to the
    # "id" attribute
    id = column_property(user_table.c.id, address_table.c.user_id)
For more examples featuring this usage, see Mapping a Class against Multiple Tables.
Another place where column_property() is needed is to specify SQL expressions as mapped attributes, such as below where we create an attribute fullname that is the string concatenation of the firstname and lastname columns:
class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)
See examples of this usage at SQL Expressions as Mapped Attributes.
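Because the fullname attribute is itself a SQL expression, it can be used directly in query criteria; a quick sketch (the class definition is repeated here for self-containment) shows the criterion rendering as a SQL string concatenation:

```python
from sqlalchemy import Column, Integer, String, select
from sqlalchemy.orm import column_property, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'

    id = Column(Integer, primary_key=True)
    firstname = Column(String(50))
    lastname = Column(String(50))
    fullname = column_property(firstname + " " + lastname)

# the criterion renders as firstname || ' ' || lastname = :param
stmt = select(User).where(User.fullname == "ed jones")
sql = str(stmt)
```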
Mapping a Subset of Table Columns
Sometimes, a _schema.Table
object was made available using the reflection process described at Reflecting Database Objects to load the table’s structure from the database. For such a table that has lots of columns that don’t need to be referenced in the application, the include_properties
or exclude_properties
arguments can specify that only a subset of columns should be mapped. For example:
class User(Base):
    __table__ = user_table
    __mapper_args__ = {
        'include_properties': ['user_id', 'user_name']
    }
…will map the User class to the user_table table, only including the user_id and user_name columns - the rest are not referenced. Similarly:
class Address(Base):
    __table__ = address_table
    __mapper_args__ = {
        'exclude_properties': ['street', 'city', 'state', 'zip']
    }
…will map the Address class to the address_table table, including all columns present except street, city, state, and zip.
When this mapping is used, the columns that are not included will not be referenced in any SELECT statements emitted by _query.Query, nor will there be any mapped attribute on the mapped class which represents the column; assigning an attribute of that name will have no effect beyond that of a normal Python attribute assignment.
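The absence of the mapped attribute can be verified directly; in this self-contained sketch (using an illustrative user_table of our own), the password column is left out of the mapping:

```python
from sqlalchemy import Column, Integer, String, Table
from sqlalchemy.orm import declarative_base

Base = declarative_base()

user_table = Table(
    "user",
    Base.metadata,
    Column("user_id", Integer, primary_key=True),
    Column("user_name", String(50)),
    Column("password", String(50)),
)

class User(Base):
    __table__ = user_table
    __mapper_args__ = {
        'include_properties': ['user_id', 'user_name']
    }

# "user_id" and "user_name" are mapped; "password" is not
mapped = hasattr(User, "user_id") and hasattr(User, "user_name")
unmapped = not hasattr(User, "password")
```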
In some cases, multiple columns may have the same name, such as when mapping to a join of two or more tables that share some column name. include_properties and exclude_properties can also accommodate _schema.Column objects to more accurately describe which columns should be included or excluded:
class UserAddress(Base):
    __table__ = user_table.join(address_table)
    __mapper_args__ = {
        'exclude_properties': [address_table.c.id],
        'primary_key': [user_table.c.id]
    }
Note
Insert and update defaults configured on individual _schema.Column objects, i.e. those described at Column INSERT/UPDATE Defaults including those configured by the :paramref:`_schema.Column.default`, :paramref:`_schema.Column.onupdate`, :paramref:`_schema.Column.server_default` and :paramref:`_schema.Column.server_onupdate` parameters, will continue to function normally even if those _schema.Column objects are not mapped. This is because in the case of :paramref:`_schema.Column.default` and :paramref:`_schema.Column.onupdate`, the _schema.Column object is still present on the underlying _schema.Table, thus allowing the default functions to take place when the ORM emits an INSERT or UPDATE, and in the case of :paramref:`_schema.Column.server_default` and :paramref:`_schema.Column.server_onupdate`, the relational database itself emits these defaults as a server side behavior.
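This behavior can be demonstrated with a short sketch (the status column and its default here are illustrative): even though status is excluded from the mapping, its Python-side default is still applied on INSERT:

```python
from sqlalchemy import Column, Integer, String, Table, create_engine, text
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

user_table = Table(
    "user",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
    Column("status", String(20), default="active"),
)

class User(Base):
    __table__ = user_table
    __mapper_args__ = {'exclude_properties': ['status']}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(id=1, name="ed"))
    session.commit()

# the default fired even though "status" is not a mapped attribute
with engine.connect() as conn:
    status = conn.execute(
        text('SELECT status FROM "user" WHERE id = 1')
    ).scalar()
```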