Modeling Entities
This chapter describes how to model entities and explains their counterpart representation in Couchbase Server itself.
Object Mapping Fundamentals
This section covers the fundamentals of Spring Data object mapping, object creation, field and property access, and mutability and immutability. Note that it only applies to Spring Data modules that do not use the object mapping of the underlying data store (like JPA). Also be sure to consult the store-specific sections for store-specific object mapping, such as indexes, customized column or field names, and the like.
The core responsibility of Spring Data object mapping is to create instances of domain objects and to map the store-native data structures onto those. This means two fundamental steps are needed:
- Instance creation by using one of the exposed constructors.
- Instance population to materialize all exposed properties.
Object Creation
Spring Data automatically tries to detect a persistent entity's constructor to be used to materialize objects of that type. The resolution algorithm works as follows:
- If there is a single static factory method annotated with @PersistenceCreator, it is used.
- If there is a single constructor, it is used.
- If there are multiple constructors and exactly one is annotated with @PersistenceCreator, it is used (see the sketch after this list).
- If the type is a Java Record, the canonical constructor is used.
- If there is a no-argument constructor, it is used. Other constructors are ignored.
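The constructor-disambiguation rule can be illustrated with a minimal, hypothetical sketch; the Account class below is not part of this reference and only shows where @PersistenceCreator would go when several constructors exist.
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.PersistenceCreator;

public class Account {

    private final @Id String id;
    private final String owner;

    // convenience constructor for application code; ignored by the mapping layer
    public Account(String owner) {
        this(null, owner);
    }

    // explicitly chosen for object materialization because several constructors exist
    @PersistenceCreator
    public Account(String id, String owner) {
        this.id = id;
        this.owner = owner;
    }
}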
Value resolution assumes the constructor/factory method argument names to match the property names of the entity, that is, resolution is performed as if the property was to be populated, including all customizations in mapping (different datastore column or field names, and so on). This also requires either parameter name information to be available in the class file or an @ConstructorProperties annotation to be present on the constructor.
Value resolution can be customized by using Spring Framework's @Value annotation with a store-specific SpEL expression. See the store-specific mapping section for further details.
Property Population
Once an instance of the entity has been created, Spring Data populates all remaining persistent properties of that class. Unless already populated by the entity's constructor (that is, consumed through its constructor argument list), the identifier property is populated first to allow the resolution of cyclic object references. After that, all non-transient properties that have not already been populated by the constructor are set on the entity instance. For that, the following algorithm is used:
- If the property is immutable but exposes a with… method (see below), we use the with… method to create a new entity instance with the new property value.
- If property access (that is, access through getters and setters) is defined, we invoke the setter method.
- If the property is mutable, we set the field directly.
- If the property is immutable, we use the constructor used by persistence operations (see Object Creation) to create a copy of the instance.
- By default, we set the field value directly.
Let's have a look at the following entity:
class Person {
private final @Id Long id; (1)
private final String firstname, lastname; (2)
private final LocalDate birthday;
private final int age; (3)
private String comment; (4)
private @AccessType(Type.PROPERTY) String remarks; (5)
static Person of(String firstname, String lastname, LocalDate birthday) { (6)
return new Person(null, firstname, lastname, birthday,
Period.between(birthday, LocalDate.now()).getYears());
}
Person(Long id, String firstname, String lastname, LocalDate birthday, int age) { (6)
this.id = id;
this.firstname = firstname;
this.lastname = lastname;
this.birthday = birthday;
this.age = age;
}
Person withId(Long id) { (1)
return new Person(id, this.firstname, this.lastname, this.birthday, this.age);
}
void setRemarks(String remarks) { (5)
this.remarks = remarks;
}
}
1 | The identifier property is final but set to null in the constructor. The class exposes a withId(…) method that is used, for example, when an instance is inserted into the datastore and an identifier has been generated. The original Person instance stays unchanged as a new one is created. The same pattern is usually applied for other properties that are store-managed but might have to be changed for persistence operations. The wither method is optional, as the persistence constructor (see 6) is effectively a copy constructor, and setting the property is translated into creating a fresh instance with the new identifier value applied. |
2 | The firstname and lastname properties are ordinary immutable properties, potentially exposed through getters. |
3 | The age property is an immutable property derived from the birthday property. With the design shown, the database value trumps the defaulting, as Spring Data uses the only declared constructor. Even if the intent is that the calculation should be preferred, it is important that this constructor also takes age as a parameter (to potentially ignore it); otherwise the property population step would attempt to set the age field and fail because it is immutable and no with… method is present. |
4 | The comment property is mutable and is populated by setting its field directly. |
5 | The remarks property is mutable and is populated by invoking the setter method. |
6 | The class exposes a factory method and a constructor for object creation. The core idea here is to use factory methods instead of additional constructors to avoid the need for constructor disambiguation through @PersistenceCreator. Instead, defaulting of properties is handled within the factory method. If you want Spring Data to use the factory method for object instantiation, annotate it with @PersistenceCreator. |
General Recommendations
- Try to stick to immutable objects — Immutable objects are straightforward to create, as materializing an object is then a matter of calling its constructor only. Also, this keeps your domain objects from being littered with setter methods that allow client code to manipulate the object's state. If you need those, prefer to make them package protected so that they can only be invoked by a limited amount of co-located types. Constructor-only materialization is up to 30% faster than property population.
- Provide an all-args constructor — Even if you cannot or do not want to model your entities as immutable values, there is still value in providing a constructor that takes all properties of the entity as arguments, including the mutable ones, as this allows the object mapping to skip property population for optimal performance.
- Use factory methods instead of overloaded constructors to avoid @PersistenceCreator — With an all-arguments constructor needed for optimal performance, we usually want to expose more application-use-case-specific constructors that omit things such as auto-generated identifiers. It is an established pattern to use static factory methods to expose these variants of the all-args constructor.
- Make sure you adhere to the constraints that allow the generated instantiator and property accessor classes to be used.
- For identifiers to be generated, still use a final field in combination with an all-arguments persistence constructor (preferred) or a with… method.
- Use Lombok to avoid boilerplate code — As persistence operations usually require a constructor taking all arguments, their declaration becomes a tedious repetition of boilerplate parameter-to-field assignments that can best be avoided by using Lombok's @AllArgsConstructor, as the sketch below shows.
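As a hedged illustration of the last recommendation (assuming Lombok is on the classpath; the Address type is made up for the example), the all-args constructor can be generated instead of hand-written:
import lombok.AllArgsConstructor;
import lombok.Getter;

// Lombok generates the constructor that takes all fields, so the
// parameter-to-field assignments do not have to be written by hand.
@Getter
@AllArgsConstructor
class Address {

    private final String street;
    private final String city;
    private final String zipCode;
}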
Overriding Properties
Java allows a flexible design of domain classes in which a subclass can define a property that is already declared with the same name in its superclass. Consider the following example:
public class SuperType {
private CharSequence field;
public SuperType(CharSequence field) {
this.field = field;
}
public CharSequence getField() {
return this.field;
}
public void setField(CharSequence field) {
this.field = field;
}
}
public class SubType extends SuperType {
private String field;
public SubType(String field) {
super(field);
this.field = field;
}
@Override
public String getField() {
return this.field;
}
public void setField(String field) {
this.field = field;
// optional
super.setField(field);
}
}
Both classes define a field using assignable types; SubType, however, shadows SuperType.field. Depending on the class design, using the constructor could be the only default approach to set SuperType.field. Alternatively, calling super.setField(…) in the setter could set the field in SuperType. All these mechanisms create conflicts to some degree because the properties share the same name yet might represent two distinct values.
Spring Data skips super-type properties if the types are not assignable. That is, the type of the overridden property must be assignable to its super-type property type to be registered as an override; otherwise the super-type property is considered transient. We generally recommend using distinct property names.
Spring Data modules generally support overridden properties holding different values. From a programming-model perspective, there are a few things to consider:
- Which property should be persisted (by default, all declared properties)? You can exclude properties by annotating them with @Transient.
- How should the properties be represented in the data store? Using the same field/column name for different values typically leads to corrupt data, so you should annotate at least one of the properties with an explicit field/column name (see the sketch after this list).
- Using @AccessType(PROPERTY) cannot be used, because the super-property generally cannot be set without making further assumptions about the setter implementation.
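A minimal sketch of the second consideration, assuming the Couchbase @Field annotation introduced later in this chapter; the annotated name is made up for the example:
import org.springframework.data.couchbase.core.mapping.Field;

public class SubType extends SuperType {

    // stored under a distinct name so it cannot collide with the super-type value
    @Field("subField")
    private String field;

    public SubType(String field) {
        super(field);
        this.field = field;
    }
}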
Kotlin Support
Spring Data adapts specifics of Kotlin to allow object creation and mutation.
Kotlin Object Creation
Kotlin classes are supported for instantiation. All classes are immutable by default and require explicit property declarations to define mutable properties.
Spring Data automatically tries to detect a persistent entity's constructor to be used to materialize objects of that type. The resolution algorithm works as follows:
- If there is a constructor annotated with @PersistenceCreator, it is used.
- If the type is a Kotlin data class, the primary constructor is used.
- If there is a single static factory method annotated with @PersistenceCreator, it is used.
- If there is a single constructor, it is used.
- If there are multiple constructors and exactly one is annotated with @PersistenceCreator, it is used.
- If the type is a Java Record, the canonical constructor is used.
- If there is a no-argument constructor, it is used. Other constructors are ignored.
Consider the following data class Person:
data class Person(val id: String, val name: String)
The class above compiles to a typical class with an explicit constructor. We can customize this class by adding another constructor and annotating it with @PersistenceCreator to indicate a constructor preference:
data class Person(var id: String, val name: String) {
@PersistenceCreator
constructor(id: String) : this(id, "unknown")
}
Kotlin supports parameter optionality by allowing default values to be used if a parameter is not provided. When Spring Data detects a constructor with parameter defaulting, it leaves those parameters absent if the data store does not provide a value (or simply returns null), so that Kotlin can apply parameter defaulting. Consider the following class, which applies parameter defaulting for name:
data class Person(var id: String, val name: String = "unknown")
Every time the name parameter is either not part of the result or its value is null, the name defaults to unknown.
Delegated properties are not supported by Spring Data. The mapping metadata filters delegated properties for Kotlin data classes. In all other cases you can exclude synthetic fields for delegated properties by annotating the property with @delegate:org.springframework.data.annotation.Transient.
Property Population of Kotlin Data Classes
In Kotlin, all classes are immutable by default and require explicit property declarations to define mutable properties. Consider the following data class Person:
data class Person(val id: String, val name: String)
This class is effectively immutable. It allows creating new instances, as Kotlin generates a copy(…) method that creates new object instances by copying all property values from the existing object and applying property values provided as arguments to the method.
Kotlin Overriding Properties
Kotlin allows declaring property overrides to alter properties in subclasses.
open class SuperType(open var field: Int)
class SubType(override var field: Int = 1) :
SuperType(field) {
}
Such an arrangement renders two properties with the name field. Kotlin generates property accessors (getters and setters) for each property in each class. Effectively, the code looks like the following:
public class SuperType {
private int field;
public SuperType(int field) {
this.field = field;
}
public int getField() {
return this.field;
}
public void setField(int field) {
this.field = field;
}
}
public final class SubType extends SuperType {
private int field;
public SubType(int field) {
super(field);
this.field = field;
}
public int getField() {
return this.field;
}
public void setField(int field) {
this.field = field;
}
}
Getters and setters on SubType set only SubType.field and not SuperType.field. In such an arrangement, using the constructor is the only default approach to set SuperType.field. Adding a method to SubType that sets SuperType.field via this.SuperType.field = … is possible but falls outside of the supported conventions.
Property overrides create conflicts to some degree, because the properties share the same name yet might represent two distinct values. We generally recommend using distinct property names.
Spring Data modules generally support overridden properties holding different values. From a programming-model perspective, there are a few things to consider:
- Which property should be persisted (by default, all declared properties)? You can exclude properties by annotating them with @Transient.
- How should the properties be represented in the data store? Using the same field/column name for different values typically leads to corrupt data, so you should annotate at least one of the properties with an explicit field/column name.
- Using @AccessType(PROPERTY) cannot be used, because the super-property cannot be set.
Kotlin Value Classes
Kotlin value classes are designed for a more expressive domain model, making underlying concepts explicit. Spring Data can read and write types that define properties using value classes.
Consider the following domain model:
@JvmInline
value class EmailAddress(val theAddress: String) (1)
data class Contact(val id: String, val name:String, val emailAddress: EmailAddress) (2)
1 | A simple value class with a non-nullable value type. |
2 | Data class defining a property using the EmailAddress value class. |
Non-nullable properties using non-primitive value types are flattened in the compiled class to the value type. Nullable primitive value types or nullable value-in-value types are represented with their wrapper type, and that affects how value types are represented in the database.
Documents and Fields
All entities should be annotated with the @Document annotation, but it is not a requirement. Also, every field in the entity should be annotated with the @Field annotation. While this is, strictly speaking, optional, it helps to reduce edge cases and clearly shows the intent and design of the entity. It can also be used to store the field under a different name.
There is also a special @Id annotation, which always needs to be in place. Best practice is to also name the property id.
Here is a very simple User entity:
import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Field;
import org.springframework.data.couchbase.core.mapping.Document;
@Document
public class User {
@Id
private String id;
@Field
private String firstname;
@Field
private String lastname;
public User(String id, String firstname, String lastname) {
this.id = id;
this.firstname = firstname;
this.lastname = lastname;
}
public String getId() {
return id;
}
public String getFirstname() {
return firstname;
}
public String getLastname() {
return lastname;
}
}
Couchbase Server supports automatic expiration for documents.
The library implements support for it through the @Document
annotation.
You can set an expiry value, which translates to the number of seconds until the document gets removed automatically.
If you want to make it expire in 10 seconds after mutation, set it like @Document(expiry = 10)
.
Alternatively, you can configure the expiry using Spring’s property support and the expiryExpression
parameter, to allow for dynamically changing the expiry value.
For example: @Document(expiryExpression = "${valid.document.expiry}")
.
The property must be resolvable to an int value and the two approaches cannot be mixed.
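For example, a minimal sketch of an expiring entity (the LoginSession type and its fields are made up for illustration):
import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;
import org.springframework.data.couchbase.core.mapping.Field;

// Removed automatically roughly 10 seconds after each mutation.
@Document(expiry = 10)
public class LoginSession {

    @Id
    private String id;

    @Field
    private String token;
}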
If you want a different representation of the field name inside the document in contrast to the field name used in your entity, you can set a different name on the @Field
annotation.
For example if you want to keep your documents small you can set the firstname field to @Field("fname")
.
In the JSON document, you’ll see {"fname": ".."}
instead of {"firstname": ".."}
.
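Applied to the User entity above, that is a one-line change (sketch):
@Field("fname")
private String firstname; // serialized as {"fname": "..."} instead of {"firstname": "..."}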
The @Id
annotation needs to be present because every document in Couchbase needs a unique key.
This key can be any string with a maximum length of 250 characters.
Feel free to use whatever fits your use case, be it a UUID, an email address or anything else.
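A minimal sketch of creating such a key with a random UUID, reusing the User constructor from the example above:
// Any string of up to 250 characters works; a random UUID is a common choice.
String id = java.util.UUID.randomUUID().toString();
User user = new User(id, "Ada", "Lovelace");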
Writes to Couchbase Server buckets can optionally be assigned durability requirements, which instruct Couchbase Server to update the specified document on multiple nodes in memory and/or disk locations across the cluster before considering the write to be committed.
Default durability requirements can also be configured through the @Document
or @Durability
annotations.
For example, @Document(durabilityLevel = DurabilityLevel.MAJORITY) will force mutations to be replicated to a majority of the Data Service nodes. Both annotations support expression-based durability level assignment via the durabilityExpression attribute (note that SpEL is not supported).
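A minimal sketch of an entity with a default durability requirement; the Order type is made up, and the DurabilityLevel import is assumed to come from the Couchbase Java SDK:
import com.couchbase.client.core.msg.kv.DurabilityLevel;
import org.springframework.data.couchbase.core.mapping.Document;

// Every mutation must reach a majority of the Data Service nodes
// before the write is considered committed.
@Document(durabilityLevel = DurabilityLevel.MAJORITY)
public class Order {
    // fields omitted
}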
Datatypes and Converters
The storage format of choice is JSON. It is great, but like many data representations it allows fewer datatypes than you could express in Java directly.
Therefore, for all non-primitive types some form of conversion to and from supported types needs to happen.
For the following entity field types, you don’t need to add special handling:
Table 1. Primitive Types
| Java Type | JSON Representation |
| string | string |
| boolean | boolean |
| byte | number |
| short | number |
| int | number |
| long | number |
| float | number |
| double | number |
| null | Ignored on write |
Since JSON supports objects ("maps") and lists, Map
and List
types can be converted naturally.
If they only contain primitive field types from the last paragraph, you don't need to add special handling either.
Here is an example:
Example 2. A Document with Map and List
@Document
public class User {
@Id
private String id;
@Field
private List<String> firstnames;
@Field
private Map<String, Integer> childrenAges;
public User(String id, List<String> firstnames, Map<String, Integer> childrenAges) {
this.id = id;
this.firstnames = firstnames;
this.childrenAges = childrenAges;
}
}
Storing a user with some sample data could look like this as a JSON representation:
Example 3. A Document with Map and List - JSON
{
"_class": "foo.User",
"childrenAges": {
"Alice": 10,
"Bob": 5
},
"firstnames": [
"Foo",
"Bar",
"Baz"
]
}
You don’t need to break everything down to primitive types and Lists/Maps all the time.
Of course, you can also compose other objects out of those primitive values.
Let’s modify the last example so that we want to store a List
of Children
:
Example 4. A Document with composed objects
@Document
public class User {
@Id
private String id;
@Field
private List<String> firstnames;
@Field
private List<Child> children;
public User(String id, List<String> firstnames, List<Child> children) {
this.id = id;
this.firstnames = firstnames;
this.children = children;
}
static class Child {
private String name;
private int age;
Child(String name, int age) {
this.name = name;
this.age = age;
}
}
}
A populated object can look like:
Example 5. A Document with composed objects - JSON
{
"_class": "foo.User",
"children": [
{
"age": 4,
"name": "Alice"
},
{
"age": 3,
"name": "Bob"
}
],
"firstnames": [
"Foo",
"Bar",
"Baz"
]
}
Most of the time, you also need to store a temporal value like a Date
.
Since it can’t be stored directly in JSON, a conversion needs to happen.
The library implements default converters for Date
, Calendar
and JodaTime types (if on the classpath).
All of those are represented by default in the document as a unix timestamp (number).
You can always override the default behavior with custom converters as shown later.
Here is an example:
Example 6. A Document with Date and Calendar
@Document
public class BlogPost {
@Id
private String id;
@Field
private Date created;
@Field
private Calendar updated;
@Field
private String title;
public BlogPost(String id, Date created, Calendar updated, String title) {
this.id = id;
this.created = created;
this.updated = updated;
this.title = title;
}
}
A populated object can look like:
Example 7. A Document with Date and Calendar - JSON
{
"title": "a blog post title",
"_class": "foo.BlogPost",
"updated": 1394610843,
"created": 1394610843897
}
Optionally, Date can be converted to and from ISO-8601 compliant strings by setting system property org.springframework.data.couchbase.useISOStringConverterForDate
to true.
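For example, a sketch of setting that system property early at application startup:
// Hedged sketch: set before the Spring context builds the Couchbase converters,
// so Dates are written as ISO-8601 strings instead of unix timestamps.
System.setProperty("org.springframework.data.couchbase.useISOStringConverterForDate", "true");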
If you want to override a converter or implement your own one, this is also possible.
The library implements the general Spring Converter pattern.
You can plug in custom converters on bean creation time in your configuration.
Here’s how you can configure it (in your overridden AbstractCouchbaseConfiguration
):
Example 8. Custom Converters
@Override
public CustomConversions customConversions() {
return new CustomConversions(Arrays.asList(FooToBarConverter.INSTANCE, BarToFooConverter.INSTANCE));
}
@WritingConverter
public static enum FooToBarConverter implements Converter<Foo, Bar> {
INSTANCE;
@Override
public Bar convert(Foo source) {
return /* do your conversion here */;
}
}
@ReadingConverter
public static enum BarToFooConverter implements Converter<Bar, Foo> {
INSTANCE;
@Override
public Foo convert(Bar source) {
return /* do your conversion here */;
}
}
There are a few things to keep in mind with custom conversions:
- To make it unambiguous, always use the @WritingConverter and @ReadingConverter annotations on your converters. Especially if you are dealing with primitive type conversions, this will help to reduce possible wrong conversions.
- If you implement a writing converter, make sure to decode into primitive types, maps and lists only. If you need more complex object types, use the CouchbaseDocument and CouchbaseList types, which are also understood by the underlying translation engine. Your best bet is to stick with conversions that are as simple as possible.
- Always put more specific converters before generic converters to avoid the case where the wrong converter gets executed.
- For dates, reading converters should be able to read from any Number (not just Long). This is required for N1QL support, as the sketch below illustrates.
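A minimal sketch of such a date-reading converter; the converter name is made up, and it would be registered via customConversions() as shown in Example 8:
import java.util.Date;

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;

@ReadingConverter
public enum NumberToDateConverter implements Converter<Number, Date> {
    INSTANCE;

    @Override
    public Date convert(Number source) {
        // Accept any Number (not just Long) so N1QL results are handled as well.
        return new Date(source.longValue());
    }
}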
Optimistic Locking
In certain situations you may want to ensure that you are not overwriting another user's changes when you perform a mutation operation on a document. For this you have three choices: transactions (since Couchbase 6.5), pessimistic concurrency (locking) or optimistic concurrency.
Optimistic concurrency tends to provide better performance than pessimistic concurrency or transactions, because no actual locks are held on the data and no extra information is stored about the operation (no transaction log).
To implement optimistic locking, Couchbase uses a CAS (compare and swap) approach. When a document is mutated, the CAS value also changes.
The CAS is opaque to the client; the only thing you need to know is that it changes when the content or meta information changes.
In other datastores, similar behavior can be achieved through an arbitrary version field with an incrementing counter.
Since Couchbase supports this in a much better fashion, it is easy to implement.
If you want automatic optimistic locking support, all you need to do is add a @Version
annotation on a long field like this:
Example 9. A Document with optimistic locking.
@Document
public class User {
@Version
private long version;
// constructor, getters, setters...
}
If you load a document through the template or repository, the version field will be automatically populated with the current CAS value.
It is important to note that you shouldn’t access the field or even change it on your own.
Once you save the document back, it will either succeed or fail with an OptimisticLockingFailureException
.
If you get such an exception, the further approach depends on what you want to achieve application wise.
You should either retry the complete load-update-write cycle or propagate the error to the upper layers for proper handling.
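A hedged sketch of the retry approach, assuming a Spring Data UserRepository and a mutable lastname setter (neither is defined in this chapter):
import org.springframework.dao.OptimisticLockingFailureException;

void renameWithRetry(UserRepository userRepository, String id) {
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            User user = userRepository.findById(id).orElseThrow();  // loads the current CAS value
            user.setLastname("renamed");
            userRepository.save(user);
            return;
        } catch (OptimisticLockingFailureException e) {
            // Another writer changed the document in the meantime; reload and retry.
        }
    }
    throw new IllegalStateException("could not update document " + id);
}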
Validation
The library supports JSR 303 validation, which is based on annotations directly in your entities.
Of course you can add all kinds of validation in your service layer, but this way it is nicely coupled to your actual entities.
To make it work, you need to include two additional dependencies.
JSR 303 and a library that implements it, like the one provided by Hibernate:
Example 10. Validation dependencies
<dependency>
<groupId>javax.validation</groupId>
<artifactId>validation-api</artifactId>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-validator</artifactId>
</dependency>
Now you need to add two beans to your configuration:
Example 11. Validation beans
@Bean
public LocalValidatorFactoryBean validator() {
return new LocalValidatorFactoryBean();
}
@Bean
public ValidatingCouchbaseEventListener validationEventListener() {
return new ValidatingCouchbaseEventListener(validator());
}
Now you can annotate your fields with JSR 303 annotations.
If a validation on save()
fails, a ConstraintViolationException
is thrown.
Example 12. Sample Validation Annotation
@Size(min = 10)
@Field
private String name;
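A hedged sketch of handling the resulting exception around a repository save; the userRepository and its saveWithValidation wrapper are assumptions made for illustration:
import javax.validation.ConstraintViolationException;

void saveWithValidation(UserRepository userRepository, User user) {
    try {
        userRepository.save(user);
    } catch (ConstraintViolationException e) {
        // Inspect e.getConstraintViolations() and report the failed constraints to the caller.
    }
}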
Auditing
Entities can be automatically audited (tracing which user created the object, updated the object, and at what times) through Spring Data auditing mechanisms.
First, note that only entities that have a @Version
annotated field can be audited for creation (otherwise the framework will interpret a creation as an update).
Auditing works by annotating fields with @CreatedBy
, @CreatedDate
, @LastModifiedBy
and @LastModifiedDate
.
The framework will automatically inject the correct values on those fields when persisting the entity.
The xxxDate annotations must be put on a Date
field (or compatible, e.g. Joda-Time classes) while the xxxBy annotations can be put on fields of any class T
(albeit both fields must be of the same type).
To configure auditing, first you need to have an auditor aware bean in the context.
Said bean must be of type AuditorAware<T>
(allowing to produce a value that can be stored in the xxxBy fields of type T
we saw earlier).
Secondly, you must activate auditing in your @Configuration
class by using the @EnableCouchbaseAuditing
annotation.
Here is an example:
Example 13. Sample Auditing Entity
@Document
public class AuditedItem {
@Id
private final String id;
private String value;
@CreatedBy
private String creator;
@LastModifiedBy
private String lastModifiedBy;
@LastModifiedDate
private Date lastModification;
@CreatedDate
private Date creationDate;
@Version
private long version;
//..omitted constructor/getters/setters/...
}
Notice that both @CreatedBy and @LastModifiedBy are put on a String field, so our AuditorAware must work with String.
Example 14. Sample AuditorAware implementation
public class NaiveAuditorAware implements AuditorAware<String> {
private String auditor = "auditor";
@Override
public Optional<String> getCurrentAuditor() {
return Optional.of(auditor);
}
public void setAuditor(String auditor) {
this.auditor = auditor;
}
}
To tie all that together, we use the Java configuration both to declare an AuditorAware bean and to activate auditing:
Example 15. Sample Auditing Configuration
@Configuration
@EnableCouchbaseAuditing //this activates auditing
public class AuditConfiguration extends AbstractCouchbaseConfiguration {
//... a few abstract methods omitted here
// this creates the auditor aware bean that will feed the annotations
@Bean
public NaiveAuditorAware testAuditorAware() {
return new NaiveAuditorAware();
}
}