shardingsphere-jdbc Horizontal Table Sharding Study Notes (Part 2)

The configuration breaks down into two parts: defining the logical datasource, and defining the real datasources behind it. For each logical table you define its sharding rules, and, if a distributed key needs to be generated, the key generation algorithm as well. These map to the spring.shardingsphere.datasource. and spring.shardingsphere.rules.sharding property prefixes respectively.
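As a sketch, an application.yml along these lines wires the two prefixes together. The datasource, table, and column names here are my own placeholders, and exact property names can vary slightly between 5.x releases, so treat this as a shape reference rather than a drop-in config:

```yaml
spring:
  shardingsphere:
    datasource:
      names: ds0                       # real datasources behind the logical one
      ds0:
        type: com.zaxxer.hikari.HikariDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        jdbc-url: jdbc:mysql://localhost:3306/demo
        username: root
        password: root
    rules:
      sharding:
        tables:
          t_order:                     # the logical table
            actual-data-nodes: ds0.t_order_$->{0..1}
            table-strategy:
              standard:
                sharding-column: order_id
                sharding-algorithm-name: order-inline
            key-generate-strategy:     # distributed key generation
              column: order_id
              key-generator-name: snowflake
        sharding-algorithms:
          order-inline:
            type: INLINE
            props:
              algorithm-expression: t_order_$->{order_id % 2}
        key-generators:
          snowflake:
            type: SNOWFLAKE
```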
Note that with SNOWFLAKE keys the database column must be BIGINT; INT is too small to hold the generated values.
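The overflow is easy to see from the bit layout: a snowflake ID packs a millisecond timestamp into its high bits, so values blow past the INT range almost immediately. A back-of-the-envelope check, assuming the common 41/10/12-bit layout (not necessarily ShardingSphere's exact generator):

```java
public class SnowflakeWidth {
    // In the usual snowflake layout, 41 bits of millisecond timestamp sit above
    // 22 low bits (worker id + sequence). Even with the low bits zeroed, the
    // timestamp alone pushes the value far past what a 32-bit INT can hold.
    static long sampleId(long millisSinceEpoch) {
        return millisSinceEpoch << 22;
    }

    public static void main(String[] args) {
        // one hour after the algorithm's epoch: already on the order of 1.5e13
        System.out.println(sampleId(3_600_000L));
        System.out.println(Integer.MAX_VALUE);
    }
}
```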
Startup error:

```
***************************
APPLICATION FAILED TO START
***************************

Description:

An attempt was made to call a method that does not exist. The attempt was made from the following location:

    org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1.<init>(ShardingSphereYamlConstructor.java:44)

The following method did not exist:

    'void org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1.setCodePointLimit(int)'

The calling method's class, org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1, was loaded from the following location:

    jar:file:/.m2/repository/org/apache/shardingsphere/shardingsphere-infra-util/5.2.1/shardingsphere-infra-util-5.2.1.jar!/org/apache/shardingsphere/infra/util/yaml/constructor/ShardingSphereYamlConstructor$1.class

The called method's class, org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1, is available from the following locations:

    jar:file:/.m2/repository/org/apache/shardingsphere/shardingsphere-infra-util/5.2.1/shardingsphere-infra-util-5.2.1.jar!/org/apache/shardingsphere/infra/util/yaml/constructor/ShardingSphereYamlConstructor$1.class

The called method's class hierarchy was loaded from the following locations:

    null: file:/.m2/repository/org/apache/shardingsphere/shardingsphere-infra-util/5.2.1/shardingsphere-infra-util-5.2.1.jar
    org.yaml.snakeyaml.LoaderOptions: file:/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar

Action:

Correct the classpath of your application so that it contains a single, compatible version of org.apache.shardingsphere.infra.util.yaml.constructor.ShardingSphereYamlConstructor$1
```

This is clearly a dependency conflict; the culprit is this code:
```java
public ShardingSphereYamlConstructor(final Class<?> rootClass) {
    super(rootClass, new LoaderOptions() {
        {
            setCodePointLimit(Integer.MAX_VALUE);
        }
    });
    ShardingSphereYamlConstructFactory.getInstances().forEach(each -> typeConstructs.put(each.getType(), each));
    ShardingSphereYamlShortcutsFactory.getAllYamlShortcuts().forEach((key, value) -> addTypeDescription(new TypeDescription(value, key)));
    this.rootClass = rootClass;
}
```

It is a snakeyaml version conflict: the LoaderOptions in the snakeyaml version on the classpath has no setCodePointLimit method. Spring Boot's managed dependency is 1.30; explicitly declaring 1.33 fixes it.
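A quick way to confirm what is actually on the classpath is a reflective probe (my own helper, not part of ShardingSphere or Spring Boot): it checks whether a named class exposes a given method, which is exactly the question the failure analyzer is answering.

```java
public class ClasspathProbe {
    // Returns true if className is loadable and declares methodName with the
    // given parameter types; false if the class or the method is absent.
    static boolean hasMethod(String className, String methodName, Class<?>... paramTypes) {
        try {
            Class.forName(className).getMethod(methodName, paramTypes);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // With snakeyaml 1.30 on the classpath this prints false; with 1.33, true.
        System.out.println(hasMethod("org.yaml.snakeyaml.LoaderOptions",
                "setCodePointLimit", int.class));
    }
}
```

Running mvn dependency:tree is the other standard check, but a runtime probe like this also catches cases where the build tool and the deployed classpath disagree.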
```xml
<dependency>
    <groupId>org.yaml</groupId>
    <artifactId>snakeyaml</artifactId>
    <version>1.33</version>
</dependency>
```

Errors caused by misconfiguration

There are quite a few kinds of these, for example:

  • DataNodesMissedWithShardingTableException
  • ShardingRuleNotFoundException
  • InconsistentShardingTableMetaDataException
and so on. Startup simply fails, because the error is raised while reading and parsing the configuration.
For these, check the specific error against the corresponding configuration.
Oddly, though, some of these errors are printed without a detailed message. For example:
```
Caused by: org.apache.shardingsphere.sharding.exception.metadata.DataNodesMissedWithShardingTableException: null
    at org.apache.shardingsphere.sharding.rule.TableRule.lambda$checkRule$4(TableRule.java:246) ~[shardingsphere-sharding-core-5.2.1.jar:5.2.1]
    at org.apache.shardingsphere.infra.util.exception.ShardingSpherePreconditions.checkState(ShardingSpherePreconditions.java:41) ~[shardingsphere-infra-util-5.2.1.jar:5.2.1]
    at org.apache.shardingsphere.sharding.rule.TableRule.checkRule(TableRule.java:245) ~[shardingsphere-sharding-core-5.2.1.jar:5.2.1]
```

It turns out the exception's base class is never passed the reason via super, so the message is null. This has already been fixed on the master branch:
```java
public ShardingSphereSQLException(final SQLState sqlState, final int typeOffset, final int errorCode, final String reason, final Object... messageArguments) {
    this(sqlState.getValue(), typeOffset, errorCode, reason, messageArguments);
}

public ShardingSphereSQLException(final String sqlState, final int typeOffset, final int errorCode, final String reason, final Object... messageArguments) {
    this.sqlState = sqlState;
    vendorCode = typeOffset * 10000 + errorCode;
    this.reason = null == reason ? null : String.format(reason, messageArguments);
    // missing super(reason) here
}
```

A database-generated key cannot be used as the route key

A distributed generated key can, however. This is covered in the FAQ; I hit this error at first because the distributed key was misconfigured.
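The effect of the missing super call is easy to reproduce with a toy exception pair (my own sketch, not the actual ShardingSphere classes):

```java
public class MessageDemo {
    // Stores the reason itself but never hands it to RuntimeException,
    // mirroring the 5.2.1 bug: getMessage() stays null.
    static class BuggyException extends RuntimeException {
        final String reason;
        BuggyException(String reason) {
            this.reason = reason; // implicit super() carries no message
        }
    }

    // The fix: pass the reason up, so getMessage() returns it.
    static class FixedException extends RuntimeException {
        FixedException(String reason) {
            super(reason);
        }
    }

    public static void main(String[] args) {
        System.out.println(new BuggyException("data nodes missed").getMessage()); // null
        System.out.println(new FixedException("data nodes missed").getMessage()); // data nodes missed
    }
}
```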
The original FAQ entry (translated):

[Sharding] Besides its built-in distributed auto-increment key, can ShardingSphere also support the database's native auto-increment primary key?

Answer:
Yes, it can. But there is a restriction: the native auto-increment key cannot also be used as the sharding key. ShardingSphere does not know the table schema, and the native auto-increment key is not part of the original SQL, so it cannot parse that column as a sharding column. If the auto-increment key is not the sharding key, there is no issue and it is returned normally; if it is also the sharding key, ShardingSphere cannot resolve its sharding value, the SQL is routed to multiple tables, and the application's correctness is broken. Returning the native auto-increment key also requires the INSERT to ultimately route to a single table, so for an INSERT that would route to multiple tables, the auto-increment key returns zero.
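The FAQ's point can be seen with a toy router (my own illustration, not ShardingSphere's routing code): choosing one physical table requires the sharding value up front, and a database-generated key simply does not exist until after the row is inserted.

```java
public class RouteSketch {
    // Route by order_id % 2 into t_order_0 / t_order_1, the way an inline
    // sharding expression would.
    static String route(Long orderId) {
        if (orderId == null) {
            // A DB auto-increment key is unknown before the INSERT runs,
            // so there is no way to pick a single target table.
            return "all tables (broadcast)";
        }
        return "t_order_" + (orderId % 2);
    }

    public static void main(String[] args) {
        System.out.println(route(7L));   // t_order_1
        System.out.println(route(10L));  // t_order_0
        System.out.println(route(null)); // all tables (broadcast)
    }
}
```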
