Compare commits


59 Commits

Author SHA1 Message Date
github-actions[bot] e034606465 Update version to v3.8.3-patch.2 2025-02-28 09:06:23 +00:00
icey-yu 41a26c185b Merge pull request #3174 from icey-yu/fix-push
fix: Offline push does not have a badge && Android offline push (#3146)
2025-02-28 17:02:00 +08:00
icey-yu 5e0200cc91 fix: Offline push does not have a badge && Android offline push (#3146)
* fix: offline push can display badge

* feat: strategy

* feat: log

* feat: log

* chore: offlinepush

* fix: offlinepush

* fix: log

(cherry picked from commit 14393b0f53)
2025-02-28 16:59:17 +08:00
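The badge fix above (#3146) is about the offline-push payload carrying the receiver's unread count so the OS can render an app-icon badge. A minimal sketch of that idea in Go, assuming a hypothetical `PushPayload` type and `buildOfflinePush` helper rather than OpenIM's actual push schema:

```go
package main

import "fmt"

// PushPayload is a hypothetical offline-push payload; real gateways
// (APNs/FCM) each have their own schema, but both accept a badge count.
type PushPayload struct {
	Title   string
	Content string
	Badge   int // unread count shown on the app icon
}

// buildOfflinePush attaches the receiver's unread count as the badge.
// The +1 accounts for the message that triggered this push. Without a
// badge field the notification arrives but the icon count never
// changes, which is the symptom the fix above addresses.
func buildOfflinePush(title, content string, unread int) PushPayload {
	return PushPayload{Title: title, Content: content, Badge: unread + 1}
}

func main() {
	p := buildOfflinePush("Alice", "hello", 4)
	fmt.Printf("push with badge=%d\n", p.Badge)
}
```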
icey-yu cb4ac19968 Merge pull request #3062 from icey-yu/fix-pro
Fix: prometheus config
2025-01-14 18:26:37 +08:00
icey-yu 3fec3b4321 fix: prometheus config 2025-01-14 18:23:08 +08:00
icey-yu 0f8d4d0943 fix: prometheus config 2025-01-14 18:21:44 +08:00
icey-yu e20e52b779 fix: prometheus config 2025-01-14 18:19:25 +08:00
icey-yu 28f00a8ffb Merge pull request #3061 from icey-yu/fix-pro
fix: node exporter image
2025-01-14 18:16:22 +08:00
icey-yu 376c1b87ee fix: add autoPort && prometheus port discovery 2025-01-14 18:12:00 +08:00
icey-yu c0a0b4da62 Merge pull request #3059 from icey-yu/fix-pro
fix: add autoSetPort && prometheus port discovery
2025-01-14 17:55:35 +08:00
icey-yu 89503aa529 fix: add autoPort && prometheus port discovery 2025-01-14 17:46:08 +08:00
icey-yu 1e68f99d11 fix: add autoPort && prometheus port discovery 2025-01-14 17:44:44 +08:00
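The "autoPort && prometheus port discovery" commits above suggest that, instead of hard-coding metrics ports, a service can bind an OS-assigned port at startup and publish it through service discovery so Prometheus can locate the scrape target. Below is a sketch of the auto-port half only; `autoPort` is an illustrative helper, not the project's actual API, and the registry call is left as a comment:

```go
package main

import (
	"fmt"
	"net"
)

// autoPort asks the kernel for a free TCP port by listening on ":0",
// one common way to implement an "autoSetPort" option: the service
// then registers the assigned port so a scraper can discover it.
func autoPort() (net.Listener, int, error) {
	ln, err := net.Listen("tcp", ":0")
	if err != nil {
		return nil, 0, err
	}
	return ln, ln.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	ln, port, err := autoPort()
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	// In a real service this port would be reported to the service
	// registry, e.g. as a metrics endpoint for Prometheus discovery.
	fmt.Printf("metrics listener on port %d\n", port)
}
```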
icey-yu 7912ac0623 Merge pull request #3057 from openimsdk/cherry-pick-f7a1d5b
deps: Merge #3055 PR into release-v3.8.3
2025-01-10 17:34:10 +08:00
icey-yu ade108d656 update: env (#3055) 2025-01-10 09:30:31 +00:00
icey-yu 222a2f0029 Merge pull request #3054 from icey-yu/fix-v3
Fix: Service Discovery err in v3.8.3
2025-01-10 14:34:31 +08:00
icey-yu 8c6d734f88 fix: discovery 2025-01-10 14:20:43 +08:00
github-actions[bot] 1339121e29 Update version to v3.8.3 2025-01-07 11:30:49 +00:00
chao 382e5947ac Merge pull request #3043 from openimsdk/cherry-pick-1e8a106
deps: Merge #3038 #3040 PRs into pre-release-v3.8.3
2025-01-07 19:24:27 +08:00
chao 12800c1421 Merge pull request #3044 from withchao/cherry-pick-1e8a106
fix: resolving v3.8.3 conflicts
2025-01-07 19:24:07 +08:00
withchao d0cd40aae6 resolving v3.8.3 conflicts 2025-01-07 19:22:54 +08:00
chao 36c18ce7ea fix: GetUsersOnline returns an error in the online list (#3040)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* upgrading pkg tools

* fix

* fix

* optimize log output

* feat: support GetLastMessage

* feat: support GetLastMessage

* feat: s3 switch

* feat: s3 switch

* fix: GetUsersOnline
2025-01-07 10:41:54 +00:00
icey-yu 4b20286a96 feat: change appNotificationAccount to appManagerAccount && fix: add env check when config center is enabled && fix: error return (#3038)
* feat: change appNotificationAccount to appManagerAccount && fix: add env check when config center is enabled && fix: error return

* fix: err
2025-01-07 10:41:54 +00:00
chao 3b3ce0d8f5 Merge pull request #3032 from openimsdk/cherry-pick-a1b5f05
deps: Merge #3026 #3029 PRs into pre-release-v3.8.3
2025-01-04 11:44:06 +08:00
chao 34c6fe5838 Merge pull request #3033 from withchao/cherry-pick-a1b5f05
fix: resolving v3.8.3 conflicts
2025-01-04 11:43:01 +08:00
withchao 7415dff32c fix: resolving v3.8.3 conflicts 2025-01-04 11:41:12 +08:00
chao 66edc76c54 feat: support GetLastMessage (#3029)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* upgrading pkg tools

* fix

* fix

* optimize log output

* feat: support GetLastMessage

* feat: support GetLastMessage

* feat: s3 switch

* feat: s3 switch
2025-01-04 03:09:57 +00:00
chao 0b78948f62 feat: optimize log output (#3026)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* upgrading pkg tools

* fix

* fix

* optimize log output
2025-01-04 03:09:57 +00:00
chao a395c82e0d Merge pull request #3023 from openimsdk/cherry-pick-435d8bb
deps: Merge #3022 PR into pre-release-v3.8.3
2024-12-29 14:50:50 +08:00
chao 3d064d66f2 fix: online status error (#3022)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* upgrading pkg tools

* fix

* fix
2024-12-29 06:50:32 +00:00
Monet Lee 5f333426a3 build: create services image workflows. 2024-12-29 10:43:48 +08:00
chao 9a122d2eb3 Merge pull request #3014 from openimsdk/cherry-pick-7563aff
deps: Merge #2914 #2957 #2972 #2973 #2977 #2985 #2993 #2995 #3001 #3002 #3007 #3009 PRs into pre-release-v3.8.3
2024-12-27 18:06:49 +08:00
chao b805113870 Merge pull request #3018 from withchao/cherry-pick-7563aff
fix: resolving v3.8.3 conflicts
2024-12-27 18:05:16 +08:00
withchao 2676295a4c merge code 2024-12-27 17:34:56 +08:00
OpenIM-Gordon d3b2587743 fix: The message @ information will be set only for members in the group. (#3009) 2024-12-27 08:38:05 +00:00
chao f9e6d07581 feat: support message cache (#3007)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* upgrading pkg tools

* redis msg cache

* redis msg cache

* redis msg cache

* redis msg cache

* redis msg cache

* redis msg cache

* redis msg cache
2024-12-27 08:38:05 +00:00
icey-yu 52052a9165 fix: redis save error when KickTokens (#3002) 2024-12-27 08:38:05 +00:00
icey-yu 3e872d6c5a fix: when EnableHistoryForNewMembers is disabled, a new group member can still read the last message. (#3001) 2024-12-27 08:38:05 +00:00
chao 9ac35c9059 feat: optimize error stack information (#2995)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* upgrading pkg tools
2024-12-27 08:38:05 +00:00
chao de42eb1f11 feat: Optimizing RPC call (#2993)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* rpc client

* rpc client

* rpc client

* rpc client

* rpc client

* rpc client

* rpc client

* rpc client
2024-12-27 08:38:05 +00:00
chao b26b0a422c feat: Optimize Scheduled Task (#2985)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks

* refactoring scheduled tasks
2024-12-27 08:38:05 +00:00
chao 248cb5c107 fix: when fetching a referenced message, it indicates that the original message has been deleted. (#2977)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect

* fix: quote message error revoke
2024-12-27 08:38:05 +00:00
Monet Lee 035baff1b5 docs: improve deployment docs in kubernetes. (#2973)
* docs: improve deployment docs in kubernetes.

* move docs path.

* format contents.

* update contents.

* build: update deployment env.

* docs: update deploy docs.

* build: add kafka secret and dependencies.

* docs: update deployment docs.

* Update docs contents.

* update docs contents.
2024-12-27 08:38:05 +00:00
chao de94014b1b fix: modifying other fields while setting IsPrivateChat does not take effect (#2972)
* pb

* fix: Modifying other fields while setting IsPrivateChat does not take effect
2024-12-27 08:38:05 +00:00
icey-yu aa35155ccb fix: rpc panic recover (#2957) 2024-12-27 08:38:05 +00:00
icey-yu 1542a0c98d fix: Add a lead time for the token's issuance time. (#2914) 2024-12-27 08:38:04 +00:00
OpenIM-Robot 716e06f3ad pb (#2958) (#2961)
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
2024-12-12 18:55:55 +08:00
OpenIM-Robot 3fe2053d4f fix: fetch message return isEnd and endSeq panic. (#2959) (#2960)
* fix: server can return isEnd to control fetch messages when sdk pull messages end normally.

* fix: server can return isEnd to control fetch messages when sdk pull messages end normally.

* fix: server can return isEnd to control fetch messages when sdk pull messages end normally.

Co-authored-by: OpenIM-Gordon <1432970085@qq.com>
2024-12-12 18:55:12 +08:00
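The isEnd fix above (#2959/#2960) describes a server-controlled pull protocol: each page of messages carries an isEnd flag (and an end seq) so the SDK stops pulling exactly when the server says the range is exhausted, rather than guessing from a short page. A self-contained sketch of that loop, with `pullResp` and `fetchPage` as hypothetical stand-ins for the real RPC types:

```go
package main

import "fmt"

// pullResp loosely mirrors the behavior the fix describes: a page of
// message seqs plus an isEnd flag and end seq. Field names here are
// illustrative, not the actual protobuf schema.
type pullResp struct {
	Seqs   []int64
	EndSeq int64
	IsEnd  bool
}

// fetchPage stands in for one server round trip over seqs [from, from+size).
func fetchPage(from, size, max int64) pullResp {
	var seqs []int64
	for s := from; s < from+size && s <= max; s++ {
		seqs = append(seqs, s)
	}
	end := from + size - 1
	return pullResp{Seqs: seqs, EndSeq: end, IsEnd: end >= max}
}

func main() {
	const pageSize, maxSeq = 10, 23
	for from := int64(1); ; {
		resp := fetchPage(from, pageSize, maxSeq)
		fmt.Printf("pulled %d msgs, isEnd=%v\n", len(resp.Seqs), resp.IsEnd)
		if resp.IsEnd { // server-controlled termination, per the fix
			break
		}
		from = resp.EndSeq + 1
	}
}
```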
Monet Lee 5430bc4569 Merge pull request #2956 from openimsdk/cherry-pick-cf740a1
deps: Merge #2891 #2893 #2901 #2906 #2910 #2911 #2924 #2927 #2947 #2949 PRs into pre-release-v3.8.3
2024-12-12 15:19:33 +08:00
Monet Lee 01c0d9ca89 build: update go pkg. 2024-12-12 15:11:12 +08:00
OpenIM-Gordon 3c6fbabded fix: server can return isEnd to control fetch messages when sdk pull messages end normally. (#2949) 2024-12-12 07:05:17 +00:00
icey-yu 7f471c44bf fix: Only print panic function frame && feat: log.ZPanic (#2947)
* fix: log print panic

* Merge: main
2024-12-12 07:05:17 +00:00
icey-yu f3a78260a8 feat: Change upload logs systemType to AppFramework. (#2927)
* feat: change upload logs systemType to AppFramework

* feat: change upload logs systemType to AppFramework

* Merge: main
2024-12-12 07:05:17 +00:00
chao be4061da85 feat: seq user and conversation seq synchronization (#2924)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: implement no gob encoder.

* update unitTest content.

* Update hub_server.go

* feat: GroupApplicationAgreeMemberEnterNotification

* fix: encoder replace to json encoder.

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* merge: merge main code into js branch. (#2648)

* feat: update group notification when set to null. (#2590)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* feat: update group notification when set to null.

* update log standard.

* feat: add long time push msg in prometheus (#2584)

* feat: add long time push msg in prometheus

* fix: log print

* fix: go mod

* fix: log msg

* fix: log init

* feat: push msg

* feat: go mod ,remove cgo package

* feat: remove error log

* feat: test dummy push

* feat: redis pool config

* feat: push to kafka log

* feat: supports getting messages based on session ID and seq (#2582)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: implement request batch count limit. (#2591)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: getting messages based on session ID and seq (#2595)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: avoid pulling messages from sessions with a large number of max seq values of 0 (#2602)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: improve db structure in `storage/controller` (#2604)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* refactor: improve db structure in `storage/controller`

* feat: implement offline push using kafka (#2600)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: implement offline push.

* feat: implement batch Push split

* update go mod

* feat: implement kafka producer and consumer.

* update format,

* add PushMQ log.

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: update OfflinePushConsumerHandler.

* feat: API supports gzip (#2609)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Fix err (#2608)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: add rocksTimeout

* feat: wrap logs

* feat: add logs

* feat: listen config

* feat: enable listen TIME_WAIT port

* feat: add logs

* feat: cache batch

* chore: enable fullUserCache

* feat: push rpc num

* feat: push err

* feat: with operationID

* feat: sleep

* feat: change 1s

* feat: change log

* feat: implement Getbatch in rpcCache.

* feat: print getOnline cost

* feat: change log

* feat: change kafka and push config

* feat: del interface

* feat: fix err

* feat: change config

* feat: go mod

* feat: change config

* feat: change config

* feat: add sleep in push

* feat: warn logs

* feat: logs

* feat: logs

* feat: change port

* feat: start config

* feat: remove port reuse

* feat: prometheus config

* feat: prometheus config

* feat: prometheus config

* feat: add long time send msg to grafana

* feat: init

* feat: init

* feat: implement offline push.

* feat: batch get user online

* feat: implement batch Push split

* update go mod

* Revert "feat: change port"

This reverts commit 06d5e944

* feat: change port

* feat: change config

* feat: implement kafka producer and consumer.

* update format,

* add PushMQ log.

* feat: get all online users and init push

* feat: lock in online cache

* feat: config

* fix: init online status

* fix: add logs

* fix: userIDs

* fix: add logs

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: method name

* fix: update OfflinePushConsumerHandler.

* fix: prommetrics

* fix: add logs

* fix: ctx

* fix: log

* fix: config

* feat: change port

* fix: atomic online cache status

---------

Co-authored-by: Monet Lee <monet_lee@163.com>

* feature: add GetConversationsHasReadAndMaxSeq interface to the WebSocket API. (#2611)

* fix: lru lock (#2613)

* fix: lru lock

* fix: lru lock

* fix: lru lock

* fix: nil pointer error on close (#2618)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: create group can push notification (#2617)

* fix: blockage caused by listen error (#2620)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: go.mod (#2621)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: improve searchMsg implement. (#2614)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* remove unused script.

* feat: improve searchMsg implement.

* update mongo config.

* Fix lock (#2622)

* fix: log

* fix: lock

* fix: update setGroupInfoEX field name. (#2625)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name (#2626)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name

* feat: msg gateway add log (#2631)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: update setGroupInfoEx func name and field. (#2634)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEx func name and field.

* refactor: update groupinfoEx field.

* refactor: update database name in mongodb.yml

* add groupName Condition

* fix: fix setConversations req fill. (#2645)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: fix setConversations req fill.

* fix: GetMsgBySeqs boundary issues (#2647)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: the attribute version is obsolete, remove it (#2644)

* refactor: update UserRegister request field. (#2650)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>

* update go mod

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* merge: update code from main to v3.8-js-sdk-only. (#2720)

* fix: fix update groupName invalid. (#2673)

* refactor: change platform to platformID (#2670)

* feat: don`t return nil data (#2675)

Co-authored-by: Monet Lee <monet_lee@163.com>

* refactor: update fields type in userStatus and check registered. (#2676)

* fix: usertoken auth. (#2677)

* refactor: update fields type in userStatus and check registered.

* fix: usertoken auth.

* update contents.

* update content.

* update

* fix

* update pb file.

* feat: add friend agree after callback (#2680)

* fix: sn not sort (#2682)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: add GetAdminToken interface. (#2684)

* refactor: add GetAdminToken interface.

* update config.

* fix: admin token (#2686)

* fix: update workflows logic. (#2688)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* fix: admin token (#2687)

* update the front image (#2692)

* update the front image

* update version

* feat: improve publish docker image workflows (#2697)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic. (#2700)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic.

* feat: Msg filter (#2703)

* feat: msg filter

* feat: msg filter

* feat: msg filter

* feat: provide the interface required by js sdk (#2712)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Line webhook (#2716)

* feat: online and offline webhook

* feat: online and offline webhook

* feat: remove zk

* fix: the message I sent is not set to read seq in mongodb (#2718)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: cannot modify group member avatars (#2719)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* merge

merge: update code from main to v3.8-js-sdk-only. (#2818)

* feat: implement merge milestone PR to target-branch. (#2796)

* build: improve workflows logic. (#2801)

* fix: improve time condition check method. (#2804)

* fix: improve time condition check method.

* fix

* fix: webhook before online push (#2805)

* fix: set own read seq in MongoDB when sender send a message. (#2808)

* fix: solve err Notification when setGroupInfo. (#2806)

* fix: solve err Notification when setGroupInfo.

* build: update checkout version.

* fix: update notification contents.

* Introducing OpenIM Guru on Gurubase.io (#2788)

* feat: support app update service (#2811)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: add ApplicationVersion

* feat: add ApplicationVersion

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: ApplicationVersion move chat (#2813)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: ApplicationVersion move chat

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: improve condition check. (#2815)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: support text ping pong

* feat: support text ping pong

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* fix: concurrent write to websocket connection

* fix: concurrent write to websocket connection

* fix: concurrent write to websocket connection

* fix: group member update face_url

* feat: seq user hook set conversation

* feat: seq user hook set conversation

* feat: seq user hook set conversation

* feat: seq user hook set conversation

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: OpenIM-Gordon <46924906+FGadvancer@users.noreply.github.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
2024-12-12 07:05:17 +00:00
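The headline of the large commit above, "seq user and conversation seq synchronization" (#2924), concerns keeping a user's per-conversation read seq consistent with the conversation's max seq. A minimal sketch of that invariant, assuming an in-memory map in place of the real Redis/MongoDB storage; names are illustrative, not OpenIM's actual schema:

```go
package main

import "fmt"

// setReadSeq sketches the synchronization rule: a user's read seq for
// a conversation may only move forward, and never past the
// conversation's max seq.
func setReadSeq(readSeqs map[string]int64, conversationID string, newSeq, maxSeq int64) int64 {
	if newSeq > maxSeq {
		newSeq = maxSeq // clamp: cannot have read beyond what exists
	}
	if cur := readSeqs[conversationID]; newSeq <= cur {
		return cur // monotonic: ignore stale or duplicate updates
	}
	readSeqs[conversationID] = newSeq
	return newSeq
}

func main() {
	seqs := map[string]int64{"conv1": 5}
	fmt.Println(setReadSeq(seqs, "conv1", 9, 12))  // advances to 9
	fmt.Println(setReadSeq(seqs, "conv1", 7, 12))  // stale, stays 9
	fmt.Println(setReadSeq(seqs, "conv1", 99, 12)) // clamped to 12
}
```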
Monet Lee c62945ed05 fix: update set seq implement. (#2911)
* update set seq implement.

* revert seq logic.

* Update func logic.

* add log print.
2024-12-12 07:05:17 +00:00
chao 7d2fd64429 fix: group member update face_url (#2910)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: implement no gob encoder.

* update unitTest content.

* Update hub_server.go

* feat: GroupApplicationAgreeMemberEnterNotification

* fix: encoder replace to json encoder.

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* merge: merge main code into js branch. (#2648)

* feat: update group notification when set to null. (#2590)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* feat: update group notification when set to null.

* update log standard.

* feat: add long time push msg in prometheus (#2584)

* feat: add long time push msg in prometheus

* fix: log print

* fix: go mod

* fix: log msg

* fix: log init

* feat: push msg

* feat: go mod ,remove cgo package

* feat: remove error log

* feat: test dummy push

* feat: redis pool config

* feat: push to kafka log

* feat: supports getting messages based on session ID and seq (#2582)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: implement request batch count limit. (#2591)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: getting messages based on session ID and seq (#2595)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: avoid pulling messages from sessions with a large number of max seq values of 0 (#2602)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: improve db structure in `storage/controller` (#2604)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* refactor: improve db structure in `storage/controller`

* feat: implement offline push using kafka (#2600)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: implement offline push.

* feat: implement batch Push split

* update go mod

* feat: implement kafka producer and consumer.

* update format.

* add PushMQ log.

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: update OfflinePushConsumerHandler.

* feat: API supports gzip (#2609)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Fix err (#2608)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: add rocksTimeout

* feat: wrap logs

* feat: add logs

* feat: listen config

* feat: enable listen TIME_WAIT port

* feat: add logs

* feat: cache batch

* chore: enable fullUserCache

* feat: push rpc num

* feat: push err

* feat: with operationID

* feat: sleep

* feat: change 1s

* feat: change log

* feat: implement Getbatch in rpcCache.

* feat: print getOnline cost

* feat: change log

* feat: change kafka and push config

* feat: del interface

* feat: fix err

* feat: change config

* feat: go mod

* feat: change config

* feat: change config

* feat: add sleep in push

* feat: warn logs

* feat: logs

* feat: logs

* feat: change port

* feat: start config

* feat: remove port reuse

* feat: prometheus config

* feat: prometheus config

* feat: prometheus config

* feat: add long time send msg to grafana

* feat: init

* feat: init

* feat: implement offline push.

* feat: batch get user online

* feat: implement batch Push split

* update go mod

* Revert "feat: change port"

This reverts commit 06d5e944

* feat: change port

* feat: change config

* feat: implement kafka producer and consumer.

* update format.

* add PushMQ log.

* feat: get all online users and init push

* feat: lock in online cache

* feat: config

* fix: init online status

* fix: add logs

* fix: userIDs

* fix: add logs

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: method name

* fix: update OfflinePushConsumerHandler.

* fix: prommetrics

* fix: add logs

* fix: ctx

* fix: log

* fix: config

* feat: change port

* fix: atomic online cache status

---------

Co-authored-by: Monet Lee <monet_lee@163.com>

* feature: add GetConversationsHasReadAndMaxSeq interface to the WebSocket API. (#2611)

* fix: lru lock (#2613)

* fix: lru lock

* fix: lru lock

* fix: lru lock

* fix: nil pointer error on close (#2618)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: create group can push notification (#2617)

* fix: blockage caused by listen error (#2620)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: go.mod (#2621)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: improve searchMsg implement. (#2614)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* remove unused script.

* feat: improve searchMsg implement.

* update mongo config.

* Fix lock (#2622)

* fix:log

* fix: lock

* fix: update setGroupInfoEX field name. (#2625)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name (#2626)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name

* feat: msg gateway add log (#2631)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: update setGroupInfoEx func name and field. (#2634)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEx func name and field.

* refactor: update groupinfoEx field.

* refactor: update database name in mongodb.yml

* add groupName Condition

* fix: fix setConversations req fill. (#2645)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: fix setConversations req fill.

* fix: GetMsgBySeqs boundary issues (#2647)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: the attribute version is obsolete, remove it (#2644)

* refactor: update Userregister request field. (#2650)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>

* update go mod

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* merge: update code from main to v3.8-js-sdk-only. (#2720)

* fix: fix update groupName invalid. (#2673)

* refactor: change platform to platformID (#2670)

* feat: don't return nil data (#2675)

Co-authored-by: Monet Lee <monet_lee@163.com>

* refactor: update fields type in userStatus and check registered. (#2676)

* fix: usertoken auth. (#2677)

* refactor: update fields type in userStatus and check registered.

* fix: usertoken auth.

* update contents.

* update content.

* update

* fix

* update pb file.

* feat: add friend agree after callback (#2680)

* fix: sn not sort (#2682)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: add GetAdminToken interface. (#2684)

* refactor: add GetAdminToken interface.

* update config.

* fix: admin token (#2686)

* fix: update workflows logic. (#2688)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* fix: admin token (#2687)

* update the front image (#2692)

* update the front image

* update version

* feat: improve publish docker image workflows (#2697)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic. (#2700)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic.

* feat: Msg filter (#2703)

* feat: msg filter

* feat: msg filter

* feat: msg filter

* feat: provide the interface required by js sdk (#2712)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Line webhook (#2716)

* feat: online and offline webhook

* feat: online and offline webhook

* feat: remove zk

* fix: the message I sent is not set to read seq in mongodb (#2718)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: cannot modify group member avatars (#2719)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* merge

* merge: update code from main to v3.8-js-sdk-only.  (#2818)

* feat: implement merge milestone PR to target-branch. (#2796)

* build: improve workflows logic. (#2801)

* fix: improve time condition check method. (#2804)

* fix: improve time condition check method.

* fix

* fix: webhook before online push (#2805)

* fix: set own read seq in MongoDB when sender send a message. (#2808)

* fix: solve err Notification when setGroupInfo. (#2806)

* fix: solve err Notification when setGroupInfo.

* build: update checkout version.

* fix: update notification contents.

* Introducing OpenIM Guru on Gurubase.io (#2788)

* feat: support app update service (#2811)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: add ApplicationVersion

* feat: add ApplicationVersion

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: ApplicationVersion move chat (#2813)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: ApplicationVersion move chat

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: improve condition check. (#2815)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: support text ping pong

* feat: support text ping pong

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* fix: concurrent write to websocket connection

* fix: concurrent write to websocket connection

* fix: concurrent write to websocket connection

* fix: group member update face_url

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: OpenIM-Gordon <46924906+FGadvancer@users.noreply.github.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
2024-12-12 07:05:17 +00:00
icey-yu 1f91c756a4 fix: go mod (#2906) 2024-12-12 07:05:17 +00:00
Monet Lee 573b400af6 fix: improve crontask delete outdated data. (#2901)
* fix: improve crontask delete outdated data.

* remove comments.

* update go pkg

* revert go pkg.

* update gRPC Implement.

* feat: support using a config filter in deleteOutdatedData.

* update go pkg.

* update pkg.

* comment GOProxy.
2024-12-12 07:05:17 +00:00
Kevin Lee 22820fa189 chore: update admin front image version (#2893) 2024-12-12 07:05:17 +00:00
icey-yu 645a5925bd fix: Wrong Redis Error Check (#2891)
* fix: redis err check

* fix: log err
2024-12-12 07:05:17 +00:00
171 changed files with 6197 additions and 3811 deletions
+10 -4
@@ -6,12 +6,18 @@ ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
 PROMETHEUS_IMAGE=prom/prometheus:v2.45.6
 ALERTMANAGER_IMAGE=prom/alertmanager:v0.27.0
 GRAFANA_IMAGE=grafana/grafana:11.0.1
+NODE_EXPORTER_IMAGE=prom/node-exporter:v1.7.0
-OPENIM_WEB_FRONT_IMAGE=openim/openim-web-front:release-v3.8.1
+OPENIM_WEB_FRONT_IMAGE=openim/openim-web-front:release-v3.8.3
-OPENIM_ADMIN_FRONT_IMAGE=openim/openim-admin-front:release-v1.8.2
+OPENIM_ADMIN_FRONT_IMAGE=openim/openim-admin-front:release-v1.8.4

 #FRONT_IMAGE: use aliyun images
-#OPENIM_WEB_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-web-front:release-v3.8.1
+#OPENIM_WEB_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-web-front:release-v3.8.3
-#OPENIM_ADMIN_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-admin-front:release-v1.8.2
+#OPENIM_ADMIN_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-admin-front:release-v1.8.4

 DATA_DIR=./
+PROMETHEUS_PORT=19091
+ALERTMANAGER_PORT=19093
+GRAFANA_PORT=13000
+NODE_EXPORTER_PORT=19100
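The four new port variables pair with the new node-exporter image; presumably the compose file maps them onto the containers' default ports. A minimal sketch of such a mapping, assuming service names and container ports that this page does not show:

```yaml
# hypothetical docker-compose excerpt consuming the new .env entries
services:
  prometheus:
    image: ${PROMETHEUS_IMAGE}
    ports:
      - "${PROMETHEUS_PORT}:9090"    # host 19091 -> Prometheus default 9090
  node-exporter:
    image: ${NODE_EXPORTER_IMAGE}
    ports:
      - "${NODE_EXPORTER_PORT}:9100" # host 19100 -> node-exporter default 9100
```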
@@ -0,0 +1,91 @@
name: Build and release services Images

on:
  push:
    branches:
      - release-*
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      tag:
        description: "Tag version to be used for Docker image"
        required: true
        default: "v3.8.3"

jobs:
  build-and-push:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Log in to Aliyun Container Registry
        uses: docker/login-action@v2
        with:
          registry: registry.cn-hangzhou.aliyuncs.com
          username: ${{ secrets.ALIREGISTRY_USERNAME }}
          password: ${{ secrets.ALIREGISTRY_TOKEN }}

      - name: Extract metadata for Docker (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          tags: |
            type=ref,event=tag
            type=schedule
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=semver,pattern=v{{version}}
            # type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern=release-{{raw}}
            type=sha
            type=raw,value=${{ github.event.inputs.tag }}

      - name: Build and push Docker images
        run: |
          ROOT_DIR="build/images"
          for dir in "$ROOT_DIR"/*/; do
            # Find Dockerfile or *.dockerfile in a case-insensitive manner
            dockerfile=$(find "$dir" -maxdepth 1 -type f \( -iname 'dockerfile' -o -iname '*.dockerfile' \) | head -n 1)
            if [ -n "$dockerfile" ] && [ -f "$dockerfile" ]; then
              IMAGE_NAME=$(basename "$dir")
              echo "Building Docker image for $IMAGE_NAME with tags:"
              # Initialize tag arguments
              tag_args=()
              # Read each tag and append --tag arguments
              while IFS= read -r tag; do
                tag_args+=(--tag "${{ secrets.DOCKER_USERNAME }}/$IMAGE_NAME:$tag")
                tag_args+=(--tag "ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME:$tag")
                tag_args+=(--tag "registry.cn-hangzhou.aliyuncs.com/openimsdk/$IMAGE_NAME:$tag")
              done <<< "${{ steps.meta.outputs.tags }}"
              # Build and push the Docker image with all tags
              docker buildx build --platform linux/amd64,linux/arm64 \
                --file "$dockerfile" \
                "${tag_args[@]}" \
                --push "$dir"
            else
              echo "No valid Dockerfile found in $dir"
            fi
          done
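Besides tag pushes and releases, the workflow_dispatch trigger lets this build be started by hand. One way to do that with the GitHub CLI, assuming the workflow is addressed by its display name as shown above:

```bash
# Kick off the image build manually with an explicit tag
gh workflow run "Build and release services Images" -f tag=v3.8.3

# Then follow the progress of the run it started
gh run watch
```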
+1 -1
@@ -8,7 +8,7 @@ ENV SERVER_DIR=/openim-server
 WORKDIR $SERVER_DIR

 # Set the Go proxy to improve dependency resolution speed
-ENV GOPROXY=https://goproxy.io,direct
+# ENV GOPROXY=https://goproxy.io,direct

 # Copy all files from the current directory into the container
 COPY . .
+3
@@ -10,7 +10,10 @@ api:
   prometheus:
     # Whether to enable prometheus
     enable: true
+    # autoSetPorts indicates whether to automatically set the ports
+    autoSetPorts: true
     # Prometheus listening ports, must match the number of api.ports
+    # It will only take effect when autoSetPorts is set to false.
     ports: [ 12002 ]
     # This address can be accessed via a browser
     grafanaURL: http://127.0.0.1:13000/
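The same autoSetPorts pattern recurs in every service config below: either the service picks free metrics ports itself, or the static list is honored. A minimal sketch of the two modes (values illustrative):

```yaml
api:
  prometheus:
    enable: true
    autoSetPorts: true   # let the service assign free Prometheus ports at startup
    # To pin ports instead, disable auto-assignment; only then is `ports` honored:
    # autoSetPorts: false
    # ports: [ 12002 ]
```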
+2 -1
@@ -1,3 +1,4 @@
 cronExecuteTime: 0 2 * * *
 retainChatRecords: 365
-fileExpireTime: 90
+fileExpireTime: 180
+deleteObjectType: ["msg-picture","msg-file", "msg-voice","msg-video","msg-video-snapshot","sdklog"]
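For reference, the resulting crontask config reads as below; the comments, and the assumption that the retention fields are measured in days, are added here for clarity:

```yaml
cronExecuteTime: 0 2 * * *   # standard 5-field cron expression: run daily at 02:00
retainChatRecords: 365       # presumably days of chat history to keep
fileExpireTime: 180          # presumably days before uploaded files expire (was 90)
deleteObjectType: ["msg-picture","msg-file", "msg-voice","msg-video","msg-video-snapshot","sdklog"]
```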
+4
@@ -1,13 +1,17 @@
 rpc:
   # The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
   registerIP:
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10140, 10141, 10142, 10143, 10144, 10145, 10146, 10147, 10148, 10149, 10150, 10151, 10152, 10153, 10154, 10155 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12140, 12141, 12142, 12143, 12144, 12145, 12146, 12147, 12148, 12149, 12150, 12151, 12152, 12153, 12154, 12155 ]

 # IP address that the RPC/WebSocket service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+3 -1
@@ -1,6 +1,8 @@
 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that Prometheus listens on; each port corresponds to an instance of monitoring. Ensure these are managed accordingly
-  # Because four instances have been launched, four ports need to be specified
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12020, 12021, 12022, 12023, 12024, 12025, 12026, 12027, 12028, 12029, 12030, 12031, 12032, 12033, 12034, 12035 ]
+4
@@ -3,13 +3,17 @@ rpc:
   registerIP:
   # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
   listenIP: 0.0.0.0
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10170, 10171, 10172, 10173, 10174, 10175, 10176, 10177, 10178, 10179, 10180, 10181, 10182, 10183, 10184, 10185 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12170, 12171, 12172, 12173, 12174, 12175, 12176, 12177, 12178, 12179, 12180, 12182, 12183, 12184, 12185, 12186 ]

 maxConcurrentWorkers: 3
+4
@@ -3,13 +3,17 @@ rpc:
   registerIP:
   # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
   listenIP: 0.0.0.0
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10200 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12200 ]

 tokenPolicy:
+4
@@ -3,11 +3,15 @@ rpc:
   registerIP:
   # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
   listenIP: 0.0.0.0
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10220 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12220 ]
+4
@@ -3,11 +3,15 @@ rpc:
   registerIP:
   # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
   listenIP: 0.0.0.0
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10240 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12240 ]
+4
@@ -3,13 +3,17 @@ rpc:
   registerIP:
   # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
   listenIP: 0.0.0.0
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10260 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12260 ]
+4
@@ -3,13 +3,17 @@ rpc:
   registerIP:
   # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
   listenIP: 0.0.0.0
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10280 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12280 ]
+11
@@ -3,13 +3,17 @@ rpc:
   registerIP:
   # IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
   listenIP: 0.0.0.0
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
   # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10300 ]

 prometheus:
   # Enable or disable Prometheus monitoring
   enable: true
   # List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12300 ]
@@ -38,3 +42,10 @@ object:
     accessKeySecret:
     sessionToken:
     publicRead: false
+  aws:
+    region: ap-southeast-2
+    bucket: testdemo832234
+    accessKeyID:
+    secretAccessKey:
+    sessionToken:
+    publicRead: false
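The new aws block mirrors the other storage backends in this file. Assuming the object section selects its active backend with an enable-style key (that key is not visible in this hunk), switching to AWS S3 would look roughly like:

```yaml
object:
  enable: aws                              # assumption: selector key follows the other backends
  aws:
    region: ap-southeast-2
    bucket: testdemo832234
    accessKeyID: YOUR_ACCESS_KEY_ID        # placeholder credential
    secretAccessKey: YOUR_SECRET_ACCESS_KEY # placeholder credential
    sessionToken:
    publicRead: false
```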
+5 -1
@@ -3,11 +3,15 @@ rpc:
   registerIP:
   # Listening IP; 0.0.0.0 means both internal and external IPs are listened to, if blank, the internal network IP is automatically obtained by default
   listenIP: 0.0.0.0
-  # Listening ports; if multiple are configured, multiple instances will be launched, and must be consistent with the number of prometheus.ports
+  # autoSetPorts indicates whether to automatically set the ports
+  autoSetPorts: true
+  # List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 10320 ]

 prometheus:
   # Whether to enable prometheus
   enable: true
   # Prometheus listening ports, must be consistent with the number of rpc.ports
+  # It will only take effect when autoSetPorts is set to false.
   ports: [ 12320 ]
+82 -49
View File
@@ -8,7 +8,7 @@ global:
alerting: alerting:
alertmanagers: alertmanagers:
- static_configs: - static_configs:
- targets: [internal_ip:19093] - targets: [127.0.0.1:19093]
# Load rules once and periodically evaluate them according to the global evaluation_interval. # Load rules once and periodically evaluate them according to the global evaluation_interval.
rule_files: rule_files:
@@ -25,62 +25,95 @@ scrape_configs:
# prometheus fetches application services # prometheus fetches application services
- job_name: node_exporter - job_name: node_exporter
static_configs: static_configs:
- targets: [ internal_ip:20500 ] - targets: [ 127.0.0.1:19100 ]
- job_name: openimserver-openim-api - job_name: openimserver-openim-api
static_configs: http_sd_configs:
- targets: [ internal_ip:12002 ] - url: "http://127.0.0.1:10002/prometheus_discovery/api"
labels: # static_configs:
-        namespace: default
+  #     - targets: [ 127.0.0.1:12002 ]
+  #       labels:
+  #         namespace: default
-  - job_name: openimserver-openim-msggateway
-    static_configs:
-      - targets: [ internal_ip:12140 ]
-      # - targets: [ internal_ip:12140, internal_ip:12141, internal_ip:12142, internal_ip:12143, internal_ip:12144, internal_ip:12145, internal_ip:12146, internal_ip:12147, internal_ip:12148, internal_ip:12149, internal_ip:12150, internal_ip:12151, internal_ip:12152, internal_ip:12153, internal_ip:12154, internal_ip:12155 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-msggateway
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/msg_gateway"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12140 ]
+    # # - targets: [ 127.0.0.1:12140, 127.0.0.1:12141, 127.0.0.1:12142, 127.0.0.1:12143, 127.0.0.1:12144, 127.0.0.1:12145, 127.0.0.1:12146, 127.0.0.1:12147, 127.0.0.1:12148, 127.0.0.1:12149, 127.0.0.1:12150, 127.0.0.1:12151, 127.0.0.1:12152, 127.0.0.1:12153, 127.0.0.1:12154, 127.0.0.1:12155 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-msgtransfer
-    static_configs:
-      - targets: [ internal_ip:12020, internal_ip:12021, internal_ip:12022, internal_ip:12023, internal_ip:12024, internal_ip:12025, internal_ip:12026, internal_ip:12027 ]
-      # - targets: [ internal_ip:12020, internal_ip:12021, internal_ip:12022, internal_ip:12023, internal_ip:12024, internal_ip:12025, internal_ip:12026, internal_ip:12027, internal_ip:12028, internal_ip:12029, internal_ip:12030, internal_ip:12031, internal_ip:12032, internal_ip:12033, internal_ip:12034, internal_ip:12035 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-msgtransfer
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/msg_transfer"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12020, 127.0.0.1:12021, 127.0.0.1:12022, 127.0.0.1:12023, 127.0.0.1:12024, 127.0.0.1:12025, 127.0.0.1:12026, 127.0.0.1:12027 ]
+    # # - targets: [ 127.0.0.1:12020, 127.0.0.1:12021, 127.0.0.1:12022, 127.0.0.1:12023, 127.0.0.1:12024, 127.0.0.1:12025, 127.0.0.1:12026, 127.0.0.1:12027, 127.0.0.1:12028, 127.0.0.1:12029, 127.0.0.1:12030, 127.0.0.1:12031, 127.0.0.1:12032, 127.0.0.1:12033, 127.0.0.1:12034, 127.0.0.1:12035 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-push
-    static_configs:
-      - targets: [ internal_ip:12170, internal_ip:12171, internal_ip:12172, internal_ip:12173, internal_ip:12174, internal_ip:12175, internal_ip:12176, internal_ip:12177 ]
-      # - targets: [ internal_ip:12170, internal_ip:12171, internal_ip:12172, internal_ip:12173, internal_ip:12174, internal_ip:12175, internal_ip:12176, internal_ip:12177, internal_ip:12178, internal_ip:12179, internal_ip:12180, internal_ip:12182, internal_ip:12183, internal_ip:12184, internal_ip:12185, internal_ip:12186 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-push
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/push"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12170, 127.0.0.1:12171, 127.0.0.1:12172, 127.0.0.1:12173, 127.0.0.1:12174, 127.0.0.1:12175, 127.0.0.1:12176, 127.0.0.1:12177 ]
+    # # - targets: [ 127.0.0.1:12170, 127.0.0.1:12171, 127.0.0.1:12172, 127.0.0.1:12173, 127.0.0.1:12174, 127.0.0.1:12175, 127.0.0.1:12176, 127.0.0.1:12177, 127.0.0.1:12178, 127.0.0.1:12179, 127.0.0.1:12180, 127.0.0.1:12182, 127.0.0.1:12183, 127.0.0.1:12184, 127.0.0.1:12185, 127.0.0.1:12186 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-rpc-auth
-    static_configs:
-      - targets: [ internal_ip:12200 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-rpc-auth
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/auth"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12200 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-rpc-conversation
-    static_configs:
-      - targets: [ internal_ip:12220 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-rpc-conversation
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/conversation"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12220 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-rpc-friend
-    static_configs:
-      - targets: [ internal_ip:12240 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-rpc-friend
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/friend"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12240 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-rpc-group
-    static_configs:
-      - targets: [ internal_ip:12260 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-rpc-group
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/group"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12260 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-rpc-msg
-    static_configs:
-      - targets: [ internal_ip:12280 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-rpc-msg
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/msg"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12280 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-rpc-third
-    static_configs:
-      - targets: [ internal_ip:12300 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-rpc-third
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/third"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12300 ]
+    #     labels:
+    #       namespace: default
-  - job_name: openimserver-openim-rpc-user
-    static_configs:
-      - targets: [ internal_ip:12320 ]
-        labels:
-          namespace: default
+  - job_name: openimserver-openim-rpc-user
+    http_sd_configs:
+      - url: "http://127.0.0.1:10002/prometheus_discovery/user"
+    # static_configs:
+    #   - targets: [ 127.0.0.1:12320 ]
+    #     labels:
+    #       namespace: default
+151 -138
@@ -1,175 +1,188 @@
-# OpenIM Application Containerization Deployment Guide
-
-OpenIM supports a variety of cluster deployment methods, including but not limited to `helm`, `sealos`, `kustomize`
-
-Various contributors, as well as previous official releases, have provided some referenceable solutions:
-
-+ [k8s-jenkins Repository](https://github.com/OpenIMSDK/k8s-jenkins)
-+ [open-im-server-k8s-deploy Repository](https://github.com/openimsdk/open-im-server-k8s-deploy)
-+ [openim-charts Repository](https://github.com/OpenIMSDK/openim-charts)
-+ [deploy-openim Repository](https://github.com/showurl/deploy-openim)
-
-### Dependency Check
-
-```bash
-Kubernetes: >= 1.16.0-0
-Helm: >= 3.0
-```
-
-### Minimum Configuration
-
-The recommended minimum configuration for a production environment is as follows:
-
-```yaml
-CPU: 4
-Memory: 8G
-Disk: 100G
-```
-
-## Configuration File Generation
-
-We have automated all the files, making the generation of configuration files optional for OpenIM. However, if you desire custom configurations, you can follow the steps below:
-
-```bash
-$ make init
-# Alternatively, use script:
-# ./scripts/init-config.sh
-```
-
-At this point, configuration files will be generated under `deployments/openim/config`, which you can modify as per your requirements.
-
-## Cluster Setup
-
-If you already have a `kubernetes` cluster, or if you wish to build a `kubernetes` cluster from scratch, you can skip this step.
-
-For a quick start, I used [sealos](https://github.com/labring/sealos) to rapidly set up the cluster, with sealos also being a wrapper for kubeadm at its core:
-
-```bash
-$ SEALOS_VERSION=`curl -s https://api.github.com/repos/labring/sealos/releases/latest | grep -oE '"tag_name": "[^"]+"' | head -n1 | cut -d'"' -f4` && \
-curl -sfL https://raw.githubusercontent.com/labring/sealos/${SEALOS_VERSION}/scripts/install.sh |
-sh -s ${SEALOS_VERSION} labring/sealos
-```
-
-**Supported Versions:**
-
-+ docker: `labring/kubernetes-docker`:(v1.24.0~v1.27.0)
-+ containerd: `labring/kubernetes`:(v1.24.0~v1.27.0)
-
-#### Cluster Installation:
-
-Cluster details are as follows:
-
-| Hostname | IP Address | System Info |
-| -------- | ---------- | ------------------------------------------------------------ |
-| master01 | 10.0.0.9 | `Linux VM-0-9-ubuntu 5.15.0-76-generic #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux` |
-| node01 | 10.0.0.4 | Similar to master01 |
-| node02 | 10.0.0.10 | Similar to master01 |
-
-```bash
-$ export CLUSTER_USERNAME=ubuntu
-$ export CLUSTER_PASSWORD=123456
-$ sudo sealos run labring/kubernetes:v1.25.0 labring/helm:v3.8.2 labring/calico:v3.24.1 \
-    --masters 10.0.0.9 \
-    --nodes 10.0.0.4,10.0.0.10 \
-    -u "$CLUSTER_USERNAME" \
-    -p "$CLUSTER_PASSWORD"
-```
-
-> **Note** Uninstallation method: using `kubeadm` for uninstallation does not remove `etcd` and `cni` related configurations. Manual clearance or using `sealos` for uninstallation is needed.
->
-> ```bash
-> $ sealos reset
-> ```
-
-If you are local, you can also use Kind and Minikube to test, for example, using Kind:
-
-```bash
-$ GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1
-$ kind create cluster
-```
-
-### Installing helm
-
-Helm simplifies the deployment and management of Kubernetes applications to a large extent by offering version control and release management through packaging.
-
-**Using Script:**
-
-```bash
-$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
-```
-
-**Adding Repository:**
-
-```bash
-$ helm repo add brigade https://openimsdk.github.io/openim-charts
-```
-
-### OpenIM Image Strategy
-
-Automated offerings include aliyun, ghcr, docker hub: [Image Documentation](https://github.com/openimsdk/open-im-server/blob/main/docs/contrib/images.md)
-
-**Local Test Build Method:**
-
-```bash
-$ make image
-```
-
-> This command assists in quickly building the required images locally. For a detailed build strategy, refer to the [Build Documentation](https://github.com/openimsdk/open-im-server/blob/main/build/README.md).
-
-## Installation
-
-Explore our Helm-Charts repository and read through: [Helm-Charts Repository](https://github.com/openimsdk/helm-charts)
-
-Using the helm charts repository, you can ignore the following configuration, but if you want to just use the server and scale on top of it, you can go ahead:
-
-**Use the Helm template to generate the deployment yaml file: `openim-charts.yaml`**
-
-**Gen Image:**
-
-```bash
-../scripts/genconfig.sh ../scripts/install/environment.sh ./templates/helm-image.yaml > ./charts/generated-configs/helm-image.yaml
-```
-
-**Gen Charts:**
-
-```bash
-for chart in ./charts/*/; do
-  if [[ "$chart" == *"generated-configs"* || "$chart" == *"helmfile.yaml"* ]]; then
-    continue
-  fi
-  if [ -f "${chart}values.yaml" ]; then
-    helm template "$chart" -f "./charts/generated-configs/helm-image.yaml" -f "./charts/generated-configs/config.yaml" -f "./charts/generated-configs/notification.yaml" >> openim-charts.yaml
-  else
-    helm template "$chart" >> openim-charts.yaml
-  fi
-done
-```
-
-**Use Helmfile:**
-
-```bash
-GO111MODULE=on go get github.com/roboll/helmfile@latest
-```
-
-```bash
-export MONGO_ADDRESS=im-mongo
-export MONGO_PORT=27017
-export REDIS_ADDRESS=im-redis-master
-export REDIS_PORT=6379
-export KAFKA_ADDRESS=im-kafka
-export KAFKA_PORT=9092
-export OBJECT_APIURL="https://openim.server.com/api"
-export MINIO_ENDPOINT="http://im-minio:9000"
-export MINIO_SIGN_ENDPOINT="https://openim.server.com/im-minio-api"
-
-mkdir ./charts/generated-configs
-../scripts/genconfig.sh ../scripts/install/environment.sh ./templates/config.yaml > ./charts/generated-configs/config.yaml
-cp ../config/notification.yaml ./charts/generated-configs/notification.yaml
-../scripts/genconfig.sh ../scripts/install/environment.sh ./templates/helm-image.yaml > ./charts/generated-configs/helm-image.yaml
-```
-
-```bash
-helmfile apply
-```
+# Kubernetes Deployment
+
+## Resource Requests
+
+- CPU: 2 cores
+- Memory: 4 GiB
+- Disk usage: 20 GiB (on Node)
+
+## Preconditions
+
+Ensure that you have already deployed the following components:
+
+- Redis
+- MongoDB
+- Kafka
+- MinIO
+
+## Origin Deploy
+
+### Enter the target dir
+
+`cd ./deployments/deploy/`
+
+### Deploy configs and dependencies
+
+Update your ConfigMap `openim-config.yml`. **You can check the official docs for more details.**
+
+In `openim-config.yml`, you need to modify the following configurations:
+
+**discovery.yml**
+
+- `kubernetes.namespace`: defaults to `default`; change it to your namespace if needed.
+
+**mongodb.yml**
+
+- `address`: set to your existing MongoDB address, or to the Mongo Service name and port in your deployment.
+- `database`: set to your MongoDB database name. (The database must already exist.)
+- `authSource`: set to your MongoDB authSource. (authSource specifies the database associated with the user's credentials; the user must be created in that database.)
+
+**kafka.yml**
+
+- `address`: set to your existing Kafka address, or to the Kafka Service name and port in your deployment.
+
+**redis.yml**
+
+- `address`: set to your existing Redis address, or to the Redis Service name and port in your deployment.
+
+**minio.yml**
+
+- `internalAddress`: set to the MinIO Service name and port in your deployment.
+- `externalAddress`: set to your already-exposed external MinIO address.
+
+### Set the secrets
+
+A Secret is an object that contains a small amount of sensitive data, such as a password or token. Secrets are similar to ConfigMaps.
+
+#### Redis:
+
+Update the `redis-password` value in `redis-secret.yml` to your Redis password encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: openim-redis-secret
+type: Opaque
+data:
+  redis-password: b3BlbklNMTIz # update to your redis password encoded in base64; if it should be empty, set it to ""
+```
+
+#### Mongo:
+
+Update the `mongo_openim_username` and `mongo_openim_password` values in `mongo-secret.yml` to your Mongo username and password encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: openim-mongo-secret
+type: Opaque
+data:
+  mongo_openim_username: b3BlbklN # update to your mongo username encoded in base64; if it should be empty, set it to "" (these user credentials must exist in the authSource database).
+  mongo_openim_password: b3BlbklNMTIz # update to your mongo password encoded in base64; if it should be empty, set it to ""
+```
+
+#### Minio:
+
+Update the `minio-root-user` and `minio-root-password` values in `minio-secret.yml` to your MinIO accessKeyID and secretAccessKey encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: openim-minio-secret
+type: Opaque
+data:
+  minio-root-user: cm9vdA== # update to your minio accessKeyID encoded in base64; if it should be empty, set it to ""
+  minio-root-password: b3BlbklNMTIz # update to your minio secretAccessKey encoded in base64; if it should be empty, set it to ""
+```
+
+#### Kafka:
+
+Update the `kafka-password` value in `kafka-secret.yml` to your Kafka password encoded in base64.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: openim-kafka-secret
+type: Opaque
+data:
+  kafka-password: b3BlbklNMTIz # update to your kafka password encoded in base64; if it should be empty, set it to ""
+```
+
+### Apply the secrets
+
+```shell
+kubectl apply -f redis-secret.yml -f minio-secret.yml -f mongo-secret.yml -f kafka-secret.yml
+```
+
+### Apply all config
+
+`kubectl apply -f ./openim-config.yml`
+
+> Attention: If you use the `default` namespace, you can execute `clusterRole.yml` to create a cluster role binding for the default service account.
+>
+> The namespace is configured in `discovery.yml` inside `openim-config.yml`; change `kubernetes.namespace` to your namespace.
+
+**Execute `clusterRole.yml`**
+
+`kubectl apply -f ./clusterRole.yml`
+
+### Run all deployments and services
+
+> Note: Ensure that infrastructure services like MinIO, Redis, and Kafka are running before deploying the main applications.
+
+```bash
+kubectl apply \
+  -f openim-api-deployment.yml \
+  -f openim-api-service.yml \
+  -f openim-crontask-deployment.yml \
+  -f openim-rpc-user-deployment.yml \
+  -f openim-rpc-user-service.yml \
+  -f openim-msggateway-deployment.yml \
+  -f openim-msggateway-service.yml \
+  -f openim-push-deployment.yml \
+  -f openim-push-service.yml \
+  -f openim-msgtransfer-service.yml \
+  -f openim-msgtransfer-deployment.yml \
+  -f openim-rpc-conversation-deployment.yml \
+  -f openim-rpc-conversation-service.yml \
+  -f openim-rpc-auth-deployment.yml \
+  -f openim-rpc-auth-service.yml \
+  -f openim-rpc-group-deployment.yml \
+  -f openim-rpc-group-service.yml \
+  -f openim-rpc-friend-deployment.yml \
+  -f openim-rpc-friend-service.yml \
+  -f openim-rpc-msg-deployment.yml \
+  -f openim-rpc-msg-service.yml \
+  -f openim-rpc-third-deployment.yml \
+  -f openim-rpc-third-service.yml
+```
+
+### Verification
+
+After deploying the services, verify that everything is running smoothly:
+
+```bash
+# Check the status of all pods
+kubectl get pods
+
+# Check the status of services
+kubectl get svc
+
+# Check the status of deployments
+kubectl get deployments
+
+# View all resources
+kubectl get all
+```
+
+### Clean all
+
+`kubectl delete -f ./`
+
+### Notes:
+
+- If you use a specific namespace for your deployment, be sure to append the `-n <namespace>` flag to your kubectl commands.
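All secret values in the manifests above are base64-encoded. A minimal sketch for producing and checking them (the `-n` matters, since an encoded trailing newline will break authentication):

```bash
# Encode a credential for the *-secret.yml files; -n keeps the newline out of the value.
echo -n 'openIM123' | base64      # prints: b3BlbklNMTIz
# Decode an existing value to verify what it contains.
echo 'b3BlbklNMTIz' | base64 -d   # prints: openIM123
```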
+7
@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-kafka-secret
type: Opaque
data:
kafka-password: ""
+8
@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-minio-secret
type: Opaque
data:
minio-root-user: cm9vdA== # Base64 encoded "root"
minio-root-password: b3BlbklNMTIz # Base64 encoded "openIM123"
+79
@@ -0,0 +1,79 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: minio
labels:
app: minio
spec:
replicas: 2
selector:
matchLabels:
app: minio
template:
metadata:
labels:
app: minio
spec:
containers:
- name: minio
image: minio/minio:RELEASE.2024-01-11T07-46-16Z
ports:
- containerPort: 9000 # MinIO service port
- containerPort: 9090 # MinIO console port
volumeMounts:
- name: minio-data
mountPath: /data
- name: minio-config
mountPath: /root/.minio
env:
- name: TZ
value: "Asia/Shanghai"
- name: MINIO_ROOT_USER
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-user
- name: MINIO_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-password
command:
- "/bin/sh"
- "-c"
- |
mkdir -p /data && \
minio server /data --console-address ":9090"
volumes:
- name: minio-data
persistentVolumeClaim:
claimName: minio-pvc
- name: minio-config
persistentVolumeClaim:
claimName: minio-config-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio-config-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
+8
@@ -0,0 +1,8 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-mongo-secret
type: Opaque
data:
mongo_openim_username: b3BlbklN # base64 for "openIM"; these user credentials must exist in the authSource database.
mongo_openim_password: b3BlbklNMTIz # base64 for "openIM123"
+108
@@ -0,0 +1,108 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo-statefulset
spec:
serviceName: "mongo"
replicas: 2
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:7.0
command: ["/bin/bash", "-c"]
args:
- >
docker-entrypoint.sh mongod --wiredTigerCacheSizeGB ${wiredTigerCacheSizeGB} --auth &
until mongosh -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD} --authenticationDatabase admin --eval "db.runCommand({ ping: 1 })" &>/dev/null; do
echo "Waiting for MongoDB to start...";
sleep 1;
done &&
mongosh -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD} --authenticationDatabase admin --eval "
db = db.getSiblingDB(\"${MONGO_INITDB_DATABASE}\");
if (!db.getUser(\"${MONGO_OPENIM_USERNAME}\")) {
db.createUser({
user: \"${MONGO_OPENIM_USERNAME}\",
pwd: \"${MONGO_OPENIM_PASSWORD}\",
roles: [{role: \"readWrite\", db: \"${MONGO_INITDB_DATABASE}\"}]
});
print(\"User created successfully: \");
print(\"Username: ${MONGO_OPENIM_USERNAME}\");
print(\"Password: ${MONGO_OPENIM_PASSWORD}\");
print(\"Database: ${MONGO_INITDB_DATABASE}\");
} else {
print(\"User already exists in database: ${MONGO_INITDB_DATABASE}, Username: ${MONGO_OPENIM_USERNAME}\");
}
" &&
tail -f /dev/null
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_initdb_root_username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_initdb_root_password
- name: MONGO_INITDB_DATABASE
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_initdb_database
- name: MONGO_OPENIM_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_openim_username
- name: MONGO_OPENIM_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_openim_password
- name: TZ
value: "Asia/Shanghai"
- name: wiredTigerCacheSizeGB
value: "1"
volumeMounts:
- name: mongo-storage
mountPath: /data/db
volumes:
- name: mongo-storage
persistentVolumeClaim:
claimName: mongo-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Secret
metadata:
name: openim-mongo-init-secret
type: Opaque
data:
mongo_initdb_root_username: cm9vdA== # base64 for "root"
mongo_initdb_root_password: b3BlbklNMTIz # base64 for "openIM123"
mongo_initdb_database: b3BlbmltX3Yz # base64 for "openim_v3"
mongo_openim_username: b3BlbklN # base64 for "openIM"
mongo_openim_password: b3BlbklNMTIz # base64 for "openIM123"
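To confirm the bootstrap script above actually created the application user, a quick check is the following sketch (pod, user, and database names taken from this StatefulSet and its init secret):

```bash
# Authenticate as the created openIM user against its authSource database (openim_v3).
kubectl exec -it mongo-statefulset-0 -- mongosh -u openIM -p openIM123 \
  --authenticationDatabase openim_v3 --eval 'db.runCommand({ ping: 1 })'
```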
@@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: openim-api
spec:
replicas: 2
selector:
matchLabels:
app: openim-api
template:
metadata:
labels:
app: openim-api
spec:
containers:
- name: openim-api-container
image: openim/openim-api:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10002
- containerPort: 12002
volumes:
- name: openim-config
configMap:
name: openim-config
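The `IMENV_*` variables injected above are meant to override the matching credentials in the mounted `openim-config`; a quick way to confirm they reached the container (deployment name taken from the manifest above) is:

```bash
# List the secret-backed overrides visible inside a running openim-api pod.
kubectl exec deploy/openim-api -- env | grep '^IMENV_'
```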
File diff suppressed because it is too large
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: messagegateway-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: messagegateway-rpc-server
template:
metadata:
labels:
app: messagegateway-rpc-server
spec:
containers:
- name: openim-msggateway-container
image: openim/openim-msggateway:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10140
- containerPort: 12001
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,50 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: openim-msgtransfer-server
spec:
replicas: 2
selector:
matchLabels:
app: openim-msgtransfer-server
template:
metadata:
labels:
app: openim-msgtransfer-server
spec:
containers:
- name: openim-msgtransfer-container
image: openim/openim-msgtransfer:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 12020
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,41 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: push-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: push-rpc-server
template:
metadata:
labels:
app: push-rpc-server
spec:
containers:
- name: push-rpc-server-container
image: openim/openim-push:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10170
- containerPort: 12170
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,37 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: auth-rpc-server
template:
metadata:
labels:
app: auth-rpc-server
spec:
containers:
- name: auth-rpc-server-container
image: openim/openim-rpc-auth:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10200
- containerPort: 12200
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,46 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: conversation-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: conversation-rpc-server
template:
metadata:
labels:
app: conversation-rpc-server
spec:
containers:
- name: conversation-rpc-server-container
image: openim/openim-rpc-conversation:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10220
- containerPort: 12220
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,46 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: friend-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: friend-rpc-server
template:
metadata:
labels:
app: friend-rpc-server
spec:
containers:
- name: friend-rpc-server-container
image: openim/openim-rpc-friend:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10240
- containerPort: 12240
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,46 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: group-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: group-rpc-server
template:
metadata:
labels:
app: group-rpc-server
spec:
containers:
- name: group-rpc-server-container
image: openim/openim-rpc-group:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10260
- containerPort: 12260
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,51 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: msg-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: msg-rpc-server
template:
metadata:
labels:
app: msg-rpc-server
spec:
containers:
- name: msg-rpc-server-container
image: openim/openim-rpc-msg:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10280
- containerPort: 12280
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,56 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: third-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: third-rpc-server
template:
metadata:
labels:
app: third-rpc-server
spec:
containers:
- name: third-rpc-server-container
image: openim/openim-rpc-third:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_MINIO_ACCESSKEYID
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-user
- name: IMENV_MINIO_SECRETACCESSKEY
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-password
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10300
- containerPort: 12300
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -0,0 +1,51 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: user-rpc-server
template:
metadata:
labels:
app: user-rpc-server
spec:
containers:
- name: user-rpc-server-container
image: openim/openim-rpc-user:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10320
- containerPort: 12320
volumes:
- name: openim-config
configMap:
name: openim-config
+7
@@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-redis-secret
type: Opaque
data:
redis-password: b3BlbklNMTIz # "openIM123" in base64
+55
@@ -0,0 +1,55 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-statefulset
spec:
serviceName: "redis"
replicas: 2
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7.0.0
ports:
- containerPort: 6379
env:
- name: TZ
value: "Asia/Shanghai"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
volumeMounts:
- name: redis-data
mountPath: /data
command:
[
"/bin/sh",
"-c",
'redis-server --requirepass "$REDIS_PASSWORD" --appendonly yes',
]
volumes:
- name: redis-config-volume
configMap:
name: openim-config
- name: redis-data
persistentVolumeClaim:
claimName: redis-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
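A quick liveness check against this Redis instance (pod name derived from the StatefulSet above, password from `openim-redis-secret`):

```bash
# PONG confirms the requirepass-protected server accepted the password.
kubectl exec -it redis-statefulset-0 -- redis-cli -a openIM123 ping
```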
+65 -46
@@ -146,49 +146,68 @@ services:
     networks:
       - openim
-#  prometheus:
-#    image: ${PROMETHEUS_IMAGE}
-#    container_name: prometheus
-#    restart: always
-#    user: root
-#    volumes:
-#      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
-#      - ./config/instance-down-rules.yml:/etc/prometheus/instance-down-rules.yml
-#      - ${DATA_DIR}/components/prometheus/data:/prometheus
-#    command:
-#      - '--config.file=/etc/prometheus/prometheus.yml'
-#      - '--storage.tsdb.path=/prometheus'
-#    ports:
-#      - "19091:9090"
-#    networks:
-#      - openim
-#
-#  alertmanager:
-#    image: ${ALERTMANAGER_IMAGE}
-#    container_name: alertmanager
-#    restart: always
-#    volumes:
-#      - ./config/alertmanager.yml:/etc/alertmanager/alertmanager.yml
-#      - ./config/email.tmpl:/etc/alertmanager/email.tmpl
-#    ports:
-#      - "19093:9093"
-#    networks:
-#      - openim
-#
-#  grafana:
-#    image: ${GRAFANA_IMAGE}
-#    container_name: grafana
-#    user: root
-#    restart: always
-#    environment:
-#      - GF_SECURITY_ALLOW_EMBEDDING=true
-#      - GF_SESSION_COOKIE_SAMESITE=none
-#      - GF_SESSION_COOKIE_SECURE=true
-#      - GF_AUTH_ANONYMOUS_ENABLED=true
-#      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
-#    ports:
-#      - "13000:3000"
-#    volumes:
-#      - ${DATA_DIR:-./}/components/grafana:/var/lib/grafana
-#    networks:
-#      - openim
+  prometheus:
+    image: ${PROMETHEUS_IMAGE}
+    container_name: prometheus
+    restart: always
+    user: root
+    profiles:
+      - m
+    volumes:
+      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
+      - ./config/instance-down-rules.yml:/etc/prometheus/instance-down-rules.yml
+      - ${DATA_DIR}/components/prometheus/data:/prometheus
+    command:
+      - '--config.file=/etc/prometheus/prometheus.yml'
+      - '--storage.tsdb.path=/prometheus'
+      - '--web.listen-address=:${PROMETHEUS_PORT}'
+    network_mode: host
+
+  alertmanager:
+    image: ${ALERTMANAGER_IMAGE}
+    container_name: alertmanager
+    restart: always
+    profiles:
+      - m
+    volumes:
+      - ./config/alertmanager.yml:/etc/alertmanager/alertmanager.yml
+      - ./config/email.tmpl:/etc/alertmanager/email.tmpl
+    command:
+      - '--config.file=/etc/alertmanager/alertmanager.yml'
+      - '--web.listen-address=:${ALERTMANAGER_PORT}'
+    network_mode: host
+
+  grafana:
+    image: ${GRAFANA_IMAGE}
+    container_name: grafana
+    user: root
+    restart: always
+    profiles:
+      - m
+    environment:
+      - GF_SECURITY_ALLOW_EMBEDDING=true
+      - GF_SESSION_COOKIE_SAMESITE=none
+      - GF_SESSION_COOKIE_SECURE=true
+      - GF_AUTH_ANONYMOUS_ENABLED=true
+      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
+      - GF_SERVER_HTTP_PORT=${GRAFANA_PORT}
+    volumes:
+      - ${DATA_DIR:-./}/components/grafana:/var/lib/grafana
+    network_mode: host
+
+  node-exporter:
+    image: ${NODE_EXPORTER_IMAGE}
+    container_name: node-exporter
+    restart: always
+    profiles:
+      - m
+    volumes:
+      - /proc:/host/proc:ro
+      - /sys:/host/sys:ro
+      - /:/rootfs:ro
+    command:
+      - '--path.procfs=/host/proc'
+      - '--path.sysfs=/host/sys'
+      - '--path.rootfs=/rootfs'
+      - '--web.listen-address=:${NODE_EXPORTER_PORT}'
+    network_mode: host
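Because the monitoring services above are now gated behind the `m` profile, they no longer start by default. A minimal invocation that includes them (assuming Docker Compose v2) is:

```bash
# Start the core stack plus prometheus, alertmanager, grafana, and node-exporter.
docker compose --profile m up -d
# Without --profile m, the profiled monitoring services are skipped.
```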
+23 -25
@@ -1,8 +1,6 @@
 module github.com/openimsdk/open-im-server/v3

-go 1.22.0
+go 1.22.7

-toolchain go1.23.2
-
 require (
 	firebase.google.com/go/v4 v4.14.1
@@ -14,15 +12,15 @@ require (
 	github.com/gorilla/websocket v1.5.1
 	github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
 	github.com/mitchellh/mapstructure v1.5.0
-	github.com/openimsdk/protocol v0.0.72-alpha.54
-	github.com/openimsdk/tools v0.0.50-alpha.32
+	github.com/openimsdk/protocol v0.0.72-alpha.71
+	github.com/openimsdk/tools v0.0.50-alpha.67
 	github.com/pkg/errors v0.9.1 // indirect
 	github.com/prometheus/client_golang v1.18.0
 	github.com/stretchr/testify v1.9.0
 	go.mongodb.org/mongo-driver v1.14.0
 	google.golang.org/api v0.170.0
-	google.golang.org/grpc v1.66.2
-	google.golang.org/protobuf v1.34.2
+	google.golang.org/grpc v1.68.0
+	google.golang.org/protobuf v1.35.1
 	gopkg.in/yaml.v3 v3.0.1
 )
@@ -43,38 +41,38 @@ require (
 	github.com/shirou/gopsutil v3.21.11+incompatible
 	github.com/spf13/viper v1.18.2
 	github.com/stathat/consistent v1.0.0
+	go.etcd.io/etcd/client/v3 v3.5.13
 	go.uber.org/automaxprocs v1.5.3
-	golang.org/x/exp v0.0.0-20230905200255-921286631fa9
 	golang.org/x/sync v0.8.0
 )

 require (
 	cloud.google.com/go v0.112.1 // indirect
-	cloud.google.com/go/compute/metadata v0.3.0 // indirect
+	cloud.google.com/go/compute/metadata v0.5.0 // indirect
 	cloud.google.com/go/firestore v1.15.0 // indirect
 	cloud.google.com/go/iam v1.1.7 // indirect
 	cloud.google.com/go/longrunning v0.5.5 // indirect
 	cloud.google.com/go/storage v1.40.0 // indirect
 	github.com/MicahParks/keyfunc v1.9.0 // indirect
 	github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible // indirect
-	github.com/aws/aws-sdk-go-v2 v1.23.1 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.32.5 // indirect
 	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 // indirect
-	github.com/aws/aws-sdk-go-v2/config v1.25.4 // indirect
-	github.com/aws/aws-sdk-go-v2/credentials v1.16.3 // indirect
-	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 // indirect
+	github.com/aws/aws-sdk-go-v2/config v1.28.5 // indirect
+	github.com/aws/aws-sdk-go-v2/credentials v1.17.46 // indirect
+	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
 	github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 // indirect
 	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5 // indirect
 	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 // indirect
 	github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sso v1.17.3 // indirect
-	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sts v1.25.4 // indirect
-	github.com/aws/smithy-go v1.17.0 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sso v1.24.6 // indirect
+	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sts v1.33.1 // indirect
+	github.com/aws/smithy-go v1.22.1 // indirect
 	github.com/beorn7/perks v1.0.1 // indirect
 	github.com/bytedance/sonic v1.11.6 // indirect
 	github.com/bytedance/sonic/loader v0.1.1 // indirect
@@ -165,7 +163,6 @@ require (
 	github.com/yusufpapurcu/wmi v1.2.4 // indirect
 	go.etcd.io/etcd/api/v3 v3.5.13 // indirect
 	go.etcd.io/etcd/client/pkg/v3 v3.5.13 // indirect
-	go.etcd.io/etcd/client/v3 v3.5.13 // indirect
 	go.opencensus.io v0.24.0 // indirect
 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 // indirect
 	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
@@ -175,15 +172,16 @@ require (
 	go.uber.org/atomic v1.9.0 // indirect
 	go.uber.org/multierr v1.11.0 // indirect
 	golang.org/x/arch v0.7.0 // indirect
+	golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
 	golang.org/x/image v0.15.0 // indirect
 	golang.org/x/net v0.29.0 // indirect
-	golang.org/x/oauth2 v0.21.0 // indirect
+	golang.org/x/oauth2 v0.23.0 // indirect
 	golang.org/x/sys v0.25.0 // indirect
 	golang.org/x/text v0.18.0 // indirect
 	golang.org/x/time v0.5.0 // indirect
 	google.golang.org/appengine/v2 v2.0.2 // indirect
 	google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 // indirect
-	google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 // indirect
 	google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect
 	gorm.io/gorm v1.25.8 // indirect
 	stathat.com/c/consistent v1.0.0 // indirect
+40 -40
@@ -1,8 +1,8 @@
 cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
 cloud.google.com/go v0.112.1 h1:uJSeirPke5UNZHIb4SxfZklVSiWWVqW4oXlETwZziwM=
 cloud.google.com/go v0.112.1/go.mod h1:+Vbu+Y1UU+I1rjmzeMOb/8RfkKJK2Gyxi1X6jJCZLo4=
-cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc=
-cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
+cloud.google.com/go/compute/metadata v0.5.0 h1:Zr0eK8JbFv6+Wi4ilXAR8FJ3wyNdpxHKJNPos6LTZOY=
+cloud.google.com/go/compute/metadata v0.5.0/go.mod h1:aHnloV2TPI38yx4s9+wAZhHykWvVCfu7hQbF+9CWoiY=
 cloud.google.com/go/firestore v1.15.0 h1:/k8ppuWOtNuDHt2tsRV42yI21uaGnKDEQnRFeBpbFF8=
 cloud.google.com/go/firestore v1.15.0/go.mod h1:GWOxFXcv8GZUtYpWHw/w6IuYNux/BtmeVTMmjrm4yhk=
 cloud.google.com/go/iam v1.1.7 h1:z4VHOhwKLF/+UYXAJDFwGtNF0b6gjsW1Pk9Ml0U/IoM=
@@ -21,42 +21,42 @@ github.com/MicahParks/keyfunc v1.9.0/go.mod h1:IdnCilugA0O/99dW+/MkvlyrsX8+L8+x9
 github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM=
 github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g=
 github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
-github.com/aws/aws-sdk-go-v2 v1.23.1 h1:qXaFsOOMA+HsZtX8WoCa+gJnbyW7qyFFBlPqvTSzbaI=
-github.com/aws/aws-sdk-go-v2 v1.23.1/go.mod h1:i1XDttT4rnf6vxc9AuskLc6s7XBee8rlLilKlc03uAA=
+github.com/aws/aws-sdk-go-v2 v1.32.5 h1:U8vdWJuY7ruAkzaOdD7guwJjD06YSKmnKCJs7s3IkIo=
+github.com/aws/aws-sdk-go-v2 v1.32.5/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 h1:ZY3108YtBNq96jNZTICHxN1gSBSbnvIdYwwqnvCV4Mc=
 github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1/go.mod h1:t8PYl/6LzdAqsU4/9tz28V/kU+asFePvpOMkdul0gEQ=
-github.com/aws/aws-sdk-go-v2/config v1.25.4 h1:r+X1x8QI6FEPdJDWCNBDZHyAcyFwSjHN8q8uuus+Axs=
-github.com/aws/aws-sdk-go-v2/config v1.25.4/go.mod h1:8GTjImECskr7D88P/Nn9uM4M4rLY9i77hLJZgkZEWV8=
+github.com/aws/aws-sdk-go-v2/config v1.28.5 h1:Za41twdCXbuyyWv9LndXxZZv3QhTG1DinqlFsSuvtI0=
+github.com/aws/aws-sdk-go-v2/config v1.28.5/go.mod h1:4VsPbHP8JdcdUDmbTVgNL/8w9SqOkM5jyY8ljIxLO3o=
-github.com/aws/aws-sdk-go-v2/credentials v1.16.3 h1:8PeI2krzzjDJ5etmgaMiD1JswsrLrWvKKu/uBUtNy1g=
-github.com/aws/aws-sdk-go-v2/credentials v1.16.3/go.mod h1:Kdh/okh+//vQ/AjEt81CjvkTo64+/zIE4OewP7RpfXk=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.46 h1:AU7RcriIo2lXjUfHFnFKYsLCwgbz1E7Mm95ieIRDNUg=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.46/go.mod h1:1FmYyLGL08KQXQ6mcTlifyFXfJVCNJTVGuQP4m0d/UA=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5 h1:KehRNiVzIfAcj6gw98zotVbb/K67taJE0fkfgM6vzqU=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5/go.mod h1:VhnExhw6uXy9QzetvpXDolo1/hjhx4u9qukBGkuUwjs=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20 h1:sDSXIrlsFSFJtWKLQS4PUWRvrT580rrnuLydJrCQ/yA=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20/go.mod h1:WZ/c+w0ofps+/OUqMwWgnfrgzZH1DZO1RIkktICsqnY=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4 h1:LAm3Ycm9HJfbSCd5I+wqC2S9Ej7FPrgr5CQoOljJZcE=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4/go.mod h1:xEhvbJcyUf/31yfGSQBe01fukXwXJ0gxDp7rLfymWE0=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24 h1:4usbeaes3yJnCFC7kfeyhkdkPtoRYPa/hTmCqMpKpLI=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24/go.mod h1:5CI1JemjVwde8m2WG3cz23qHKPOxbpkq0HaoreEgLIY=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4 h1:4GV0kKZzUxiWxSVpn/9gwR0g21NF1Jsyduzo9rHgC/Q=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4/go.mod h1:dYvTNAggxDZy6y1AF7YDwXsPuHFy/VNEpEI/2dWK9IU=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24 h1:N1zsICrQglfzaBnrfM0Ys00860C+QFwu6u/5+LomP+o=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24/go.mod h1:dCn9HbJ8+K31i8IQ8EWmWj0EiIk0+vKiHNMxTTYveAg=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 h1:uR9lXYjdPX0xY+NhvaJ4dD8rpSRz5VY81ccIIoNG+lw=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
 github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 h1:40Q4X5ebZruRtknEZH/bg91sT5pR853F7/1X9QRbI54=
 github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4/go.mod h1:u77N7eEECzUv7F0xl2gcfK/vzc8wcjWobpy+DcrLJ5E=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 h1:rpkF4n0CyFcrJUG/rNNohoTmhtWlFTRI4BsZOh9PvLs=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1/go.mod h1:l9ymW25HOqymeU2m1gbUQ3rUIsTwKs8gYHXkqDQUhiI=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 h1:iXtILhvDxB6kPvEXgsDhGaZCSC6LQET5ZHSdJozeI0Y=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1/go.mod h1:9nu0fVANtYiAePIBh2/pFUSwtJ402hLnp854CNoDOeE=
 github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 h1:6DRKQc+9cChgzL5gplRGusI5dBGeiEod4m/pmGbcX48=
 github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4/go.mod h1:s8ORvrW4g4v7IvYKIAoBg17w3GQ+XuwXDXYrQ5SkzU0=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4 h1:rdovz3rEu0vZKbzoMYPTehp0E8veoE9AyfzqCr5Eeao=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4/go.mod h1:aYCGNjyUCUelhofxlZyj63srdxWUSsBSGg5l6MCuXuE=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5 h1:wtpJ4zcwrSbwhECWQoI/g6WM9zqCcSpHDJIWSbMLOu4=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5/go.mod h1:qu/W9HXQbbQ4+1+JcZp0ZNPV31ym537ZJN+fiS7Ti8E=
 github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 h1:o3DcfCxGDIT20pTbVKVhp3vWXOj/VvgazNJvumWeYW0=
 github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4/go.mod h1:Uy0KVOxuTK2ne+/PKQ+VvEeWmjMMksE17k/2RK/r5oM=
 github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1 h1:1w11lfXOa8HoHoSlNtt4mqv/N3HmDOa+OnUH3Y9DHm8=
 github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1/go.mod h1:dqJ5JBL0clzgHriH35Amx3LRFY6wNIPUX7QO/BerSBo=
-github.com/aws/aws-sdk-go-v2/service/sso v1.17.3 h1:CdsSOGlFF3Pn+koXOIpTtvX7st0IuGsZ8kJqcWMlX54=
-github.com/aws/aws-sdk-go-v2/service/sso v1.17.3/go.mod h1:oA6VjNsLll2eVuUoF2D+CMyORgNzPEW/3PyUdq6WQjI=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.6 h1:3zu537oLmsPfDMyjnUS2g+F2vITgy5pB74tHI+JBNoM=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.6/go.mod h1:WJSZH2ZvepM6t6jwu4w/Z45Eoi75lPN7DcydSRtJg6Y=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1 h1:cbRqFTVnJV+KRpwFl76GJdIZJKKCdTPnjUZ7uWh3pIU=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1/go.mod h1:hHL974p5auvXlZPIjJTblXJpbkfK4klBczlsEaMCGVY=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5 h1:K0OQAsDywb0ltlFrZm0JHPY3yZp/S9OaoLU33S7vPS8=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5/go.mod h1:ORITg+fyuMoeiQFiVGoqB3OydVTLkClw/ljbblMq6Cc=
-github.com/aws/aws-sdk-go-v2/service/sts v1.25.4 h1:yEvZ4neOQ/KpUqyR+X0ycUTW/kVRNR4nDZ38wStHGAA=
-github.com/aws/aws-sdk-go-v2/service/sts v1.25.4/go.mod h1:feTnm2Tk/pJxdX+eooEsxvlvTWBvDm6CasRZ+JOs2IY=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.1 h1:6SZUVRQNvExYlMLbHdlKB48x0fLbc2iVROyaNEwBHbU=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.1/go.mod h1:GqWyYCwLXnlUB1lOAXQyNSPqPLQJvmo8J0DWBzp9mtg=
-github.com/aws/smithy-go v1.17.0 h1:wWJD7LX6PBV6etBUwO0zElG0nWN9rUhp0WdYeHSHAaI=
-github.com/aws/smithy-go v1.17.0/go.mod h1:NukqUGpCZIILqqiV0NIjeFh24kd/FAa4beRb6nbIUPE=
+github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
+github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
 github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
 github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
 github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -319,10 +319,10 @@ github.com/onsi/gomega v1.25.0 h1:Vw7br2PCDYijJHSfBOWhov+8cAnUf8MfMaIOV323l6Y=
 github.com/onsi/gomega v1.25.0/go.mod h1:r+zV744Re+DiYCIPRlYOTxn0YkOLcAnW8k1xXdMPGhM=
 github.com/openimsdk/gomake v0.0.14-alpha.5 h1:VY9c5x515lTfmdhhPjMvR3BBRrRquAUCFsz7t7vbv7Y=
 github.com/openimsdk/gomake v0.0.14-alpha.5/go.mod h1:PndCozNc2IsQIciyn9mvEblYWZwJmAI+06z94EY+csI=
-github.com/openimsdk/protocol v0.0.72-alpha.54 h1:opato7N4QjjRq/SHD54bDSVBpOEEDp1VLWVk5Os2A9s=
-github.com/openimsdk/protocol v0.0.72-alpha.54/go.mod h1:OZQA9FR55lseYoN2Ql1XAHYKHJGu7OMNkUbuekrKCM8=
+github.com/openimsdk/protocol v0.0.72-alpha.71 h1:R3utzOlqepaJWTAmnfJi4ccUM/XIoFasSyjQMOipM70=
+github.com/openimsdk/protocol v0.0.72-alpha.71/go.mod h1:WF7EuE55vQvpyUAzDXcqg+B+446xQyEba0X35lTINmw=
-github.com/openimsdk/tools v0.0.50-alpha.32 h1:JEsUFHFnaYg230TG+Ke3SUnaA2h44t4kABAzEdv5VZw=
-github.com/openimsdk/tools v0.0.50-alpha.32/go.mod h1:r5U6RbxcR4xhKb2fhTmKGC9Yt5LcErHBVt3lhXQIHSo=
+github.com/openimsdk/tools v0.0.50-alpha.67 h1:K7kguqvPbjldHAi7pGhcG2ERkctCqG9ZFlteT7UKaxM=
+github.com/openimsdk/tools v0.0.50-alpha.67/go.mod h1:B+oqV0zdewN7OiEHYJm+hW+8/Te7B8tHHgD8rK5ZLZk=
 github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
 github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
 github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
@@ -497,8 +497,8 @@ golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
 golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo=
 golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs=
-golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
+golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
+golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
 golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -565,8 +565,8 @@ google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98
 google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
 google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 h1:9+tzLLstTlPTRyJTh+ah5wIMsBW5c4tQwGTN3thOW9Y=
 google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9/go.mod h1:mqHbVIp48Muh7Ywss/AD6I5kNVKZMmAa/QEW58Gxp2s=
-google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117 h1:+rdxYoE3E5htTEWIe15GlN6IfvbURM//Jt0mmkmm6ZU=
-google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117/go.mod h1:OimBR/bc1wPO9iV4NC2bpyjy3VnAwZh5EBPQdtaE5oo=
+google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 h1:hjSy6tcFQZ171igDaN5QHOw2n6vx40juYbC/x67CEhc=
+google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:qpvKtACPCQhAdu3PyQgV4l3LMXZEtft7y8QcarRsp9I=
 google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 h1:pPJltXNxVzT4pK9yD8vR9X75DaWYYmLGMsEvBfFQZzQ=
 google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
 google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -574,8 +574,8 @@ google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyac
 google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
 google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
 google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
-google.golang.org/grpc v1.66.2 h1:3QdXkuq3Bkh7w+ywLdLvM56cmGvQHUMZpiCzt6Rqaoo=
-google.golang.org/grpc v1.66.2/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y=
+google.golang.org/grpc v1.68.0 h1:aHQeeJbo8zAkAa3pRzrVjZlbz6uSfeOXlJNQM0RAbz0=
+google.golang.org/grpc v1.68.0/go.mod h1:fmSPC5AsjSBCK54MyHRx48kpOti1/jRfOlwEWywNjWA=
 google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
 google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
 google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -585,8 +585,8 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
 google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
 google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
 google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
-google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
-google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
+google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
+google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+9 -8
@@ -16,29 +16,30 @@ package api
import ( import (
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/auth" "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/tools/a2r" "github.com/openimsdk/tools/a2r"
) )
type AuthApi rpcclient.Auth type AuthApi struct {
Client auth.AuthClient
}
func NewAuthApi(client rpcclient.Auth) AuthApi { func NewAuthApi(client auth.AuthClient) AuthApi {
return AuthApi(client) return AuthApi{client}
} }
func (o *AuthApi) GetAdminToken(c *gin.Context) { func (o *AuthApi) GetAdminToken(c *gin.Context) {
a2r.Call(auth.AuthClient.GetAdminToken, o.Client, c) a2r.Call(c, auth.AuthClient.GetAdminToken, o.Client)
} }
func (o *AuthApi) GetUserToken(c *gin.Context) { func (o *AuthApi) GetUserToken(c *gin.Context) {
a2r.Call(auth.AuthClient.GetUserToken, o.Client, c) a2r.Call(c, auth.AuthClient.GetUserToken, o.Client)
} }
func (o *AuthApi) ParseToken(c *gin.Context) { func (o *AuthApi) ParseToken(c *gin.Context) {
a2r.Call(auth.AuthClient.ParseToken, o.Client, c) a2r.Call(c, auth.AuthClient.ParseToken, o.Client)
} }
func (o *AuthApi) ForceLogout(c *gin.Context) { func (o *AuthApi) ForceLogout(c *gin.Context) {
a2r.Call(auth.AuthClient.ForceLogout, o.Client, c) a2r.Call(c, auth.AuthClient.ForceLogout, o.Client)
} }
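A note on the pattern above, since it repeats across every API file in this diff: the handler type changes from a defined alias of the rpcclient wrapper to a plain struct holding the generated gRPC client, and a2r.Call moves the *gin.Context to the first argument. A simplified, hypothetical sketch of such a generic adapter is below; the real implementation lives in github.com/openimsdk/tools/a2r and differs in its option handling and response envelope.

package a2rsketch

import (
	"context"
	"net/http"

	"github.com/gin-gonic/gin"
	"google.golang.org/grpc"
)

// Call is an illustrative stand-in for a2r.Call: bind the JSON body to the
// request type inferred from the method expression, invoke the RPC on the
// supplied client, and write the result back as JSON.
func Call[C, Req, Resp any](
	c *gin.Context,
	rpc func(client C, ctx context.Context, req *Req, opts ...grpc.CallOption) (*Resp, error),
	client C,
) {
	var req Req
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"errMsg": err.Error()})
		return
	}
	resp, err := rpc(client, c, &req) // *gin.Context satisfies context.Context
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"errMsg": err.Error()})
		return
	}
	c.JSON(http.StatusOK, gin.H{"errCode": 0, "data": resp})
}

Because the RPC is passed as a method expression (auth.AuthClient.GetAdminToken), the client interface, request, and response types are all inferred, which is what lets every handler body shrink to a single line.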
+313
@@ -0,0 +1,313 @@
package api
//
//import (
// "encoding/json"
// "reflect"
// "strconv"
// "time"
//
// "github.com/gin-gonic/gin"
// "github.com/openimsdk/open-im-server/v3/pkg/apistruct"
// "github.com/openimsdk/open-im-server/v3/pkg/authverify"
// "github.com/openimsdk/open-im-server/v3/pkg/common/config"
// "github.com/openimsdk/open-im-server/v3/pkg/common/discovery/etcd"
// "github.com/openimsdk/open-im-server/v3/version"
// "github.com/openimsdk/tools/apiresp"
// "github.com/openimsdk/tools/errs"
// "github.com/openimsdk/tools/log"
// "github.com/openimsdk/tools/utils/runtimeenv"
// clientv3 "go.etcd.io/etcd/client/v3"
//)
//
//const (
// // wait for Restart http call return
// waitHttp = time.Millisecond * 200
//)
//
//type ConfigManager struct {
// imAdminUserID []string
// config *config.AllConfig
// client *clientv3.Client
//
// configPath string
// runtimeEnv string
//}
//
//func NewConfigManager(IMAdminUserID []string, cfg *config.AllConfig, client *clientv3.Client, configPath string, runtimeEnv string) *ConfigManager {
// cm := &ConfigManager{
// imAdminUserID: IMAdminUserID,
// config: cfg,
// client: client,
// configPath: configPath,
// runtimeEnv: runtimeEnv,
// }
// return cm
//}
//
//func (cm *ConfigManager) CheckAdmin(c *gin.Context) {
// if err := authverify.CheckAdmin(c, cm.imAdminUserID); err != nil {
// apiresp.GinError(c, err)
// c.Abort()
// }
//}
//
//func (cm *ConfigManager) GetConfig(c *gin.Context) {
// var req apistruct.GetConfigReq
// if err := c.BindJSON(&req); err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// conf := cm.config.Name2Config(req.ConfigName)
// if conf == nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail("config name not found").Wrap())
// return
// }
// b, err := json.Marshal(conf)
// if err != nil {
// apiresp.GinError(c, err)
// return
// }
// apiresp.GinSuccess(c, string(b))
//}
//
//func (cm *ConfigManager) GetConfigList(c *gin.Context) {
// var resp apistruct.GetConfigListResp
// resp.ConfigNames = cm.config.GetConfigNames()
// resp.Environment = runtimeenv.PrintRuntimeEnvironment()
// resp.Version = version.Version
//
// apiresp.GinSuccess(c, resp)
//}
//
//func (cm *ConfigManager) SetConfig(c *gin.Context) {
// if cm.config.Discovery.Enable != config.ETCD {
// apiresp.GinError(c, errs.New("only etcd support set config").Wrap())
// return
// }
// var req apistruct.SetConfigReq
// if err := c.BindJSON(&req); err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// var err error
// switch req.ConfigName {
// case cm.config.Discovery.GetConfigFileName():
// err = compareAndSave[config.Discovery](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Kafka.GetConfigFileName():
// err = compareAndSave[config.Kafka](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.LocalCache.GetConfigFileName():
// err = compareAndSave[config.LocalCache](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Log.GetConfigFileName():
// err = compareAndSave[config.Log](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Minio.GetConfigFileName():
// err = compareAndSave[config.Minio](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Mongo.GetConfigFileName():
// err = compareAndSave[config.Mongo](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Notification.GetConfigFileName():
// err = compareAndSave[config.Notification](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.API.GetConfigFileName():
// err = compareAndSave[config.API](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.CronTask.GetConfigFileName():
// err = compareAndSave[config.CronTask](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.MsgGateway.GetConfigFileName():
// err = compareAndSave[config.MsgGateway](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.MsgTransfer.GetConfigFileName():
// err = compareAndSave[config.MsgTransfer](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Push.GetConfigFileName():
// err = compareAndSave[config.Push](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Auth.GetConfigFileName():
// err = compareAndSave[config.Auth](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Conversation.GetConfigFileName():
// err = compareAndSave[config.Conversation](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Friend.GetConfigFileName():
// err = compareAndSave[config.Friend](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Group.GetConfigFileName():
// err = compareAndSave[config.Group](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Msg.GetConfigFileName():
// err = compareAndSave[config.Msg](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Third.GetConfigFileName():
// err = compareAndSave[config.Third](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.User.GetConfigFileName():
// err = compareAndSave[config.User](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Redis.GetConfigFileName():
// err = compareAndSave[config.Redis](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Share.GetConfigFileName():
// err = compareAndSave[config.Share](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Webhooks.GetConfigFileName():
// err = compareAndSave[config.Webhooks](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// default:
// apiresp.GinError(c, errs.ErrArgs.Wrap())
// return
// }
// if err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// apiresp.GinSuccess(c, nil)
//}
//
//func compareAndSave[T any](c *gin.Context, old any, req *apistruct.SetConfigReq, cm *ConfigManager) error {
// conf := new(T)
// err := json.Unmarshal([]byte(req.Data), &conf)
// if err != nil {
// return errs.ErrArgs.WithDetail(err.Error()).Wrap()
// }
// eq := reflect.DeepEqual(old, conf)
// if eq {
// return nil
// }
// data, err := json.Marshal(conf)
// if err != nil {
// return errs.ErrArgs.WithDetail(err.Error()).Wrap()
// }
// _, err = cm.client.Put(c, etcd.BuildKey(req.ConfigName), string(data))
// if err != nil {
// return errs.WrapMsg(err, "save to etcd failed")
// }
// return nil
//}
//
//func (cm *ConfigManager) ResetConfig(c *gin.Context) {
// go func() {
// if err := cm.resetConfig(c, true); err != nil {
// log.ZError(c, "reset config err", err)
// }
// }()
// apiresp.GinSuccess(c, nil)
//}
//
//func (cm *ConfigManager) resetConfig(c *gin.Context, checkChange bool, ops ...clientv3.Op) error {
// txn := cm.client.Txn(c)
// type initConf struct {
// old any
// new any
// }
// configMap := map[string]*initConf{
// cm.config.Discovery.GetConfigFileName(): {old: &cm.config.Discovery, new: new(config.Discovery)},
// cm.config.Kafka.GetConfigFileName(): {old: &cm.config.Kafka, new: new(config.Kafka)},
// cm.config.LocalCache.GetConfigFileName(): {old: &cm.config.LocalCache, new: new(config.LocalCache)},
// cm.config.Log.GetConfigFileName(): {old: &cm.config.Log, new: new(config.Log)},
// cm.config.Minio.GetConfigFileName(): {old: &cm.config.Minio, new: new(config.Minio)},
// cm.config.Mongo.GetConfigFileName(): {old: &cm.config.Mongo, new: new(config.Mongo)},
// cm.config.Notification.GetConfigFileName(): {old: &cm.config.Notification, new: new(config.Notification)},
// cm.config.API.GetConfigFileName(): {old: &cm.config.API, new: new(config.API)},
// cm.config.CronTask.GetConfigFileName(): {old: &cm.config.CronTask, new: new(config.CronTask)},
// cm.config.MsgGateway.GetConfigFileName(): {old: &cm.config.MsgGateway, new: new(config.MsgGateway)},
// cm.config.MsgTransfer.GetConfigFileName(): {old: &cm.config.MsgTransfer, new: new(config.MsgTransfer)},
// cm.config.Push.GetConfigFileName(): {old: &cm.config.Push, new: new(config.Push)},
// cm.config.Auth.GetConfigFileName(): {old: &cm.config.Auth, new: new(config.Auth)},
// cm.config.Conversation.GetConfigFileName(): {old: &cm.config.Conversation, new: new(config.Conversation)},
// cm.config.Friend.GetConfigFileName(): {old: &cm.config.Friend, new: new(config.Friend)},
// cm.config.Group.GetConfigFileName(): {old: &cm.config.Group, new: new(config.Group)},
// cm.config.Msg.GetConfigFileName(): {old: &cm.config.Msg, new: new(config.Msg)},
// cm.config.Third.GetConfigFileName(): {old: &cm.config.Third, new: new(config.Third)},
// cm.config.User.GetConfigFileName(): {old: &cm.config.User, new: new(config.User)},
// cm.config.Redis.GetConfigFileName(): {old: &cm.config.Redis, new: new(config.Redis)},
// cm.config.Share.GetConfigFileName(): {old: &cm.config.Share, new: new(config.Share)},
// cm.config.Webhooks.GetConfigFileName(): {old: &cm.config.Webhooks, new: new(config.Webhooks)},
// }
//
// changedKeys := make([]string, 0, len(configMap))
// for k, v := range configMap {
// err := config.Load(
// cm.configPath,
// k,
// config.EnvPrefixMap[k],
// cm.runtimeEnv,
// v.new,
// )
// if err != nil {
// log.ZError(c, "load config failed", err)
// continue
// }
// equal := reflect.DeepEqual(v.old, v.new)
// if !checkChange || !equal {
// changedKeys = append(changedKeys, k)
// }
// }
//
// for _, k := range changedKeys {
// data, err := json.Marshal(configMap[k].new)
// if err != nil {
// log.ZError(c, "marshal config failed", err)
// continue
// }
// ops = append(ops, clientv3.OpPut(etcd.BuildKey(k), string(data)))
// }
// if len(ops) > 0 {
// txn.Then(ops...)
// _, err := txn.Commit()
// if err != nil {
// return errs.WrapMsg(err, "commit etcd txn failed")
// }
// }
// return nil
//}
//
//func (cm *ConfigManager) Restart(c *gin.Context) {
// go cm.restart(c)
// apiresp.GinSuccess(c, nil)
//}
//
//func (cm *ConfigManager) restart(c *gin.Context) {
// time.Sleep(waitHttp) // wait for Restart http call return
// t := time.Now().Unix()
// _, err := cm.client.Put(c, etcd.BuildKey(etcd.RestartKey), strconv.Itoa(int(t)))
// if err != nil {
// log.ZError(c, "restart etcd put key failed", err)
// }
//}
//
//func (cm *ConfigManager) SetEnableConfigManager(c *gin.Context) {
// if cm.config.Discovery.Enable != config.ETCD {
// apiresp.GinError(c, errs.New("only etcd support config manager").Wrap())
// return
// }
// var req apistruct.SetEnableConfigManagerReq
// if err := c.BindJSON(&req); err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// var enableStr string
// if req.Enable {
// enableStr = etcd.Enable
// } else {
// enableStr = etcd.Disable
// }
// resp, err := cm.client.Get(c, etcd.BuildKey(etcd.EnableConfigCenterKey))
// if err != nil {
// apiresp.GinError(c, errs.WrapMsg(err, "getEnableConfigManager failed"))
// return
// }
// if !(resp.Count > 0 && string(resp.Kvs[0].Value) == etcd.Enable) && req.Enable {
// go func() {
// time.Sleep(waitHttp) // wait for Restart http call return
// err := cm.resetConfig(c, false, clientv3.OpPut(etcd.BuildKey(etcd.EnableConfigCenterKey), enableStr))
// if err != nil {
// log.ZError(c, "resetConfig failed", err)
// }
// }()
// } else {
// _, err = cm.client.Put(c, etcd.BuildKey(etcd.EnableConfigCenterKey), enableStr)
// if err != nil {
// apiresp.GinError(c, errs.WrapMsg(err, "setEnableConfigManager failed"))
// return
// }
// }
//
// apiresp.GinSuccess(c, nil)
//}
//
//func (cm *ConfigManager) GetEnableConfigManager(c *gin.Context) {
// resp, err := cm.client.Get(c, etcd.BuildKey(etcd.EnableConfigCenterKey))
// if err != nil {
// apiresp.GinError(c, errs.WrapMsg(err, "getEnableConfigManager failed"))
// return
// }
// var enable bool
// if resp.Count > 0 && string(resp.Kvs[0].Value) == etcd.Enable {
// enable = true
// }
// apiresp.GinSuccess(c, &apistruct.GetEnableConfigManagerResp{Enable: enable})
//}
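Although this config-manager file lands fully commented out, the compareAndSave generic in it is the heart of the config-center write path: decode the submitted JSON into the concrete config struct, skip the etcd write when nothing changed, and otherwise persist the re-marshalled value. A standalone sketch of that pattern follows; saveIfChanged and its key argument are illustrative names, not the project's actual API.

package configsketch

import (
	"context"
	"encoding/json"
	"fmt"
	"reflect"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// saveIfChanged decodes raw into a fresh *T, compares it against the config
// currently in memory, and writes to etcd only when the two differ.
func saveIfChanged[T any](ctx context.Context, cli *clientv3.Client, key string, current *T, raw []byte) error {
	next := new(T)
	if err := json.Unmarshal(raw, next); err != nil {
		return fmt.Errorf("bad config payload: %w", err)
	}
	if reflect.DeepEqual(current, next) {
		return nil // unchanged: avoid a needless etcd revision
	}
	data, err := json.Marshal(next)
	if err != nil {
		return err
	}
	_, err = cli.Put(ctx, key, string(data))
	return err
}

The DeepEqual guard avoids writing identical values, which would otherwise bump the key's revision and wake any watcher on it for nothing.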
+16 -15
@@ -16,57 +16,58 @@ package api
import ( import (
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/conversation" "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/tools/a2r" "github.com/openimsdk/tools/a2r"
) )
type ConversationApi rpcclient.Conversation type ConversationApi struct {
Client conversation.ConversationClient
}
func NewConversationApi(client rpcclient.Conversation) ConversationApi { func NewConversationApi(client conversation.ConversationClient) ConversationApi {
return ConversationApi(client) return ConversationApi{client}
} }
func (o *ConversationApi) GetAllConversations(c *gin.Context) { func (o *ConversationApi) GetAllConversations(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetAllConversations, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetAllConversations, o.Client)
} }
func (o *ConversationApi) GetSortedConversationList(c *gin.Context) { func (o *ConversationApi) GetSortedConversationList(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetSortedConversationList, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetSortedConversationList, o.Client)
} }
func (o *ConversationApi) GetConversation(c *gin.Context) { func (o *ConversationApi) GetConversation(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetConversation, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetConversation, o.Client)
} }
func (o *ConversationApi) GetConversations(c *gin.Context) { func (o *ConversationApi) GetConversations(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetConversations, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetConversations, o.Client)
} }
func (o *ConversationApi) SetConversations(c *gin.Context) { func (o *ConversationApi) SetConversations(c *gin.Context) {
a2r.Call(conversation.ConversationClient.SetConversations, o.Client, c) a2r.Call(c, conversation.ConversationClient.SetConversations, o.Client)
} }
func (o *ConversationApi) GetConversationOfflinePushUserIDs(c *gin.Context) { func (o *ConversationApi) GetConversationOfflinePushUserIDs(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetConversationOfflinePushUserIDs, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetConversationOfflinePushUserIDs, o.Client)
} }
func (o *ConversationApi) GetFullOwnerConversationIDs(c *gin.Context) { func (o *ConversationApi) GetFullOwnerConversationIDs(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetFullOwnerConversationIDs, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetFullOwnerConversationIDs, o.Client)
} }
func (o *ConversationApi) GetIncrementalConversation(c *gin.Context) { func (o *ConversationApi) GetIncrementalConversation(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetIncrementalConversation, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetIncrementalConversation, o.Client)
} }
func (o *ConversationApi) GetOwnerConversation(c *gin.Context) { func (o *ConversationApi) GetOwnerConversation(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetOwnerConversation, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetOwnerConversation, o.Client)
} }
func (o *ConversationApi) GetNotNotifyConversationIDs(c *gin.Context) { func (o *ConversationApi) GetNotNotifyConversationIDs(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetNotNotifyConversationIDs, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetNotNotifyConversationIDs, o.Client)
} }
func (o *ConversationApi) GetPinnedConversationIDs(c *gin.Context) { func (o *ConversationApi) GetPinnedConversationIDs(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetPinnedConversationIDs, o.Client, c) a2r.Call(c, conversation.ConversationClient.GetPinnedConversationIDs, o.Client)
} }
+26 -25
@@ -17,99 +17,100 @@ package api
import ( import (
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/relation" "github.com/openimsdk/protocol/relation"
"github.com/openimsdk/tools/a2r" "github.com/openimsdk/tools/a2r"
) )
type FriendApi rpcclient.Friend type FriendApi struct {
Client relation.FriendClient
}
func NewFriendApi(client rpcclient.Friend) FriendApi { func NewFriendApi(client relation.FriendClient) FriendApi {
return FriendApi(client) return FriendApi{client}
} }
func (o *FriendApi) ApplyToAddFriend(c *gin.Context) { func (o *FriendApi) ApplyToAddFriend(c *gin.Context) {
a2r.Call(relation.FriendClient.ApplyToAddFriend, o.Client, c) a2r.Call(c, relation.FriendClient.ApplyToAddFriend, o.Client)
} }
func (o *FriendApi) RespondFriendApply(c *gin.Context) { func (o *FriendApi) RespondFriendApply(c *gin.Context) {
a2r.Call(relation.FriendClient.RespondFriendApply, o.Client, c) a2r.Call(c, relation.FriendClient.RespondFriendApply, o.Client)
} }
func (o *FriendApi) DeleteFriend(c *gin.Context) { func (o *FriendApi) DeleteFriend(c *gin.Context) {
a2r.Call(relation.FriendClient.DeleteFriend, o.Client, c) a2r.Call(c, relation.FriendClient.DeleteFriend, o.Client)
} }
func (o *FriendApi) GetFriendApplyList(c *gin.Context) { func (o *FriendApi) GetFriendApplyList(c *gin.Context) {
a2r.Call(relation.FriendClient.GetPaginationFriendsApplyTo, o.Client, c) a2r.Call(c, relation.FriendClient.GetPaginationFriendsApplyTo, o.Client)
} }
func (o *FriendApi) GetDesignatedFriendsApply(c *gin.Context) { func (o *FriendApi) GetDesignatedFriendsApply(c *gin.Context) {
a2r.Call(relation.FriendClient.GetDesignatedFriendsApply, o.Client, c) a2r.Call(c, relation.FriendClient.GetDesignatedFriendsApply, o.Client)
} }
func (o *FriendApi) GetSelfApplyList(c *gin.Context) { func (o *FriendApi) GetSelfApplyList(c *gin.Context) {
a2r.Call(relation.FriendClient.GetPaginationFriendsApplyFrom, o.Client, c) a2r.Call(c, relation.FriendClient.GetPaginationFriendsApplyFrom, o.Client)
} }
func (o *FriendApi) GetFriendList(c *gin.Context) { func (o *FriendApi) GetFriendList(c *gin.Context) {
a2r.Call(relation.FriendClient.GetPaginationFriends, o.Client, c) a2r.Call(c, relation.FriendClient.GetPaginationFriends, o.Client)
} }
func (o *FriendApi) GetDesignatedFriends(c *gin.Context) { func (o *FriendApi) GetDesignatedFriends(c *gin.Context) {
a2r.Call(relation.FriendClient.GetDesignatedFriends, o.Client, c) a2r.Call(c, relation.FriendClient.GetDesignatedFriends, o.Client)
} }
func (o *FriendApi) SetFriendRemark(c *gin.Context) { func (o *FriendApi) SetFriendRemark(c *gin.Context) {
a2r.Call(relation.FriendClient.SetFriendRemark, o.Client, c) a2r.Call(c, relation.FriendClient.SetFriendRemark, o.Client)
} }
func (o *FriendApi) AddBlack(c *gin.Context) { func (o *FriendApi) AddBlack(c *gin.Context) {
a2r.Call(relation.FriendClient.AddBlack, o.Client, c) a2r.Call(c, relation.FriendClient.AddBlack, o.Client)
} }
func (o *FriendApi) GetPaginationBlacks(c *gin.Context) { func (o *FriendApi) GetPaginationBlacks(c *gin.Context) {
a2r.Call(relation.FriendClient.GetPaginationBlacks, o.Client, c) a2r.Call(c, relation.FriendClient.GetPaginationBlacks, o.Client)
} }
func (o *FriendApi) GetSpecifiedBlacks(c *gin.Context) { func (o *FriendApi) GetSpecifiedBlacks(c *gin.Context) {
a2r.Call(relation.FriendClient.GetSpecifiedBlacks, o.Client, c) a2r.Call(c, relation.FriendClient.GetSpecifiedBlacks, o.Client)
} }
func (o *FriendApi) RemoveBlack(c *gin.Context) { func (o *FriendApi) RemoveBlack(c *gin.Context) {
a2r.Call(relation.FriendClient.RemoveBlack, o.Client, c) a2r.Call(c, relation.FriendClient.RemoveBlack, o.Client)
} }
func (o *FriendApi) ImportFriends(c *gin.Context) { func (o *FriendApi) ImportFriends(c *gin.Context) {
a2r.Call(relation.FriendClient.ImportFriends, o.Client, c) a2r.Call(c, relation.FriendClient.ImportFriends, o.Client)
} }
func (o *FriendApi) IsFriend(c *gin.Context) { func (o *FriendApi) IsFriend(c *gin.Context) {
a2r.Call(relation.FriendClient.IsFriend, o.Client, c) a2r.Call(c, relation.FriendClient.IsFriend, o.Client)
} }
func (o *FriendApi) GetFriendIDs(c *gin.Context) { func (o *FriendApi) GetFriendIDs(c *gin.Context) {
a2r.Call(relation.FriendClient.GetFriendIDs, o.Client, c) a2r.Call(c, relation.FriendClient.GetFriendIDs, o.Client)
} }
func (o *FriendApi) GetSpecifiedFriendsInfo(c *gin.Context) { func (o *FriendApi) GetSpecifiedFriendsInfo(c *gin.Context) {
a2r.Call(relation.FriendClient.GetSpecifiedFriendsInfo, o.Client, c) a2r.Call(c, relation.FriendClient.GetSpecifiedFriendsInfo, o.Client)
} }
func (o *FriendApi) UpdateFriends(c *gin.Context) { func (o *FriendApi) UpdateFriends(c *gin.Context) {
a2r.Call(relation.FriendClient.UpdateFriends, o.Client, c) a2r.Call(c, relation.FriendClient.UpdateFriends, o.Client)
} }
func (o *FriendApi) GetIncrementalFriends(c *gin.Context) { func (o *FriendApi) GetIncrementalFriends(c *gin.Context) {
a2r.Call(relation.FriendClient.GetIncrementalFriends, o.Client, c) a2r.Call(c, relation.FriendClient.GetIncrementalFriends, o.Client)
} }
// GetIncrementalBlacks is temporarily unused. // GetIncrementalBlacks is temporarily unused.
// Deprecated: This function is currently unused and may be removed in future versions. // Deprecated: This function is currently unused and may be removed in future versions.
func (o *FriendApi) GetIncrementalBlacks(c *gin.Context) { func (o *FriendApi) GetIncrementalBlacks(c *gin.Context) {
a2r.Call(relation.FriendClient.GetIncrementalBlacks, o.Client, c) a2r.Call(c, relation.FriendClient.GetIncrementalBlacks, o.Client)
} }
func (o *FriendApi) GetFullFriendUserIDs(c *gin.Context) { func (o *FriendApi) GetFullFriendUserIDs(c *gin.Context) {
a2r.Call(relation.FriendClient.GetFullFriendUserIDs, o.Client, c) a2r.Call(c, relation.FriendClient.GetFullFriendUserIDs, o.Client)
} }
+41 -40
@@ -16,151 +16,152 @@ package api
import ( import (
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/group" "github.com/openimsdk/protocol/group"
"github.com/openimsdk/tools/a2r" "github.com/openimsdk/tools/a2r"
) )
type GroupApi rpcclient.Group type GroupApi struct {
Client group.GroupClient
}
func NewGroupApi(client rpcclient.Group) GroupApi { func NewGroupApi(client group.GroupClient) GroupApi {
return GroupApi(client) return GroupApi{client}
} }
func (o *GroupApi) CreateGroup(c *gin.Context) { func (o *GroupApi) CreateGroup(c *gin.Context) {
a2r.Call(group.GroupClient.CreateGroup, o.Client, c) a2r.Call(c, group.GroupClient.CreateGroup, o.Client)
} }
func (o *GroupApi) SetGroupInfo(c *gin.Context) { func (o *GroupApi) SetGroupInfo(c *gin.Context) {
a2r.Call(group.GroupClient.SetGroupInfo, o.Client, c) a2r.Call(c, group.GroupClient.SetGroupInfo, o.Client)
} }
func (o *GroupApi) SetGroupInfoEx(c *gin.Context) { func (o *GroupApi) SetGroupInfoEx(c *gin.Context) {
a2r.Call(group.GroupClient.SetGroupInfoEx, o.Client, c) a2r.Call(c, group.GroupClient.SetGroupInfoEx, o.Client)
} }
func (o *GroupApi) JoinGroup(c *gin.Context) { func (o *GroupApi) JoinGroup(c *gin.Context) {
a2r.Call(group.GroupClient.JoinGroup, o.Client, c) a2r.Call(c, group.GroupClient.JoinGroup, o.Client)
} }
func (o *GroupApi) QuitGroup(c *gin.Context) { func (o *GroupApi) QuitGroup(c *gin.Context) {
a2r.Call(group.GroupClient.QuitGroup, o.Client, c) a2r.Call(c, group.GroupClient.QuitGroup, o.Client)
} }
func (o *GroupApi) ApplicationGroupResponse(c *gin.Context) { func (o *GroupApi) ApplicationGroupResponse(c *gin.Context) {
a2r.Call(group.GroupClient.GroupApplicationResponse, o.Client, c) a2r.Call(c, group.GroupClient.GroupApplicationResponse, o.Client)
} }
func (o *GroupApi) TransferGroupOwner(c *gin.Context) { func (o *GroupApi) TransferGroupOwner(c *gin.Context) {
a2r.Call(group.GroupClient.TransferGroupOwner, o.Client, c) a2r.Call(c, group.GroupClient.TransferGroupOwner, o.Client)
} }
func (o *GroupApi) GetRecvGroupApplicationList(c *gin.Context) { func (o *GroupApi) GetRecvGroupApplicationList(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroupApplicationList, o.Client, c) a2r.Call(c, group.GroupClient.GetGroupApplicationList, o.Client)
} }
func (o *GroupApi) GetUserReqGroupApplicationList(c *gin.Context) { func (o *GroupApi) GetUserReqGroupApplicationList(c *gin.Context) {
a2r.Call(group.GroupClient.GetUserReqApplicationList, o.Client, c) a2r.Call(c, group.GroupClient.GetUserReqApplicationList, o.Client)
} }
func (o *GroupApi) GetGroupUsersReqApplicationList(c *gin.Context) { func (o *GroupApi) GetGroupUsersReqApplicationList(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroupUsersReqApplicationList, o.Client, c) a2r.Call(c, group.GroupClient.GetGroupUsersReqApplicationList, o.Client)
} }
func (o *GroupApi) GetSpecifiedUserGroupRequestInfo(c *gin.Context) { func (o *GroupApi) GetSpecifiedUserGroupRequestInfo(c *gin.Context) {
a2r.Call(group.GroupClient.GetSpecifiedUserGroupRequestInfo, o.Client, c) a2r.Call(c, group.GroupClient.GetSpecifiedUserGroupRequestInfo, o.Client)
} }
func (o *GroupApi) GetGroupsInfo(c *gin.Context) { func (o *GroupApi) GetGroupsInfo(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroupsInfo, o.Client, c) a2r.Call(c, group.GroupClient.GetGroupsInfo, o.Client)
//a2r.Call(group.GroupClient.GetGroupsInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupsInfo)) //a2r.Call(c, group.GroupClient.GetGroupsInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupsInfo))
} }
func (o *GroupApi) KickGroupMember(c *gin.Context) { func (o *GroupApi) KickGroupMember(c *gin.Context) {
a2r.Call(group.GroupClient.KickGroupMember, o.Client, c) a2r.Call(c, group.GroupClient.KickGroupMember, o.Client)
} }
func (o *GroupApi) GetGroupMembersInfo(c *gin.Context) { func (o *GroupApi) GetGroupMembersInfo(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroupMembersInfo, o.Client, c) a2r.Call(c, group.GroupClient.GetGroupMembersInfo, o.Client)
//a2r.Call(group.GroupClient.GetGroupMembersInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupMembersInfo)) //a2r.Call(c, group.GroupClient.GetGroupMembersInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupMembersInfo))
} }
func (o *GroupApi) GetGroupMemberList(c *gin.Context) { func (o *GroupApi) GetGroupMemberList(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroupMemberList, o.Client, c) a2r.Call(c, group.GroupClient.GetGroupMemberList, o.Client)
} }
func (o *GroupApi) InviteUserToGroup(c *gin.Context) { func (o *GroupApi) InviteUserToGroup(c *gin.Context) {
a2r.Call(group.GroupClient.InviteUserToGroup, o.Client, c) a2r.Call(c, group.GroupClient.InviteUserToGroup, o.Client)
} }
func (o *GroupApi) GetJoinedGroupList(c *gin.Context) { func (o *GroupApi) GetJoinedGroupList(c *gin.Context) {
a2r.Call(group.GroupClient.GetJoinedGroupList, o.Client, c) a2r.Call(c, group.GroupClient.GetJoinedGroupList, o.Client)
} }
func (o *GroupApi) DismissGroup(c *gin.Context) { func (o *GroupApi) DismissGroup(c *gin.Context) {
a2r.Call(group.GroupClient.DismissGroup, o.Client, c) a2r.Call(c, group.GroupClient.DismissGroup, o.Client)
} }
func (o *GroupApi) MuteGroupMember(c *gin.Context) { func (o *GroupApi) MuteGroupMember(c *gin.Context) {
a2r.Call(group.GroupClient.MuteGroupMember, o.Client, c) a2r.Call(c, group.GroupClient.MuteGroupMember, o.Client)
} }
func (o *GroupApi) CancelMuteGroupMember(c *gin.Context) { func (o *GroupApi) CancelMuteGroupMember(c *gin.Context) {
a2r.Call(group.GroupClient.CancelMuteGroupMember, o.Client, c) a2r.Call(c, group.GroupClient.CancelMuteGroupMember, o.Client)
} }
func (o *GroupApi) MuteGroup(c *gin.Context) { func (o *GroupApi) MuteGroup(c *gin.Context) {
a2r.Call(group.GroupClient.MuteGroup, o.Client, c) a2r.Call(c, group.GroupClient.MuteGroup, o.Client)
} }
func (o *GroupApi) CancelMuteGroup(c *gin.Context) { func (o *GroupApi) CancelMuteGroup(c *gin.Context) {
a2r.Call(group.GroupClient.CancelMuteGroup, o.Client, c) a2r.Call(c, group.GroupClient.CancelMuteGroup, o.Client)
} }
func (o *GroupApi) SetGroupMemberInfo(c *gin.Context) { func (o *GroupApi) SetGroupMemberInfo(c *gin.Context) {
a2r.Call(group.GroupClient.SetGroupMemberInfo, o.Client, c) a2r.Call(c, group.GroupClient.SetGroupMemberInfo, o.Client)
} }
func (o *GroupApi) GetGroupAbstractInfo(c *gin.Context) { func (o *GroupApi) GetGroupAbstractInfo(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroupAbstractInfo, o.Client, c) a2r.Call(c, group.GroupClient.GetGroupAbstractInfo, o.Client)
} }
// func (g *Group) SetGroupMemberNickname(c *gin.Context) { // func (g *Group) SetGroupMemberNickname(c *gin.Context) {
// a2r.Call(group.GroupClient.SetGroupMemberNickname, g.userClient, c) // a2r.Call(c, group.GroupClient.SetGroupMemberNickname, g.userClient)
//} //}
// //
// func (g *Group) GetGroupAllMemberList(c *gin.Context) { // func (g *Group) GetGroupAllMemberList(c *gin.Context) {
// a2r.Call(group.GroupClient.GetGroupAllMember, g.userClient, c) // a2r.Call(c, group.GroupClient.GetGroupAllMember, g.userClient)
//} //}
func (o *GroupApi) GroupCreateCount(c *gin.Context) { func (o *GroupApi) GroupCreateCount(c *gin.Context) {
a2r.Call(group.GroupClient.GroupCreateCount, o.Client, c) a2r.Call(c, group.GroupClient.GroupCreateCount, o.Client)
} }
func (o *GroupApi) GetGroups(c *gin.Context) { func (o *GroupApi) GetGroups(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroups, o.Client, c) a2r.Call(c, group.GroupClient.GetGroups, o.Client)
} }
func (o *GroupApi) GetGroupMemberUserIDs(c *gin.Context) { func (o *GroupApi) GetGroupMemberUserIDs(c *gin.Context) {
a2r.Call(group.GroupClient.GetGroupMemberUserIDs, o.Client, c) a2r.Call(c, group.GroupClient.GetGroupMemberUserIDs, o.Client)
} }
func (o *GroupApi) GetIncrementalJoinGroup(c *gin.Context) { func (o *GroupApi) GetIncrementalJoinGroup(c *gin.Context) {
a2r.Call(group.GroupClient.GetIncrementalJoinGroup, o.Client, c) a2r.Call(c, group.GroupClient.GetIncrementalJoinGroup, o.Client)
} }
func (o *GroupApi) GetIncrementalGroupMember(c *gin.Context) { func (o *GroupApi) GetIncrementalGroupMember(c *gin.Context) {
a2r.Call(group.GroupClient.GetIncrementalGroupMember, o.Client, c) a2r.Call(c, group.GroupClient.GetIncrementalGroupMember, o.Client)
} }
func (o *GroupApi) GetIncrementalGroupMemberBatch(c *gin.Context) { func (o *GroupApi) GetIncrementalGroupMemberBatch(c *gin.Context) {
a2r.Call(group.GroupClient.BatchGetIncrementalGroupMember, o.Client, c) a2r.Call(c, group.GroupClient.BatchGetIncrementalGroupMember, o.Client)
} }
func (o *GroupApi) GetFullGroupMemberUserIDs(c *gin.Context) { func (o *GroupApi) GetFullGroupMemberUserIDs(c *gin.Context) {
a2r.Call(group.GroupClient.GetFullGroupMemberUserIDs, o.Client, c) a2r.Call(c, group.GroupClient.GetFullGroupMemberUserIDs, o.Client)
} }
func (o *GroupApi) GetFullJoinGroupIDs(c *gin.Context) { func (o *GroupApi) GetFullJoinGroupIDs(c *gin.Context) {
a2r.Call(group.GroupClient.GetFullJoinGroupIDs, o.Client, c) a2r.Call(c, group.GroupClient.GetFullJoinGroupIDs, o.Client)
} }
+71 -30
@@ -1,25 +1,9 @@
// Copyright © 2023 OpenIM. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package api package api
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/tools/utils/datautil"
"github.com/openimsdk/tools/utils/network"
"net" "net"
"net/http" "net/http"
"os" "os"
@@ -28,12 +12,21 @@ import (
"syscall" "syscall"
"time" "time"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/tools/mw"
"github.com/openimsdk/tools/utils/datautil"
"github.com/openimsdk/tools/utils/network"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister" kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics" "github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/tools/discovery" "github.com/openimsdk/tools/discovery"
"github.com/openimsdk/tools/discovery/etcd"
"github.com/openimsdk/tools/errs" "github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log" "github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/system/program" "github.com/openimsdk/tools/system/program"
"github.com/openimsdk/tools/utils/jsonutil"
) )
type Config struct { type Config struct {
@@ -42,8 +35,8 @@ type Config struct {
Discovery config.Discovery Discovery config.Discovery
} }
func Start(ctx context.Context, index int, config *Config) error { func Start(ctx context.Context, index int, cfg *Config) error {
apiPort, err := datautil.GetElemByIndex(config.API.Api.Ports, index) apiPort, err := datautil.GetElemByIndex(cfg.API.Api.Ports, index)
if err != nil { if err != nil {
return err return err
} }
@@ -51,10 +44,14 @@ func Start(ctx context.Context, index int, config *Config) error {
var client discovery.SvcDiscoveryRegistry var client discovery.SvcDiscoveryRegistry
// Determine whether zk is passed according to whether it is a clustered deployment // Determine whether zk is passed according to whether it is a clustered deployment
client, err = kdisc.NewDiscoveryRegister(&config.Discovery, &config.Share) client, err = kdisc.NewDiscoveryRegister(&cfg.Discovery, &cfg.Share, []string{
cfg.Share.RpcRegisterName.MessageGateway,
})
if err != nil { if err != nil {
return errs.WrapMsg(err, "failed to register discovery service") return errs.WrapMsg(err, "failed to register discovery service")
} }
client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
var ( var (
netDone = make(chan struct{}, 1) netDone = make(chan struct{}, 1)
@@ -62,29 +59,73 @@ func Start(ctx context.Context, index int, config *Config) error {
prometheusPort int prometheusPort int
) )
router := newGinRouter(client, config) router, err := newGinRouter(ctx, client, cfg)
if config.API.Prometheus.Enable { if err != nil {
go func() { return err
prometheusPort, err = datautil.GetElemByIndex(config.API.Prometheus.Ports, index) }
registerIP, err := network.GetRpcRegisterIP("")
if err != nil {
return err
}
getAutoPort := func() (net.Listener, int, error) {
registerAddr := net.JoinHostPort(registerIP, "0")
listener, err := net.Listen("tcp", registerAddr)
if err != nil {
return nil, 0, errs.WrapMsg(err, "listen err", "registerAddr", registerAddr)
}
_, portStr, _ := net.SplitHostPort(listener.Addr().String())
port, _ := strconv.Atoi(portStr)
return listener, port, nil
}
if cfg.API.Prometheus.AutoSetPorts && cfg.Discovery.Enable != config.ETCD {
return errs.New("only etcd support autoSetPorts", "RegisterName", "api").Wrap()
}
if cfg.API.Prometheus.Enable {
var (
listener net.Listener
)
if cfg.API.Prometheus.AutoSetPorts {
listener, prometheusPort, err = getAutoPort()
if err != nil { if err != nil {
netErr = err return err
netDone <- struct{}{}
return
} }
if err := prommetrics.ApiInit(prometheusPort); err != nil && err != http.ErrServerClosed {
etcdClient := client.(*etcd.SvcDiscoveryRegistryImpl).GetClient()
_, err = etcdClient.Put(ctx, prommetrics.BuildDiscoveryKey(prommetrics.APIKeyName), jsonutil.StructToJsonString(prommetrics.BuildDefaultTarget(registerIP, prometheusPort)))
if err != nil {
return errs.WrapMsg(err, "etcd put err")
}
} else {
prometheusPort, err = datautil.GetElemByIndex(cfg.API.Prometheus.Ports, index)
if err != nil {
return err
}
listener, err = net.Listen("tcp", fmt.Sprintf(":%d", prometheusPort))
if err != nil {
return errs.WrapMsg(err, "listen err", "addr", fmt.Sprintf(":%d", prometheusPort))
}
}
go func() {
if err := prommetrics.ApiInit(listener); err != nil && !errors.Is(err, http.ErrServerClosed) {
netErr = errs.WrapMsg(err, fmt.Sprintf("api prometheus start err: %d", prometheusPort)) netErr = errs.WrapMsg(err, fmt.Sprintf("api prometheus start err: %d", prometheusPort))
netDone <- struct{}{} netDone <- struct{}{}
} }
}() }()
} }
address := net.JoinHostPort(network.GetListenIP(config.API.Api.ListenIP), strconv.Itoa(apiPort)) address := net.JoinHostPort(network.GetListenIP(cfg.API.Api.ListenIP), strconv.Itoa(apiPort))
server := http.Server{Addr: address, Handler: router} server := http.Server{Addr: address, Handler: router}
log.CInfo(ctx, "API server is initializing", "address", address, "apiPort", apiPort, "prometheusPort", prometheusPort) log.CInfo(ctx, "API server is initializing", "address", address, "apiPort", apiPort, "prometheusPort", prometheusPort)
go func() { go func() {
err = server.ListenAndServe() err = server.ListenAndServe()
if err != nil && err != http.ErrServerClosed { if err != nil && !errors.Is(err, http.ErrServerClosed) {
netErr = errs.WrapMsg(err, fmt.Sprintf("api start err: %s", server.Addr)) netErr = errs.WrapMsg(err, fmt.Sprintf("api start err: %s", server.Addr))
netDone <- struct{}{} netDone <- struct{}{}
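Two details in the rewritten startup path above are worth calling out. getAutoPort uses the standard listen-on-port-0 trick: the kernel picks a free port, and the resulting address is what gets published to etcd for Prometheus discovery. The in-diff version discards the SplitHostPort and Atoi errors; a cautious standalone version, assuming nothing beyond the standard library, checks them:

package main

import (
	"fmt"
	"net"
	"strconv"
)

// listenAuto asks the kernel for a free TCP port by listening on port 0,
// then reports which port was assigned so it can be advertised elsewhere.
func listenAuto(host string) (net.Listener, int, error) {
	ln, err := net.Listen("tcp", net.JoinHostPort(host, "0"))
	if err != nil {
		return nil, 0, err
	}
	_, portStr, err := net.SplitHostPort(ln.Addr().String())
	if err != nil {
		ln.Close()
		return nil, 0, err
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		ln.Close()
		return nil, 0, err
	}
	return ln, port, nil
}

func main() {
	ln, port, err := listenAuto("127.0.0.1")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("kernel assigned port", port)
}

This is also why autoSetPorts is gated on etcd in the hunk above: with OS-assigned ports there is no static ports list for Prometheus to scrape, so the chosen port must be written somewhere discoverable.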
+36 -49
@@ -2,17 +2,17 @@ package jssdk
import ( import (
"context" "context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"sort"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/openimsdk/protocol/conversation" "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/group"
"github.com/openimsdk/protocol/jssdk" "github.com/openimsdk/protocol/jssdk"
"github.com/openimsdk/protocol/msg" "github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/relation" "github.com/openimsdk/protocol/relation"
"github.com/openimsdk/protocol/sdkws" "github.com/openimsdk/protocol/sdkws"
"github.com/openimsdk/protocol/user"
"github.com/openimsdk/tools/mcontext" "github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/utils/datautil" "github.com/openimsdk/tools/utils/datautil"
"sort"
) )
const ( const (
@@ -20,22 +20,23 @@ const (
defaultGetActiveConversation = 100 defaultGetActiveConversation = 100
) )
func NewJSSdkApi(user user.UserClient, friend relation.FriendClient, group group.GroupClient, msg msg.MsgClient, conv conversation.ConversationClient) *JSSdk { func NewJSSdkApi(userClient *rpcli.UserClient, relationClient *rpcli.RelationClient, groupClient *rpcli.GroupClient,
conversationClient *rpcli.ConversationClient, msgClient *rpcli.MsgClient) *JSSdk {
return &JSSdk{ return &JSSdk{
user: user, userClient: userClient,
friend: friend, relationClient: relationClient,
group: group, groupClient: groupClient,
msg: msg, conversationClient: conversationClient,
conv: conv, msgClient: msgClient,
} }
} }
type JSSdk struct { type JSSdk struct {
user user.UserClient userClient *rpcli.UserClient
friend relation.FriendClient relationClient *rpcli.RelationClient
group group.GroupClient groupClient *rpcli.GroupClient
msg msg.MsgClient conversationClient *rpcli.ConversationClient
conv conversation.ConversationClient msgClient *rpcli.MsgClient
} }
func (x *JSSdk) GetActiveConversations(c *gin.Context) { func (x *JSSdk) GetActiveConversations(c *gin.Context) {
@@ -67,11 +68,11 @@ func (x *JSSdk) fillConversations(ctx context.Context, conversations []*jssdk.Co
groupMap map[string]*sdkws.GroupInfo groupMap map[string]*sdkws.GroupInfo
) )
if len(userIDs) > 0 { if len(userIDs) > 0 {
users, err := field(ctx, x.user.GetDesignateUsers, &user.GetDesignateUsersReq{UserIDs: userIDs}, (*user.GetDesignateUsersResp).GetUsersInfo) users, err := x.userClient.GetUsersInfo(ctx, userIDs)
if err != nil { if err != nil {
return err return err
} }
friends, err := field(ctx, x.friend.GetFriendInfo, &relation.GetFriendInfoReq{OwnerUserID: conversations[0].Conversation.OwnerUserID, FriendUserIDs: userIDs}, (*relation.GetFriendInfoResp).GetFriendInfos) friends, err := x.relationClient.GetFriendsInfo(ctx, conversations[0].Conversation.OwnerUserID, userIDs)
if err != nil { if err != nil {
return err return err
} }
@@ -79,11 +80,11 @@ func (x *JSSdk) fillConversations(ctx context.Context, conversations []*jssdk.Co
friendMap = datautil.SliceToMap(friends, (*relation.FriendInfoOnly).GetFriendUserID) friendMap = datautil.SliceToMap(friends, (*relation.FriendInfoOnly).GetFriendUserID)
} }
if len(groupIDs) > 0 { if len(groupIDs) > 0 {
resp, err := x.group.GetGroupsInfo(ctx, &group.GetGroupsInfoReq{GroupIDs: groupIDs}) groups, err := x.groupClient.GetGroupsInfo(ctx, groupIDs)
if err != nil { if err != nil {
return err return err
} }
groupMap = datautil.SliceToMap(resp.GroupInfos, (*sdkws.GroupInfo).GetGroupID) groupMap = datautil.SliceToMap(groups, (*sdkws.GroupInfo).GetGroupID)
} }
for _, c := range conversations { for _, c := range conversations {
if c.Conversation.GroupID == "" { if c.Conversation.GroupID == "" {
@@ -101,21 +102,18 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
req.Count = defaultGetActiveConversation req.Count = defaultGetActiveConversation
} }
req.OwnerUserID = mcontext.GetOpUserID(ctx) req.OwnerUserID = mcontext.GetOpUserID(ctx)
conversationIDs, err := field(ctx, x.conv.GetConversationIDs, conversationIDs, err := x.conversationClient.GetConversationIDs(ctx, req.OwnerUserID)
&conversation.GetConversationIDsReq{UserID: req.OwnerUserID}, (*conversation.GetConversationIDsResp).GetConversationIDs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
if len(conversationIDs) == 0 { if len(conversationIDs) == 0 {
return &jssdk.GetActiveConversationsResp{}, nil return &jssdk.GetActiveConversationsResp{}, nil
} }
readSeq, err := field(ctx, x.msg.GetHasReadSeqs, readSeq, err := x.msgClient.GetHasReadSeqs(ctx, conversationIDs, req.OwnerUserID)
&msg.GetHasReadSeqsReq{UserID: req.OwnerUserID, ConversationIDs: conversationIDs}, (*msg.SeqsInfoResp).GetMaxSeqs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
activeConversation, err := field(ctx, x.msg.GetActiveConversation, activeConversation, err := x.msgClient.GetActiveConversation(ctx, conversationIDs)
&msg.GetActiveConversationReq{ConversationIDs: conversationIDs}, (*msg.GetActiveConversationResp).GetConversations)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -126,8 +124,7 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
Conversation: activeConversation, Conversation: activeConversation,
} }
if len(activeConversation) > 1 { if len(activeConversation) > 1 {
pinnedConversationIDs, err := field(ctx, x.conv.GetPinnedConversationIDs, pinnedConversationIDs, err := x.conversationClient.GetPinnedConversationIDs(ctx, req.OwnerUserID)
&conversation.GetPinnedConversationIDsReq{UserID: req.OwnerUserID}, (*conversation.GetPinnedConversationIDsResp).GetConversationIDs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -135,25 +132,18 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
} }
sort.Sort(&sortConversations) sort.Sort(&sortConversations)
sortList := sortConversations.Top(int(req.Count)) sortList := sortConversations.Top(int(req.Count))
conversations, err := field(ctx, x.conv.GetConversations, conversations, err := x.conversationClient.GetConversations(ctx, datautil.Slice(sortList, func(c *msg.ActiveConversation) string {
&conversation.GetConversationsReq{ return c.ConversationID
OwnerUserID: req.OwnerUserID, }), req.OwnerUserID)
ConversationIDs: datautil.Slice(sortList, func(c *msg.ActiveConversation) string {
return c.ConversationID
})}, (*conversation.GetConversationsResp).GetConversations)
if err != nil { if err != nil {
return nil, err return nil, err
} }
msgs, err := field(ctx, x.msg.GetSeqMessage, msgs, err := x.msgClient.GetSeqMessage(ctx, req.OwnerUserID, datautil.Slice(sortList, func(c *msg.ActiveConversation) *msg.ConversationSeqs {
&msg.GetSeqMessageReq{ return &msg.ConversationSeqs{
UserID: req.OwnerUserID, ConversationID: c.ConversationID,
Conversations: datautil.Slice(sortList, func(c *msg.ActiveConversation) *msg.ConversationSeqs { Seqs: []int64{c.MaxSeq},
return &msg.ConversationSeqs{ }
ConversationID: c.ConversationID, }))
Seqs: []int64{c.MaxSeq},
}
}),
}, (*msg.GetSeqMessageResp).GetMsgs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -195,7 +185,7 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversationsReq) (*jssdk.GetConversationsResp, error) { func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversationsReq) (*jssdk.GetConversationsResp, error) {
req.OwnerUserID = mcontext.GetOpUserID(ctx) req.OwnerUserID = mcontext.GetOpUserID(ctx)
conversations, err := field(ctx, x.conv.GetConversations, &conversation.GetConversationsReq{OwnerUserID: req.OwnerUserID, ConversationIDs: req.ConversationIDs}, (*conversation.GetConversationsResp).GetConversations) conversations, err := x.conversationClient.GetConversations(ctx, req.ConversationIDs, req.OwnerUserID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -205,13 +195,11 @@ func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversation
req.ConversationIDs = datautil.Slice(conversations, func(c *conversation.Conversation) string { req.ConversationIDs = datautil.Slice(conversations, func(c *conversation.Conversation) string {
return c.ConversationID return c.ConversationID
}) })
maxSeqs, err := field(ctx, x.msg.GetMaxSeqs, maxSeqs, err := x.msgClient.GetMaxSeqs(ctx, req.ConversationIDs)
&msg.GetMaxSeqsReq{ConversationIDs: req.ConversationIDs}, (*msg.SeqsInfoResp).GetMaxSeqs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
readSeqs, err := field(ctx, x.msg.GetHasReadSeqs, readSeqs, err := x.msgClient.GetHasReadSeqs(ctx, req.ConversationIDs, req.OwnerUserID)
&msg.GetHasReadSeqsReq{UserID: req.OwnerUserID, ConversationIDs: req.ConversationIDs}, (*msg.SeqsInfoResp).GetMaxSeqs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -226,8 +214,7 @@ func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversation
} }
var msgs map[string]*sdkws.PullMsgs var msgs map[string]*sdkws.PullMsgs
if len(conversationSeqs) > 0 { if len(conversationSeqs) > 0 {
msgs, err = field(ctx, x.msg.GetSeqMessage, msgs, err = x.msgClient.GetSeqMessage(ctx, req.OwnerUserID, conversationSeqs)
&msg.GetSeqMessageReq{UserID: req.OwnerUserID, Conversations: conversationSeqs}, (*msg.GetSeqMessageResp).GetMsgs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
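The jssdk rewrite replaces the reflective field(ctx, rpc, req, getter) helper with typed wrapper clients from pkg/rpcli, so each call site shrinks to a plain-Go signature. A sketch of what one such wrapper method plausibly looks like, derived from the request and response types visible in the old code (the actual rpcli types may differ):

package rpclisketch

import (
	"context"

	"github.com/openimsdk/protocol/msg"
)

// MsgClient is an illustrative wrapper over the generated gRPC client.
type MsgClient struct {
	cli msg.MsgClient
}

// GetMaxSeqs hides the request/response structs behind a plain signature,
// mirroring the x.msgClient.GetMaxSeqs(ctx, conversationIDs) call above.
func (x *MsgClient) GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error) {
	resp, err := x.cli.GetMaxSeqs(ctx, &msg.GetMaxSeqsReq{ConversationIDs: conversationIDs})
	if err != nil {
		return nil, err
	}
	return resp.GetMaxSeqs(), nil
}

The payoff is visible throughout the hunk: calls like field(ctx, x.msg.GetMaxSeqs, &msg.GetMaxSeqsReq{...}, (*msg.SeqsInfoResp).GetMaxSeqs) collapse to x.msgClient.GetMaxSeqs(ctx, req.ConversationIDs).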
+36 -30
@@ -21,7 +21,7 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/apistruct" "github.com/openimsdk/open-im-server/v3/pkg/apistruct"
"github.com/openimsdk/open-im-server/v3/pkg/authverify" "github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/config" "github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient" "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/protocol/constant" "github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msg" "github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/sdkws" "github.com/openimsdk/protocol/sdkws"
@@ -37,16 +37,14 @@ import (
) )
type MessageApi struct { type MessageApi struct {
*rpcclient.Message Client msg.MsgClient
validate *validator.Validate userClient *rpcli.UserClient
userRpcClient *rpcclient.UserRpcClient
imAdminUserID []string imAdminUserID []string
validate *validator.Validate
} }
func NewMessageApi(msgRpcClient *rpcclient.Message, userRpcClient *rpcclient.User, func NewMessageApi(client msg.MsgClient, userClient *rpcli.UserClient, imAdminUserID []string) MessageApi {
imAdminUserID []string) MessageApi { return MessageApi{Client: client, userClient: userClient, imAdminUserID: imAdminUserID, validate: validator.New()}
return MessageApi{Message: msgRpcClient, validate: validator.New(),
userRpcClient: rpcclient.NewUserRpcClientByUser(userRpcClient), imAdminUserID: imAdminUserID}
} }
func (*MessageApi) SetOptions(options map[string]bool, value bool) { func (*MessageApi) SetOptions(options map[string]bool, value bool) {
@@ -108,51 +106,51 @@ func (m *MessageApi) newUserSendMsgReq(_ *gin.Context, params *apistruct.SendMsg
} }
func (m *MessageApi) GetSeq(c *gin.Context) { func (m *MessageApi) GetSeq(c *gin.Context) {
a2r.Call(msg.MsgClient.GetMaxSeq, m.Client, c) a2r.Call(c, msg.MsgClient.GetMaxSeq, m.Client)
} }
func (m *MessageApi) PullMsgBySeqs(c *gin.Context) { func (m *MessageApi) PullMsgBySeqs(c *gin.Context) {
a2r.Call(msg.MsgClient.PullMessageBySeqs, m.Client, c) a2r.Call(c, msg.MsgClient.PullMessageBySeqs, m.Client)
} }
func (m *MessageApi) RevokeMsg(c *gin.Context) { func (m *MessageApi) RevokeMsg(c *gin.Context) {
a2r.Call(msg.MsgClient.RevokeMsg, m.Client, c) a2r.Call(c, msg.MsgClient.RevokeMsg, m.Client)
} }
func (m *MessageApi) MarkMsgsAsRead(c *gin.Context) { func (m *MessageApi) MarkMsgsAsRead(c *gin.Context) {
a2r.Call(msg.MsgClient.MarkMsgsAsRead, m.Client, c) a2r.Call(c, msg.MsgClient.MarkMsgsAsRead, m.Client)
} }
func (m *MessageApi) MarkConversationAsRead(c *gin.Context) { func (m *MessageApi) MarkConversationAsRead(c *gin.Context) {
a2r.Call(msg.MsgClient.MarkConversationAsRead, m.Client, c) a2r.Call(c, msg.MsgClient.MarkConversationAsRead, m.Client)
} }
func (m *MessageApi) GetConversationsHasReadAndMaxSeq(c *gin.Context) { func (m *MessageApi) GetConversationsHasReadAndMaxSeq(c *gin.Context) {
a2r.Call(msg.MsgClient.GetConversationsHasReadAndMaxSeq, m.Client, c) a2r.Call(c, msg.MsgClient.GetConversationsHasReadAndMaxSeq, m.Client)
} }
func (m *MessageApi) SetConversationHasReadSeq(c *gin.Context) { func (m *MessageApi) SetConversationHasReadSeq(c *gin.Context) {
a2r.Call(msg.MsgClient.SetConversationHasReadSeq, m.Client, c) a2r.Call(c, msg.MsgClient.SetConversationHasReadSeq, m.Client)
} }
func (m *MessageApi) ClearConversationsMsg(c *gin.Context) { func (m *MessageApi) ClearConversationsMsg(c *gin.Context) {
a2r.Call(msg.MsgClient.ClearConversationsMsg, m.Client, c) a2r.Call(c, msg.MsgClient.ClearConversationsMsg, m.Client)
} }
func (m *MessageApi) UserClearAllMsg(c *gin.Context) { func (m *MessageApi) UserClearAllMsg(c *gin.Context) {
a2r.Call(msg.MsgClient.UserClearAllMsg, m.Client, c) a2r.Call(c, msg.MsgClient.UserClearAllMsg, m.Client)
} }
func (m *MessageApi) DeleteMsgs(c *gin.Context) { func (m *MessageApi) DeleteMsgs(c *gin.Context) {
a2r.Call(msg.MsgClient.DeleteMsgs, m.Client, c) a2r.Call(c, msg.MsgClient.DeleteMsgs, m.Client)
} }
func (m *MessageApi) DeleteMsgPhysicalBySeq(c *gin.Context) { func (m *MessageApi) DeleteMsgPhysicalBySeq(c *gin.Context) {
a2r.Call(msg.MsgClient.DeleteMsgPhysicalBySeq, m.Client, c) a2r.Call(c, msg.MsgClient.DeleteMsgPhysicalBySeq, m.Client)
} }
func (m *MessageApi) DeleteMsgPhysical(c *gin.Context) { func (m *MessageApi) DeleteMsgPhysical(c *gin.Context) {
a2r.Call(msg.MsgClient.DeleteMsgPhysical, m.Client, c) a2r.Call(c, msg.MsgClient.DeleteMsgPhysical, m.Client)
} }
func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendMsgReq *msg.SendMsgReq, err error) { func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendMsgReq *msg.SendMsgReq, err error) {
@@ -176,7 +174,7 @@ func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendM
case constant.OANotification: case constant.OANotification:
data = apistruct.OANotificationElem{} data = apistruct.OANotificationElem{}
req.SessionType = constant.NotificationChatType req.SessionType = constant.NotificationChatType
if err = m.userRpcClient.GetNotificationByID(c, req.SendID); err != nil { if err = m.userClient.GetNotificationByID(c, req.SendID); err != nil {
return nil, err return nil, err
} }
default: default:
@@ -310,10 +308,10 @@ func (m *MessageApi) BatchSendMsg(c *gin.Context) {
var recvIDs []string var recvIDs []string
if req.IsSendAll { if req.IsSendAll {
pageNumber := 1 var pageNumber int32 = 1
showNumber := 500 const showNumber = 500
for { for {
recvIDsPart, err := m.userRpcClient.GetAllUserIDs(c, int32(pageNumber), int32(showNumber)) recvIDsPart, err := m.userClient.GetAllUserIDs(c, pageNumber, showNumber)
if err != nil { if err != nil {
apiresp.GinError(c, err) apiresp.GinError(c, err)
return return
@@ -351,25 +349,33 @@ func (m *MessageApi) BatchSendMsg(c *gin.Context) {
} }
func (m *MessageApi) CheckMsgIsSendSuccess(c *gin.Context) { func (m *MessageApi) CheckMsgIsSendSuccess(c *gin.Context) {
a2r.Call(msg.MsgClient.GetSendMsgStatus, m.Client, c) a2r.Call(c, msg.MsgClient.GetSendMsgStatus, m.Client)
} }
func (m *MessageApi) GetUsersOnlineStatus(c *gin.Context) { func (m *MessageApi) GetUsersOnlineStatus(c *gin.Context) {
a2r.Call(msg.MsgClient.GetSendMsgStatus, m.Client, c) a2r.Call(c, msg.MsgClient.GetSendMsgStatus, m.Client)
} }
func (m *MessageApi) GetActiveUser(c *gin.Context) { func (m *MessageApi) GetActiveUser(c *gin.Context) {
a2r.Call(msg.MsgClient.GetActiveUser, m.Client, c) a2r.Call(c, msg.MsgClient.GetActiveUser, m.Client)
} }
func (m *MessageApi) GetActiveGroup(c *gin.Context) { func (m *MessageApi) GetActiveGroup(c *gin.Context) {
a2r.Call(msg.MsgClient.GetActiveGroup, m.Client, c) a2r.Call(c, msg.MsgClient.GetActiveGroup, m.Client)
} }
func (m *MessageApi) SearchMsg(c *gin.Context) { func (m *MessageApi) SearchMsg(c *gin.Context) {
a2r.Call(msg.MsgClient.SearchMessage, m.Client, c) a2r.Call(c, msg.MsgClient.SearchMessage, m.Client)
} }
func (m *MessageApi) GetServerTime(c *gin.Context) { func (m *MessageApi) GetServerTime(c *gin.Context) {
a2r.Call(msg.MsgClient.GetServerTime, m.Client, c) a2r.Call(c, msg.MsgClient.GetServerTime, m.Client)
}
func (m *MessageApi) GetStreamMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetStreamMsg, m.Client)
}
func (m *MessageApi) AppendStreamMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.AppendStreamMsg, m.Client)
} }
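One behavioral detail from the BatchSendMsg hunk above: when IsSendAll is set, recipient IDs are fetched in fixed-size pages (showNumber, 500 per page) rather than in one request. The loop's termination rule is not visible in this excerpt, so the sketch below assumes the common short-page convention; fetchPage stands in for userClient.GetAllUserIDs.

package pagesketch

import "context"

// fetchPage abstracts a paged user-ID lookup such as GetAllUserIDs.
type fetchPage func(ctx context.Context, pageNumber, showNumber int32) ([]string, error)

// collectAll drains every page until a page shorter than showNumber
// signals that the server has no more IDs to return.
func collectAll(ctx context.Context, fetch fetchPage) ([]string, error) {
	const showNumber int32 = 500
	var all []string
	for pageNumber := int32(1); ; pageNumber++ {
		part, err := fetch(ctx, pageNumber, showNumber)
		if err != nil {
			return nil, err
		}
		all = append(all, part...)
		if int32(len(part)) < showNumber { // short page: no more users
			return all, nil
		}
	}
}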
+114
@@ -0,0 +1,114 @@
package api

import (
    "encoding/json"
    "net/http"

    "github.com/gin-gonic/gin"
    "github.com/openimsdk/open-im-server/v3/pkg/common/config"
    "github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
    "github.com/openimsdk/tools/apiresp"
    "github.com/openimsdk/tools/discovery"
    "github.com/openimsdk/tools/discovery/etcd"
    "github.com/openimsdk/tools/errs"
    "github.com/openimsdk/tools/log"
    clientv3 "go.etcd.io/etcd/client/v3"
)

type PrometheusDiscoveryApi struct {
    config *Config
    client *clientv3.Client
}

func NewPrometheusDiscoveryApi(cfg *Config, client discovery.SvcDiscoveryRegistry) *PrometheusDiscoveryApi {
    api := &PrometheusDiscoveryApi{
        config: cfg,
    }
    if cfg.Discovery.Enable == config.ETCD {
        api.client = client.(*etcd.SvcDiscoveryRegistryImpl).GetClient()
    }
    return api
}

func (p *PrometheusDiscoveryApi) Enable(c *gin.Context) {
    if p.config.Discovery.Enable != config.ETCD {
        c.JSON(http.StatusOK, []struct{}{})
        c.Abort()
    }
}

func (p *PrometheusDiscoveryApi) discovery(c *gin.Context, key string) {
    eResp, err := p.client.Get(c, prommetrics.BuildDiscoveryKey(key))
    if err != nil {
        // Log and respond with an error if preparation fails.
        apiresp.GinError(c, errs.WrapMsg(err, "etcd get err"))
        return
    }
    if len(eResp.Kvs) == 0 {
        c.JSON(http.StatusOK, []*prommetrics.Target{})
    }
    var (
        resp = &prommetrics.RespTarget{
            Targets: make([]string, 0, len(eResp.Kvs)),
        }
    )
    for i := range eResp.Kvs {
        var target prommetrics.Target
        err = json.Unmarshal(eResp.Kvs[i].Value, &target)
        if err != nil {
            log.ZError(c, "prometheus unmarshal err", errs.Wrap(err))
        }
        resp.Targets = append(resp.Targets, target.Target)
        if resp.Labels == nil {
            resp.Labels = target.Labels
        }
    }
    c.JSON(200, []*prommetrics.RespTarget{resp})
}

func (p *PrometheusDiscoveryApi) Api(c *gin.Context) {
    p.discovery(c, prommetrics.APIKeyName)
}

func (p *PrometheusDiscoveryApi) User(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.User)
}

func (p *PrometheusDiscoveryApi) Group(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.Group)
}

func (p *PrometheusDiscoveryApi) Msg(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.Msg)
}

func (p *PrometheusDiscoveryApi) Friend(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.Friend)
}

func (p *PrometheusDiscoveryApi) Conversation(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.Conversation)
}

func (p *PrometheusDiscoveryApi) Third(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.Third)
}

func (p *PrometheusDiscoveryApi) Auth(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.Auth)
}

func (p *PrometheusDiscoveryApi) Push(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.Push)
}

func (p *PrometheusDiscoveryApi) MessageGateway(c *gin.Context) {
    p.discovery(c, p.config.Share.RpcRegisterName.MessageGateway)
}

func (p *PrometheusDiscoveryApi) MessageTransfer(c *gin.Context) {
    p.discovery(c, prommetrics.MessageTransferKeyName)
}
+83 -43
@@ -1,7 +1,18 @@
 package api
 import (
-    "fmt"
+    "context"
+    "net/http"
+    "strings"
+    "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
+    pbAuth "github.com/openimsdk/protocol/auth"
+    "github.com/openimsdk/protocol/conversation"
+    "github.com/openimsdk/protocol/group"
+    "github.com/openimsdk/protocol/msg"
+    "github.com/openimsdk/protocol/relation"
+    "github.com/openimsdk/protocol/third"
+    "github.com/openimsdk/protocol/user"
     "github.com/openimsdk/open-im-server/v3/internal/api/jssdk"
@@ -10,15 +21,8 @@ import (
     "github.com/gin-gonic/gin"
     "github.com/gin-gonic/gin/binding"
     "github.com/go-playground/validator/v10"
-    "google.golang.org/grpc"
-    "google.golang.org/grpc/credentials/insecure"
-    "net/http"
-    "strings"
     "github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
     "github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
     "github.com/openimsdk/protocol/constant"
     "github.com/openimsdk/tools/apiresp"
     "github.com/openimsdk/tools/discovery"
@@ -48,23 +52,40 @@ func prommetricsGin() gin.HandlerFunc {
     }
 }
-func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.Engine {
-    disCov.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()),
-        grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
+func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, config *Config) (*gin.Engine, error) {
+    authConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Auth)
+    if err != nil {
+        return nil, err
+    }
+    userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
+    if err != nil {
+        return nil, err
+    }
+    groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
+    if err != nil {
+        return nil, err
+    }
+    friendConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Friend)
+    if err != nil {
+        return nil, err
+    }
+    conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
+    if err != nil {
+        return nil, err
+    }
+    thirdConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Third)
+    if err != nil {
+        return nil, err
+    }
+    msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
+    if err != nil {
+        return nil, err
+    }
     gin.SetMode(gin.ReleaseMode)
     r := gin.New()
     if v, ok := binding.Validator.Engine().(*validator.Validate); ok {
         _ = v.RegisterValidation("required_if", RequiredIf)
     }
-    // init rpc client here
-    userRpc := rpcclient.NewUser(disCov, config.Share.RpcRegisterName.User, config.Share.RpcRegisterName.MessageGateway,
-        config.Share.IMAdminUserID)
-    groupRpc := rpcclient.NewGroup(disCov, config.Share.RpcRegisterName.Group)
-    friendRpc := rpcclient.NewFriend(disCov, config.Share.RpcRegisterName.Friend)
-    messageRpc := rpcclient.NewMessage(disCov, config.Share.RpcRegisterName.Msg)
-    conversationRpc := rpcclient.NewConversation(disCov, config.Share.RpcRegisterName.Conversation)
-    authRpc := rpcclient.NewAuth(disCov, config.Share.RpcRegisterName.Auth)
-    thirdRpc := rpcclient.NewThird(disCov, config.Share.RpcRegisterName.Third, config.API.Prometheus.GrafanaURL)
     switch config.API.Api.CompressionLevel {
     case NoCompression:
     case DefaultCompression:
@@ -74,10 +95,9 @@ func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.En
     case BestSpeed:
         r.Use(gzip.Gzip(gzip.BestSpeed))
     }
-    r.Use(prommetricsGin(), gin.RecoveryWithWriter(gin.DefaultErrorWriter, mw.GinPanicErr), mw.CorsHandler(), mw.GinParseOperationID(), GinParseToken(authRpc))
-    u := NewUserApi(*userRpc)
-    m := NewMessageApi(messageRpc, userRpc, config.Share.IMAdminUserID)
-    j := jssdk.NewJSSdkApi(userRpc.Client, friendRpc.Client, groupRpc.Client, messageRpc.Client, conversationRpc.Client)
+    r.Use(prommetricsGin(), gin.RecoveryWithWriter(gin.DefaultErrorWriter, mw.GinPanicErr), mw.CorsHandler(), mw.GinParseOperationID(), GinParseToken(rpcli.NewAuthClient(authConn)))
+    u := NewUserApi(user.NewUserClient(userConn), client, config.Share.RpcRegisterName)
+    m := NewMessageApi(msg.NewMsgClient(msgConn), rpcli.NewUserClient(userConn), config.Share.IMAdminUserID)
     userRouterGroup := r.Group("/user")
     {
         userRouterGroup.POST("/user_register", u.UserRegister)
@@ -105,9 +125,9 @@ func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.En
         userRouterGroup.POST("/search_notification_account", u.SearchNotificationAccount)
     }
     // friend routing group
-    friendRouterGroup := r.Group("/friend")
     {
-        f := NewFriendApi(*friendRpc)
+        f := NewFriendApi(relation.NewFriendClient(friendConn))
+        friendRouterGroup := r.Group("/friend")
         friendRouterGroup.POST("/delete_friend", f.DeleteFriend)
         friendRouterGroup.POST("/get_friend_apply_list", f.GetFriendApplyList)
         friendRouterGroup.POST("/get_designated_friend_apply", f.GetDesignatedFriendsApply)
@@ -130,9 +150,10 @@ func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.En
         friendRouterGroup.POST("/get_incremental_friends", f.GetIncrementalFriends)
         friendRouterGroup.POST("/get_full_friend_user_ids", f.GetFullFriendUserIDs)
     }
-    g := NewGroupApi(*groupRpc)
-    groupRouterGroup := r.Group("/group")
+    g := NewGroupApi(group.NewGroupClient(groupConn))
     {
+        groupRouterGroup := r.Group("/group")
         groupRouterGroup.POST("/create_group", g.CreateGroup)
         groupRouterGroup.POST("/set_group_info", g.SetGroupInfo)
         groupRouterGroup.POST("/set_group_info_ex", g.SetGroupInfoEx)
@@ -166,18 +187,19 @@ func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.En
         groupRouterGroup.POST("/get_full_join_group_ids", g.GetFullJoinGroupIDs)
     }
     // certificate
-    authRouterGroup := r.Group("/auth")
     {
-        a := NewAuthApi(*authRpc)
+        a := NewAuthApi(pbAuth.NewAuthClient(authConn))
+        authRouterGroup := r.Group("/auth")
        authRouterGroup.POST("/get_admin_token", a.GetAdminToken)
         authRouterGroup.POST("/get_user_token", a.GetUserToken)
         authRouterGroup.POST("/parse_token", a.ParseToken)
         authRouterGroup.POST("/force_logout", a.ForceLogout)
     }
     // Third service
-    thirdGroup := r.Group("/third")
     {
-        t := NewThirdApi(*thirdRpc)
+        t := NewThirdApi(third.NewThirdClient(thirdConn), config.API.Prometheus.GrafanaURL)
+        thirdGroup := r.Group("/third")
         thirdGroup.GET("/prometheus", t.GetPrometheus)
         thirdGroup.POST("/fcm_update_token", t.FcmUpdateToken)
         thirdGroup.POST("/set_app_badge", t.SetAppBadge)
@@ -200,8 +222,8 @@ func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.En
         objectGroup.GET("/*name", t.ObjectRedirect)
     }
     // Message
-    msgGroup := r.Group("/msg")
     {
+        msgGroup := r.Group("/msg")
         msgGroup.POST("/newest_seq", m.GetSeq)
         msgGroup.POST("/search_msg", m.SearchMsg)
         msgGroup.POST("/send_msg", m.SendMessage)
@@ -224,9 +246,9 @@ func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.En
         msgGroup.POST("/get_server_time", m.GetServerTime)
     }
     // Conversation
-    conversationGroup := r.Group("/conversation")
     {
-        c := NewConversationApi(*conversationRpc)
+        c := NewConversationApi(conversation.NewConversationClient(conversationConn))
+        conversationGroup := r.Group("/conversation")
         conversationGroup.POST("/get_sorted_conversation_list", c.GetSortedConversationList)
         conversationGroup.POST("/get_all_conversations", c.GetAllConversations)
         conversationGroup.POST("/get_conversation", c.GetConversation)
@@ -240,22 +262,40 @@ func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.En
         conversationGroup.POST("/get_pinned_conversation_ids", c.GetPinnedConversationIDs)
     }
-    statisticsGroup := r.Group("/statistics")
     {
+        statisticsGroup := r.Group("/statistics")
         statisticsGroup.POST("/user/register", u.UserRegisterCount)
         statisticsGroup.POST("/user/active", m.GetActiveUser)
         statisticsGroup.POST("/group/create", g.GroupCreateCount)
         statisticsGroup.POST("/group/active", m.GetActiveGroup)
     }
-    jssdk := r.Group("/jssdk")
-    jssdk.POST("/get_conversations", j.GetConversations)
-    jssdk.POST("/get_active_conversations", j.GetActiveConversations)
-    return r
+    {
+        j := jssdk.NewJSSdkApi(rpcli.NewUserClient(userConn), rpcli.NewRelationClient(friendConn),
+            rpcli.NewGroupClient(groupConn), rpcli.NewConversationClient(conversationConn), rpcli.NewMsgClient(msgConn))
+        jssdk := r.Group("/jssdk")
+        jssdk.POST("/get_conversations", j.GetConversations)
+        jssdk.POST("/get_active_conversations", j.GetActiveConversations)
+    }
+    {
+        pd := NewPrometheusDiscoveryApi(config, client)
+        proDiscoveryGroup := r.Group("/prometheus_discovery", pd.Enable)
+        proDiscoveryGroup.GET("/api", pd.Api)
+        proDiscoveryGroup.GET("/user", pd.User)
+        proDiscoveryGroup.GET("/group", pd.Group)
+        proDiscoveryGroup.GET("/msg", pd.Msg)
+        proDiscoveryGroup.GET("/friend", pd.Friend)
+        proDiscoveryGroup.GET("/conversation", pd.Conversation)
+        proDiscoveryGroup.GET("/third", pd.Third)
+        proDiscoveryGroup.GET("/auth", pd.Auth)
+        proDiscoveryGroup.GET("/push", pd.Push)
+        proDiscoveryGroup.GET("/msg_gateway", pd.MessageGateway)
+        proDiscoveryGroup.GET("/msg_transfer", pd.MessageTransfer)
+    }
+    return r, nil
 }
-func GinParseToken(authRPC *rpcclient.Auth) gin.HandlerFunc {
+func GinParseToken(authClient *rpcli.AuthClient) gin.HandlerFunc {
     return func(c *gin.Context) {
         switch c.Request.Method {
         case http.MethodPost:
@@ -273,7 +313,7 @@ func GinParseToken(authRPC *rpcclient.Auth) gin.HandlerFunc {
             c.Abort()
             return
         }
-        resp, err := authRPC.ParseToken(c, token)
+        resp, err := authClient.ParseToken(c, token)
         if err != nil {
             apiresp.GinError(c, err)
             c.Abort()
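Because newGinRouter now dials every dependent RPC service up front and returns an error, a broken service-discovery state surfaces once at startup rather than on the first request through a lazily built client. A hedged sketch of caller-side wiring in the same package (the function name and listen address below are assumptions for illustration, not part of this diff):

// Sketch only: how a caller adapts to the new (ctx, client, config) signature.
func runAPIServer(ctx context.Context, client discovery.SvcDiscoveryRegistry, config *Config) error {
    router, err := newGinRouter(ctx, client, config)
    if err != nil {
        // Any unreachable RPC service is reported here, at boot.
        return err
    }
    return router.Run(":10002") // assumed listen address
}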
-32
@@ -1,32 +0,0 @@
// Copyright © 2023 OpenIM. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package api

import (
    "github.com/gin-gonic/gin"
    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
    "github.com/openimsdk/protocol/user"
    "github.com/openimsdk/tools/a2r"
)

type StatisticsApi rpcclient.User

func NewStatisticsApi(client rpcclient.User) StatisticsApi {
    return StatisticsApi(client)
}

func (s *StatisticsApi) UserRegister(c *gin.Context) {
    a2r.Call(user.UserClient.UserRegisterCount, s.Client, c)
}
+19 -17
@@ -24,25 +24,27 @@ import (
"strings" "strings"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/third" "github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/a2r" "github.com/openimsdk/tools/a2r"
"github.com/openimsdk/tools/errs" "github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/mcontext" "github.com/openimsdk/tools/mcontext"
) )
type ThirdApi rpcclient.Third type ThirdApi struct {
GrafanaUrl string
Client third.ThirdClient
}
func NewThirdApi(client rpcclient.Third) ThirdApi { func NewThirdApi(client third.ThirdClient, grafanaUrl string) ThirdApi {
return ThirdApi(client) return ThirdApi{Client: client, GrafanaUrl: grafanaUrl}
} }
func (o *ThirdApi) FcmUpdateToken(c *gin.Context) { func (o *ThirdApi) FcmUpdateToken(c *gin.Context) {
a2r.Call(third.ThirdClient.FcmUpdateToken, o.Client, c) a2r.Call(c, third.ThirdClient.FcmUpdateToken, o.Client)
} }
func (o *ThirdApi) SetAppBadge(c *gin.Context) { func (o *ThirdApi) SetAppBadge(c *gin.Context) {
a2r.Call(third.ThirdClient.SetAppBadge, o.Client, c) a2r.Call(c, third.ThirdClient.SetAppBadge, o.Client)
} }
// #################### s3 #################### // #################### s3 ####################
@@ -77,44 +79,44 @@ func setURLPrefix(c *gin.Context, urlPrefix *string) error {
} }
func (o *ThirdApi) PartLimit(c *gin.Context) { func (o *ThirdApi) PartLimit(c *gin.Context) {
a2r.Call(third.ThirdClient.PartLimit, o.Client, c) a2r.Call(c, third.ThirdClient.PartLimit, o.Client)
} }
func (o *ThirdApi) PartSize(c *gin.Context) { func (o *ThirdApi) PartSize(c *gin.Context) {
a2r.Call(third.ThirdClient.PartSize, o.Client, c) a2r.Call(c, third.ThirdClient.PartSize, o.Client)
} }
func (o *ThirdApi) InitiateMultipartUpload(c *gin.Context) { func (o *ThirdApi) InitiateMultipartUpload(c *gin.Context) {
opt := setURLPrefixOption(third.ThirdClient.InitiateMultipartUpload, func(req *third.InitiateMultipartUploadReq) error { opt := setURLPrefixOption(third.ThirdClient.InitiateMultipartUpload, func(req *third.InitiateMultipartUploadReq) error {
return setURLPrefix(c, &req.UrlPrefix) return setURLPrefix(c, &req.UrlPrefix)
}) })
a2r.Call(third.ThirdClient.InitiateMultipartUpload, o.Client, c, opt) a2r.Call(c, third.ThirdClient.InitiateMultipartUpload, o.Client, opt)
} }
func (o *ThirdApi) AuthSign(c *gin.Context) { func (o *ThirdApi) AuthSign(c *gin.Context) {
a2r.Call(third.ThirdClient.AuthSign, o.Client, c) a2r.Call(c, third.ThirdClient.AuthSign, o.Client)
} }
func (o *ThirdApi) CompleteMultipartUpload(c *gin.Context) { func (o *ThirdApi) CompleteMultipartUpload(c *gin.Context) {
opt := setURLPrefixOption(third.ThirdClient.CompleteMultipartUpload, func(req *third.CompleteMultipartUploadReq) error { opt := setURLPrefixOption(third.ThirdClient.CompleteMultipartUpload, func(req *third.CompleteMultipartUploadReq) error {
return setURLPrefix(c, &req.UrlPrefix) return setURLPrefix(c, &req.UrlPrefix)
}) })
a2r.Call(third.ThirdClient.CompleteMultipartUpload, o.Client, c, opt) a2r.Call(c, third.ThirdClient.CompleteMultipartUpload, o.Client, opt)
} }
func (o *ThirdApi) AccessURL(c *gin.Context) { func (o *ThirdApi) AccessURL(c *gin.Context) {
a2r.Call(third.ThirdClient.AccessURL, o.Client, c) a2r.Call(c, third.ThirdClient.AccessURL, o.Client)
} }
func (o *ThirdApi) InitiateFormData(c *gin.Context) { func (o *ThirdApi) InitiateFormData(c *gin.Context) {
a2r.Call(third.ThirdClient.InitiateFormData, o.Client, c) a2r.Call(c, third.ThirdClient.InitiateFormData, o.Client)
} }
func (o *ThirdApi) CompleteFormData(c *gin.Context) { func (o *ThirdApi) CompleteFormData(c *gin.Context) {
opt := setURLPrefixOption(third.ThirdClient.CompleteFormData, func(req *third.CompleteFormDataReq) error { opt := setURLPrefixOption(third.ThirdClient.CompleteFormData, func(req *third.CompleteFormDataReq) error {
return setURLPrefix(c, &req.UrlPrefix) return setURLPrefix(c, &req.UrlPrefix)
}) })
a2r.Call(third.ThirdClient.CompleteFormData, o.Client, c, opt) a2r.Call(c, third.ThirdClient.CompleteFormData, o.Client, opt)
} }
func (o *ThirdApi) ObjectRedirect(c *gin.Context) { func (o *ThirdApi) ObjectRedirect(c *gin.Context) {
@@ -156,15 +158,15 @@ func (o *ThirdApi) ObjectRedirect(c *gin.Context) {
// #################### logs ####################. // #################### logs ####################.
func (o *ThirdApi) UploadLogs(c *gin.Context) { func (o *ThirdApi) UploadLogs(c *gin.Context) {
a2r.Call(third.ThirdClient.UploadLogs, o.Client, c) a2r.Call(c, third.ThirdClient.UploadLogs, o.Client)
} }
func (o *ThirdApi) DeleteLogs(c *gin.Context) { func (o *ThirdApi) DeleteLogs(c *gin.Context) {
a2r.Call(third.ThirdClient.DeleteLogs, o.Client, c) a2r.Call(c, third.ThirdClient.DeleteLogs, o.Client)
} }
func (o *ThirdApi) SearchLogs(c *gin.Context) { func (o *ThirdApi) SearchLogs(c *gin.Context) {
a2r.Call(third.ThirdClient.SearchLogs, o.Client, c) a2r.Call(c, third.ThirdClient.SearchLogs, o.Client)
} }
func (o *ThirdApi) GetPrometheus(c *gin.Context) { func (o *ThirdApi) GetPrometheus(c *gin.Context) {
+31 -26
@@ -16,52 +16,57 @@ package api
 import (
     "github.com/gin-gonic/gin"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
+    "github.com/openimsdk/open-im-server/v3/pkg/common/config"
     "github.com/openimsdk/protocol/constant"
     "github.com/openimsdk/protocol/msggateway"
     "github.com/openimsdk/protocol/user"
     "github.com/openimsdk/tools/a2r"
     "github.com/openimsdk/tools/apiresp"
+    "github.com/openimsdk/tools/discovery"
     "github.com/openimsdk/tools/errs"
     "github.com/openimsdk/tools/log"
 )
-type UserApi rpcclient.User
+type UserApi struct {
+    Client user.UserClient
+    discov discovery.SvcDiscoveryRegistry
+    config config.RpcRegisterName
+}
-func NewUserApi(client rpcclient.User) UserApi {
-    return UserApi(client)
+func NewUserApi(client user.UserClient, discov discovery.SvcDiscoveryRegistry, config config.RpcRegisterName) UserApi {
+    return UserApi{Client: client, discov: discov, config: config}
 }
 func (u *UserApi) UserRegister(c *gin.Context) {
-    a2r.Call(user.UserClient.UserRegister, u.Client, c)
+    a2r.Call(c, user.UserClient.UserRegister, u.Client)
 }
 // UpdateUserInfo is deprecated. Use UpdateUserInfoEx
 func (u *UserApi) UpdateUserInfo(c *gin.Context) {
-    a2r.Call(user.UserClient.UpdateUserInfo, u.Client, c)
+    a2r.Call(c, user.UserClient.UpdateUserInfo, u.Client)
 }
 func (u *UserApi) UpdateUserInfoEx(c *gin.Context) {
-    a2r.Call(user.UserClient.UpdateUserInfoEx, u.Client, c)
+    a2r.Call(c, user.UserClient.UpdateUserInfoEx, u.Client)
 }
 func (u *UserApi) SetGlobalRecvMessageOpt(c *gin.Context) {
-    a2r.Call(user.UserClient.SetGlobalRecvMessageOpt, u.Client, c)
+    a2r.Call(c, user.UserClient.SetGlobalRecvMessageOpt, u.Client)
 }
 func (u *UserApi) GetUsersPublicInfo(c *gin.Context) {
-    a2r.Call(user.UserClient.GetDesignateUsers, u.Client, c)
+    a2r.Call(c, user.UserClient.GetDesignateUsers, u.Client)
 }
 func (u *UserApi) GetAllUsersID(c *gin.Context) {
-    a2r.Call(user.UserClient.GetAllUserID, u.Client, c)
+    a2r.Call(c, user.UserClient.GetAllUserID, u.Client)
 }
 func (u *UserApi) AccountCheck(c *gin.Context) {
-    a2r.Call(user.UserClient.AccountCheck, u.Client, c)
+    a2r.Call(c, user.UserClient.AccountCheck, u.Client)
 }
 func (u *UserApi) GetUsers(c *gin.Context) {
-    a2r.Call(user.UserClient.GetPaginationUsers, u.Client, c)
+    a2r.Call(c, user.UserClient.GetPaginationUsers, u.Client)
 }
 // GetUsersOnlineStatus Get user online status.
@@ -71,7 +76,7 @@ func (u *UserApi) GetUsersOnlineStatus(c *gin.Context) {
         apiresp.GinError(c, err)
         return
     }
-    conns, err := u.Discov.GetConns(c, u.MessageGateWayRpcName)
+    conns, err := u.discov.GetConns(c, u.config.MessageGateway)
     if err != nil {
         apiresp.GinError(c, err)
         return
@@ -122,7 +127,7 @@ func (u *UserApi) GetUsersOnlineStatus(c *gin.Context) {
 }
 func (u *UserApi) UserRegisterCount(c *gin.Context) {
-    a2r.Call(user.UserClient.UserRegisterCount, u.Client, c)
+    a2r.Call(c, user.UserClient.UserRegisterCount, u.Client)
 }
 // GetUsersOnlineTokenDetail Get user online token details.
@@ -135,7 +140,7 @@ func (u *UserApi) GetUsersOnlineTokenDetail(c *gin.Context) {
         apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
         return
     }
-    conns, err := u.Discov.GetConns(c, u.MessageGateWayRpcName)
+    conns, err := u.discov.GetConns(c, u.config.MessageGateway)
     if err != nil {
         apiresp.GinError(c, err)
         return
@@ -188,52 +193,52 @@ func (u *UserApi) GetUsersOnlineTokenDetail(c *gin.Context) {
 // SubscriberStatus Presence status of subscribed users.
 func (u *UserApi) SubscriberStatus(c *gin.Context) {
-    a2r.Call(user.UserClient.SubscribeOrCancelUsersStatus, u.Client, c)
+    a2r.Call(c, user.UserClient.SubscribeOrCancelUsersStatus, u.Client)
 }
 // GetUserStatus Get the online status of the user.
 func (u *UserApi) GetUserStatus(c *gin.Context) {
-    a2r.Call(user.UserClient.GetUserStatus, u.Client, c)
+    a2r.Call(c, user.UserClient.GetUserStatus, u.Client)
 }
 // GetSubscribeUsersStatus Get the online status of subscribers.
 func (u *UserApi) GetSubscribeUsersStatus(c *gin.Context) {
-    a2r.Call(user.UserClient.GetSubscribeUsersStatus, u.Client, c)
+    a2r.Call(c, user.UserClient.GetSubscribeUsersStatus, u.Client)
 }
 // ProcessUserCommandAdd user general function add.
 func (u *UserApi) ProcessUserCommandAdd(c *gin.Context) {
-    a2r.Call(user.UserClient.ProcessUserCommandAdd, u.Client, c)
+    a2r.Call(c, user.UserClient.ProcessUserCommandAdd, u.Client)
 }
 // ProcessUserCommandDelete user general function delete.
 func (u *UserApi) ProcessUserCommandDelete(c *gin.Context) {
-    a2r.Call(user.UserClient.ProcessUserCommandDelete, u.Client, c)
+    a2r.Call(c, user.UserClient.ProcessUserCommandDelete, u.Client)
 }
 // ProcessUserCommandUpdate user general function update.
 func (u *UserApi) ProcessUserCommandUpdate(c *gin.Context) {
-    a2r.Call(user.UserClient.ProcessUserCommandUpdate, u.Client, c)
+    a2r.Call(c, user.UserClient.ProcessUserCommandUpdate, u.Client)
 }
 // ProcessUserCommandGet user general function get.
 func (u *UserApi) ProcessUserCommandGet(c *gin.Context) {
-    a2r.Call(user.UserClient.ProcessUserCommandGet, u.Client, c)
+    a2r.Call(c, user.UserClient.ProcessUserCommandGet, u.Client)
 }
 // ProcessUserCommandGet user general function get all.
 func (u *UserApi) ProcessUserCommandGetAll(c *gin.Context) {
-    a2r.Call(user.UserClient.ProcessUserCommandGetAll, u.Client, c)
+    a2r.Call(c, user.UserClient.ProcessUserCommandGetAll, u.Client)
 }
 func (u *UserApi) AddNotificationAccount(c *gin.Context) {
-    a2r.Call(user.UserClient.AddNotificationAccount, u.Client, c)
+    a2r.Call(c, user.UserClient.AddNotificationAccount, u.Client)
 }
 func (u *UserApi) UpdateNotificationAccountInfo(c *gin.Context) {
-    a2r.Call(user.UserClient.UpdateNotificationAccountInfo, u.Client, c)
+    a2r.Call(c, user.UserClient.UpdateNotificationAccountInfo, u.Client)
 }
 func (u *UserApi) SearchNotificationAccount(c *gin.Context) {
-    a2r.Call(user.UserClient.SearchNotificationAccount, u.Client, c)
+    a2r.Call(c, user.UserClient.SearchNotificationAccount, u.Client)
 }
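GetUsersOnlineStatus and GetUsersOnlineTokenDetail keep the same fan-out shape under the new field names: resolve every message-gateway connection, query each, and union the results, since each gateway only sees the websocket sessions it holds. A condensed sketch of that loop (the helper name is invented; the msggateway proto types are assumed to match the ones the real handler uses):

// Sketch only: fan a query out to every registered message gateway.
func (u *UserApi) onlineStatusFanOut(c *gin.Context, userIDs []string) ([]*msggateway.GetUsersOnlineStatusResp_SuccessResult, error) {
    conns, err := u.discov.GetConns(c, u.config.MessageGateway)
    if err != nil {
        return nil, err
    }
    var results []*msggateway.GetUsersOnlineStatusResp_SuccessResult
    for _, conn := range conns {
        resp, err := msggateway.NewMsgGatewayClient(conn).GetUsersOnlineStatus(c,
            &msggateway.GetUsersOnlineStatusReq{UserIDs: userIDs})
        if err != nil {
            return nil, err
        }
        // Each gateway reports only its own sessions; the union is the global view.
        results = append(results, resp.SuccessResult...)
    }
    return results, nil
}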
+4 -4
@@ -18,8 +18,6 @@ import (
"context" "context"
"encoding/json" "encoding/json"
"fmt" "fmt"
"github.com/openimsdk/tools/mw"
"runtime/debug"
"sync" "sync"
"sync/atomic" "sync/atomic"
"time" "time"
@@ -133,7 +131,7 @@ func (c *Client) readMessage() {
defer func() { defer func() {
if r := recover(); r != nil { if r := recover(); r != nil {
c.closedErr = ErrPanic c.closedErr = ErrPanic
fmt.Println("socket have panic err:", r, string(debug.Stack())) log.ZPanic(c.ctx, "socket have panic err:", errs.ErrPanic(r))
} }
c.close() c.close()
}() }()
@@ -238,6 +236,8 @@ func (c *Client) handleMessage(message []byte) error {
resp, messageErr = c.longConnServer.GetSeqMessage(ctx, binaryReq) resp, messageErr = c.longConnServer.GetSeqMessage(ctx, binaryReq)
case WSGetConvMaxReadSeq: case WSGetConvMaxReadSeq:
resp, messageErr = c.longConnServer.GetConversationsHasReadAndMaxSeq(ctx, binaryReq) resp, messageErr = c.longConnServer.GetConversationsHasReadAndMaxSeq(ctx, binaryReq)
case WsPullConvLastMessage:
resp, messageErr = c.longConnServer.GetLastMessage(ctx, binaryReq)
case WsLogoutMsg: case WsLogoutMsg:
resp, messageErr = c.longConnServer.UserLogout(ctx, binaryReq) resp, messageErr = c.longConnServer.UserLogout(ctx, binaryReq)
case WsSetBackgroundStatus: case WsSetBackgroundStatus:
@@ -378,7 +378,7 @@ func (c *Client) activeHeartbeat(ctx context.Context) {
go func() { go func() {
defer func() { defer func() {
if r := recover(); r != nil { if r := recover(); r != nil {
mw.PanicStackToLog(ctx, r) log.ZPanic(ctx, "activeHeartbeat Panic", errs.ErrPanic(r))
} }
}() }()
log.ZDebug(ctx, "server initiative send heartbeat start.") log.ZDebug(ctx, "server initiative send heartbeat start.")
+1
@@ -47,6 +47,7 @@ const (
 WSSendSignalMsg = 1004
 WSPullMsg = 1005
 WSGetConvMaxReadSeq = 1006
+WsPullConvLastMessage = 1007
 WSPushMsg = 2001
 WSKickOnlineMsg = 2002
 WsLogoutMsg = 2003
+17 -15
@@ -18,10 +18,11 @@ import (
"context" "context"
"sync/atomic" "sync/atomic"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/authverify" "github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs" "github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/startrpc" "github.com/openimsdk/open-im-server/v3/pkg/common/startrpc"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant" "github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway" "github.com/openimsdk/protocol/msggateway"
"github.com/openimsdk/protocol/sdkws" "github.com/openimsdk/protocol/sdkws"
@@ -35,9 +36,15 @@ import (
) )
func (s *Server) InitServer(ctx context.Context, config *Config, disCov discovery.SvcDiscoveryRegistry, server *grpc.Server) error { func (s *Server) InitServer(ctx context.Context, config *Config, disCov discovery.SvcDiscoveryRegistry, server *grpc.Server) error {
s.LongConnServer.SetDiscoveryRegistry(disCov, config) userConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return err
}
s.userClient = rpcli.NewUserClient(userConn)
if err := s.LongConnServer.SetDiscoveryRegistry(ctx, disCov, config); err != nil {
return err
}
msggateway.RegisterMsgGatewayServer(server, s) msggateway.RegisterMsgGatewayServer(server, s)
s.userRcp = rpcclient.NewUserRpcClient(disCov, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
if s.ready != nil { if s.ready != nil {
return s.ready(s) return s.ready(s)
} }
@@ -47,31 +54,35 @@ func (s *Server) InitServer(ctx context.Context, config *Config, disCov discover
func (s *Server) Start(ctx context.Context, index int, conf *Config) error { func (s *Server) Start(ctx context.Context, index int, conf *Config) error {
return startrpc.Start(ctx, &conf.Discovery, &conf.MsgGateway.Prometheus, conf.MsgGateway.ListenIP, return startrpc.Start(ctx, &conf.Discovery, &conf.MsgGateway.Prometheus, conf.MsgGateway.ListenIP,
conf.MsgGateway.RPC.RegisterIP, conf.MsgGateway.RPC.RegisterIP,
conf.MsgGateway.RPC.AutoSetPorts,
conf.MsgGateway.RPC.Ports, index, conf.MsgGateway.RPC.Ports, index,
conf.Share.RpcRegisterName.MessageGateway, conf.Share.RpcRegisterName.MessageGateway,
&conf.Share, &conf.Share,
conf, conf,
[]string{
conf.Share.RpcRegisterName.MessageGateway,
},
s.InitServer, s.InitServer,
) )
} }
type Server struct { type Server struct {
msggateway.UnimplementedMsgGatewayServer
rpcPort int rpcPort int
LongConnServer LongConnServer LongConnServer LongConnServer
config *Config config *Config
pushTerminal map[int]struct{} pushTerminal map[int]struct{}
ready func(srv *Server) error ready func(srv *Server) error
userRcp rpcclient.UserRpcClient
queue *memamq.MemoryQueue queue *memamq.MemoryQueue
userClient *rpcli.UserClient
} }
func (s *Server) SetLongConnServer(LongConnServer LongConnServer) { func (s *Server) SetLongConnServer(LongConnServer LongConnServer) {
s.LongConnServer = LongConnServer s.LongConnServer = LongConnServer
} }
func NewServer(rpcPort int, longConnServer LongConnServer, conf *Config, ready func(srv *Server) error) *Server { func NewServer(longConnServer LongConnServer, conf *Config, ready func(srv *Server) error) *Server {
s := &Server{ s := &Server{
rpcPort: rpcPort,
LongConnServer: longConnServer, LongConnServer: longConnServer,
pushTerminal: make(map[int]struct{}), pushTerminal: make(map[int]struct{}),
config: conf, config: conf,
@@ -83,10 +94,6 @@ func NewServer(rpcPort int, longConnServer LongConnServer, conf *Config, ready f
return s return s
} }
func (s *Server) OnlinePushMsg(context context.Context, req *msggateway.OnlinePushMsgReq) (*msggateway.OnlinePushMsgResp, error) {
panic("implement me")
}
func (s *Server) GetUsersOnlineStatus(ctx context.Context, req *msggateway.GetUsersOnlineStatusReq) (*msggateway.GetUsersOnlineStatusResp, error) { func (s *Server) GetUsersOnlineStatus(ctx context.Context, req *msggateway.GetUsersOnlineStatusReq) (*msggateway.GetUsersOnlineStatusResp, error) {
if !authverify.IsAppManagerUid(ctx, s.config.Share.IMAdminUserID) { if !authverify.IsAppManagerUid(ctx, s.config.Share.IMAdminUserID) {
return nil, errs.ErrNoPermission.WrapMsg("only app manager") return nil, errs.ErrNoPermission.WrapMsg("only app manager")
@@ -120,11 +127,6 @@ func (s *Server) GetUsersOnlineStatus(ctx context.Context, req *msggateway.GetUs
return &resp, nil return &resp, nil
} }
func (s *Server) OnlineBatchPushOneMsg(ctx context.Context, req *msggateway.OnlineBatchPushOneMsgReq) (*msggateway.OnlineBatchPushOneMsgResp, error) {
// todo implement
return nil, nil
}
func (s *Server) pushToUser(ctx context.Context, userID string, msgData *sdkws.MsgData) *msggateway.SingleMsgToUserResults { func (s *Server) pushToUser(ctx context.Context, userID string, msgData *sdkws.MsgData) *msggateway.SingleMsgToUserResults {
clients, ok := s.LongConnServer.GetUserAllCons(userID) clients, ok := s.LongConnServer.GetUserAllCons(userID)
if !ok { if !ok {
+9 -11
@@ -16,13 +16,13 @@ package msggateway
 import (
     "context"
+    "time"
     "github.com/openimsdk/open-im-server/v3/pkg/common/config"
     "github.com/openimsdk/open-im-server/v3/pkg/rpccache"
     "github.com/openimsdk/tools/db/redisutil"
-    "github.com/openimsdk/tools/utils/datautil"
-    "time"
     "github.com/openimsdk/tools/log"
+    "github.com/openimsdk/tools/utils/datautil"
 )
 type Config struct {
@@ -35,16 +35,13 @@
 // Start run ws server.
 func Start(ctx context.Context, index int, conf *Config) error {
-    log.CInfo(ctx, "MSG-GATEWAY server is initializing", "rpcPorts", conf.MsgGateway.RPC.Ports,
+    log.CInfo(ctx, "MSG-GATEWAY server is initializing", "autoSetPorts", conf.MsgGateway.RPC.AutoSetPorts,
+        "rpcPorts", conf.MsgGateway.RPC.Ports,
         "wsPort", conf.MsgGateway.LongConnSvr.Ports, "prometheusPorts", conf.MsgGateway.Prometheus.Ports)
     wsPort, err := datautil.GetElemByIndex(conf.MsgGateway.LongConnSvr.Ports, index)
     if err != nil {
         return err
     }
-    rpcPort, err := datautil.GetElemByIndex(conf.MsgGateway.RPC.Ports, index)
-    if err != nil {
-        return err
-    }
     rdb, err := redisutil.NewRedisClient(ctx, conf.RedisConfig.Build())
     if err != nil {
         return err
@@ -57,9 +54,10 @@ func Start(ctx context.Context, index int, conf *Config) error {
         WithMessageMaxMsgLength(conf.MsgGateway.LongConnSvr.WebsocketMaxMsgLen),
     )
-    hubServer := NewServer(rpcPort, longServer, conf, func(srv *Server) error {
-        longServer.online, _ = rpccache.NewOnlineCache(srv.userRcp, nil, rdb, false, longServer.subscriberUserOnlineStatusChanges)
-        return nil
+    hubServer := NewServer(longServer, conf, func(srv *Server) error {
+        var err error
+        longServer.online, err = rpccache.NewOnlineCache(srv.userClient, nil, rdb, false, longServer.subscriberUserOnlineStatusChanges)
+        return err
     })
     go longServer.ChangeOnlineStatus(4)
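The ready callback passed to NewServer is the late-initialization hook that finishes wiring once the Server exists; the substantive change here is that the error from rpccache.NewOnlineCache is now returned rather than discarded with `_`. A stripped-down sketch of the pattern (names are illustrative, not from the diff; in the real code the hook runs from InitServer):

// Sketch only: construct first, finish wiring through a hook, and let the
// hook's error abort startup instead of leaving a nil dependency behind.
type server struct {
    ready func(s *server) error
}

func newServer(ready func(s *server) error) (*server, error) {
    s := &server{ready: ready}
    if s.ready != nil {
        if err := s.ready(s); err != nil {
            return nil, err // previously the equivalent error was silently dropped
        }
    }
    return s, nil
}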
+44 -34
@@ -17,17 +17,15 @@ package msggateway
 import (
     "context"
     "encoding/json"
+    "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
     "sync"
     "github.com/go-playground/validator/v10"
     "google.golang.org/protobuf/proto"
-    "github.com/openimsdk/open-im-server/v3/pkg/common/config"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
     "github.com/openimsdk/protocol/msg"
     "github.com/openimsdk/protocol/push"
     "github.com/openimsdk/protocol/sdkws"
-    "github.com/openimsdk/tools/discovery"
     "github.com/openimsdk/tools/errs"
     "github.com/openimsdk/tools/utils/jsonutil"
 )
@@ -102,34 +100,34 @@ func (r *Resp) String() string {
 }
 type MessageHandler interface {
-    GetSeq(context context.Context, data *Req) ([]byte, error)
-    SendMessage(context context.Context, data *Req) ([]byte, error)
-    SendSignalMessage(context context.Context, data *Req) ([]byte, error)
-    PullMessageBySeqList(context context.Context, data *Req) ([]byte, error)
-    GetConversationsHasReadAndMaxSeq(context context.Context, data *Req) ([]byte, error)
-    GetSeqMessage(context context.Context, data *Req) ([]byte, error)
-    UserLogout(context context.Context, data *Req) ([]byte, error)
-    SetUserDeviceBackground(context context.Context, data *Req) ([]byte, bool, error)
+    GetSeq(ctx context.Context, data *Req) ([]byte, error)
+    SendMessage(ctx context.Context, data *Req) ([]byte, error)
+    SendSignalMessage(ctx context.Context, data *Req) ([]byte, error)
+    PullMessageBySeqList(ctx context.Context, data *Req) ([]byte, error)
+    GetConversationsHasReadAndMaxSeq(ctx context.Context, data *Req) ([]byte, error)
+    GetSeqMessage(ctx context.Context, data *Req) ([]byte, error)
+    UserLogout(ctx context.Context, data *Req) ([]byte, error)
+    SetUserDeviceBackground(ctx context.Context, data *Req) ([]byte, bool, error)
+    GetLastMessage(ctx context.Context, data *Req) ([]byte, error)
 }
 var _ MessageHandler = (*GrpcHandler)(nil)
 type GrpcHandler struct {
-    msgRpcClient *rpcclient.MessageRpcClient
-    pushClient   *rpcclient.PushRpcClient
-    validate     *validator.Validate
+    validate   *validator.Validate
+    msgClient  *rpcli.MsgClient
+    pushClient *rpcli.PushMsgServiceClient
 }
-func NewGrpcHandler(validate *validator.Validate, client discovery.SvcDiscoveryRegistry, rpcRegisterName *config.RpcRegisterName) *GrpcHandler {
-    msgRpcClient := rpcclient.NewMessageRpcClient(client, rpcRegisterName.Msg)
-    pushRpcClient := rpcclient.NewPushRpcClient(client, rpcRegisterName.Push)
+func NewGrpcHandler(validate *validator.Validate, msgClient *rpcli.MsgClient, pushClient *rpcli.PushMsgServiceClient) *GrpcHandler {
     return &GrpcHandler{
-        msgRpcClient: &msgRpcClient,
-        pushClient:   &pushRpcClient,
-        validate:     validate,
+        validate:   validate,
+        msgClient:  msgClient,
+        pushClient: pushClient,
     }
 }
-func (g GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
+func (g *GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
     req := sdkws.GetMaxSeqReq{}
     if err := proto.Unmarshal(data.Data, &req); err != nil {
         return nil, errs.WrapMsg(err, "GetSeq: error unmarshaling request", "action", "unmarshal", "dataType", "GetMaxSeqReq")
@@ -137,7 +135,7 @@ func (g GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
     if err := g.validate.Struct(&req); err != nil {
         return nil, errs.WrapMsg(err, "GetSeq: validation failed", "action", "validate", "dataType", "GetMaxSeqReq")
     }
-    resp, err := g.msgRpcClient.GetMaxSeq(ctx, &req)
+    resp, err := g.msgClient.MsgClient.GetMaxSeq(ctx, &req)
     if err != nil {
         return nil, err
     }
@@ -150,7 +148,7 @@ func (g GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
 // SendMessage handles the sending of messages through gRPC. It unmarshals the request data,
 // validates the message, and then sends it using the message RPC client.
-func (g GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error) {
+func (g *GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error) {
     var msgData sdkws.MsgData
     if err := proto.Unmarshal(data.Data, &msgData); err != nil {
         return nil, errs.WrapMsg(err, "SendMessage: error unmarshaling message data", "action", "unmarshal", "dataType", "MsgData")
@@ -161,7 +159,7 @@ func (g GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error)
     }
     req := msg.SendMsgReq{MsgData: &msgData}
-    resp, err := g.msgRpcClient.SendMsg(ctx, &req)
+    resp, err := g.msgClient.MsgClient.SendMsg(ctx, &req)
     if err != nil {
         return nil, err
     }
@@ -174,8 +172,8 @@ func (g GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error)
     return c, nil
 }
-func (g GrpcHandler) SendSignalMessage(context context.Context, data *Req) ([]byte, error) {
-    resp, err := g.msgRpcClient.SendMsg(context, nil)
+func (g *GrpcHandler) SendSignalMessage(ctx context.Context, data *Req) ([]byte, error) {
+    resp, err := g.msgClient.MsgClient.SendMsg(ctx, nil)
     if err != nil {
         return nil, err
     }
@@ -186,7 +184,7 @@ func (g GrpcHandler) SendSignalMessage(context context.Context, data *Req) ([]by
     return c, nil
 }
-func (g GrpcHandler) PullMessageBySeqList(context context.Context, data *Req) ([]byte, error) {
+func (g *GrpcHandler) PullMessageBySeqList(ctx context.Context, data *Req) ([]byte, error) {
     req := sdkws.PullMessageBySeqsReq{}
     if err := proto.Unmarshal(data.Data, &req); err != nil {
         return nil, errs.WrapMsg(err, "err proto unmarshal", "action", "unmarshal", "dataType", "PullMessageBySeqsReq")
@@ -194,7 +192,7 @@ func (g GrpcHandler) PullMessageBySeqList(context context.Context, data *Req) ([
     if err := g.validate.Struct(data); err != nil {
         return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "PullMessageBySeqsReq")
     }
-    resp, err := g.msgRpcClient.PullMessageBySeqList(context, &req)
+    resp, err := g.msgClient.MsgClient.PullMessageBySeqs(ctx, &req)
     if err != nil {
         return nil, err
     }
@@ -205,7 +203,7 @@ func (g GrpcHandler) PullMessageBySeqList(context context.Context, data *Req) ([
     return c, nil
 }
-func (g GrpcHandler) GetConversationsHasReadAndMaxSeq(context context.Context, data *Req) ([]byte, error) {
+func (g *GrpcHandler) GetConversationsHasReadAndMaxSeq(ctx context.Context, data *Req) ([]byte, error) {
     req := msg.GetConversationsHasReadAndMaxSeqReq{}
     if err := proto.Unmarshal(data.Data, &req); err != nil {
         return nil, errs.WrapMsg(err, "err proto unmarshal", "action", "unmarshal", "dataType", "GetConversationsHasReadAndMaxSeq")
@@ -213,7 +211,7 @@ func (g GrpcHandler) GetConversationsHasReadAndMaxSeq(context context.Context, d
     if err := g.validate.Struct(data); err != nil {
         return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "GetConversationsHasReadAndMaxSeq")
     }
-    resp, err := g.msgRpcClient.GetConversationsHasReadAndMaxSeq(context, &req)
+    resp, err := g.msgClient.MsgClient.GetConversationsHasReadAndMaxSeq(ctx, &req)
     if err != nil {
         return nil, err
     }
@@ -224,7 +222,7 @@ func (g GrpcHandler) GetConversationsHasReadAndMaxSeq(context context.Context, d
     return c, nil
 }
-func (g GrpcHandler) GetSeqMessage(context context.Context, data *Req) ([]byte, error) {
+func (g *GrpcHandler) GetSeqMessage(ctx context.Context, data *Req) ([]byte, error) {
     req := msg.GetSeqMessageReq{}
     if err := proto.Unmarshal(data.Data, &req); err != nil {
         return nil, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "GetSeqMessage")
@@ -232,7 +230,7 @@ func (g GrpcHandler) GetSeqMessage(context context.Context, data *Req) ([]byte,
     if err := g.validate.Struct(data); err != nil {
         return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "GetSeqMessage")
     }
-    resp, err := g.msgRpcClient.GetSeqMessage(context, &req)
+    resp, err := g.msgClient.MsgClient.GetSeqMessage(ctx, &req)
     if err != nil {
         return nil, err
     }
@@ -243,12 +241,12 @@ func (g GrpcHandler) GetSeqMessage(context context.Context, data *Req) ([]byte,
     return c, nil
 }
-func (g GrpcHandler) UserLogout(context context.Context, data *Req) ([]byte, error) {
+func (g *GrpcHandler) UserLogout(ctx context.Context, data *Req) ([]byte, error) {
     req := push.DelUserPushTokenReq{}
     if err := proto.Unmarshal(data.Data, &req); err != nil {
         return nil, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "DelUserPushTokenReq")
     }
-    resp, err := g.pushClient.DelUserPushToken(context, &req)
+    resp, err := g.pushClient.PushMsgServiceClient.DelUserPushToken(ctx, &req)
     if err != nil {
         return nil, err
     }
@@ -259,7 +257,7 @@ func (g GrpcHandler) UserLogout(context context.Context, data *Req) ([]byte, err
     return c, nil
 }
-func (g GrpcHandler) SetUserDeviceBackground(_ context.Context, data *Req) ([]byte, bool, error) {
+func (g *GrpcHandler) SetUserDeviceBackground(ctx context.Context, data *Req) ([]byte, bool, error) {
     req := sdkws.SetAppBackgroundStatusReq{}
     if err := proto.Unmarshal(data.Data, &req); err != nil {
         return nil, false, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "SetAppBackgroundStatusReq")
@@ -269,3 +267,15 @@ func (g GrpcHandler) SetUserDeviceBackground(_ context.Context, data *Req) ([]by
     }
     return nil, req.IsBackground, nil
 }
+func (g *GrpcHandler) GetLastMessage(ctx context.Context, data *Req) ([]byte, error) {
+    var req msg.GetLastMessageReq
+    if err := proto.Unmarshal(data.Data, &req); err != nil {
+        return nil, err
+    }
+    resp, err := g.msgClient.GetLastMessage(ctx, &req)
+    if err != nil {
+        return nil, err
+    }
+    return proto.Marshal(resp)
+}
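A plausible client-side counterpart for the new GetLastMessage handler, shown only to illustrate the frame round-trip: marshal a msg.GetLastMessageReq into the envelope's Data and tag it with identifier 1007. The Req envelope fields and the request's field names below are assumptions based on context, not guaranteed by this diff.

// Hypothetical helper: build a WsPullConvLastMessage (1007) frame.
func buildPullLastMessageFrame(userID, operationID string, conversationIDs []string) (*Req, error) {
    data, err := proto.Marshal(&msg.GetLastMessageReq{
        ConversationIDs: conversationIDs, // assumed field name
        UserID:          userID,          // assumed field name
    })
    if err != nil {
        return nil, err
    }
    return &Req{
        ReqIdentifier: WsPullConvLastMessage, // dispatched to GetLastMessage above
        SendID:        userID,
        OperationID:   operationID,
        Data:          data,
    }, nil
}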
+1 -1
@@ -87,7 +87,7 @@ func (ws *WsServer) ChangeOnlineStatus(concurrent int) {
     opIdCtx := mcontext.SetOperationID(context.Background(), operationIDPrefix+strconv.FormatInt(count.Add(1), 10))
     ctx, cancel := context.WithTimeout(opIdCtx, time.Second*5)
     defer cancel()
-    if _, err := ws.userClient.Client.SetUserOnlineStatus(ctx, req); err != nil {
+    if err := ws.userClient.SetUserOnlineStatus(ctx, req); err != nil {
         log.ZError(ctx, "update user online status", err)
     }
     for _, ss := range req.Status {
+36 -15
@@ -3,10 +3,7 @@ package msggateway
import ( import (
"context" "context"
"fmt" "fmt"
"github.com/openimsdk/open-im-server/v3/pkg/common/webhook" "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
pbAuth "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/tools/mcontext"
"net/http" "net/http"
"sync" "sync"
"sync/atomic" "sync/atomic"
@@ -15,12 +12,15 @@ import (
"github.com/go-playground/validator/v10" "github.com/go-playground/validator/v10"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics" "github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs" "github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient" "github.com/openimsdk/open-im-server/v3/pkg/common/webhook"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
pbAuth "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/protocol/constant" "github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway" "github.com/openimsdk/protocol/msggateway"
"github.com/openimsdk/tools/discovery" "github.com/openimsdk/tools/discovery"
"github.com/openimsdk/tools/errs" "github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log" "github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/utils/stringutil" "github.com/openimsdk/tools/utils/stringutil"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
@@ -31,7 +31,7 @@ type LongConnServer interface {
GetUserAllCons(userID string) ([]*Client, bool) GetUserAllCons(userID string) ([]*Client, bool)
GetUserPlatformCons(userID string, platform int) ([]*Client, bool, bool) GetUserPlatformCons(userID string, platform int) ([]*Client, bool, bool)
Validate(s any) error Validate(s any) error
SetDiscoveryRegistry(client discovery.SvcDiscoveryRegistry, config *Config) SetDiscoveryRegistry(ctx context.Context, client discovery.SvcDiscoveryRegistry, config *Config) error
KickUserConn(client *Client) error KickUserConn(client *Client) error
UnRegister(c *Client) UnRegister(c *Client)
SetKickHandlerInfo(i *kickHandler) SetKickHandlerInfo(i *kickHandler)
@@ -56,13 +56,13 @@ type WsServer struct {
handshakeTimeout time.Duration handshakeTimeout time.Duration
writeBufferSize int writeBufferSize int
validate *validator.Validate validate *validator.Validate
userClient *rpcclient.UserRpcClient
authClient *rpcclient.Auth
disCov discovery.SvcDiscoveryRegistry disCov discovery.SvcDiscoveryRegistry
Compressor Compressor
//Encoder //Encoder
MessageHandler MessageHandler
webhookClient *webhook.Client webhookClient *webhook.Client
userClient *rpcli.UserClient
authClient *rpcli.AuthClient
} }
type kickHandler struct { type kickHandler struct {
@@ -71,12 +71,28 @@ type kickHandler struct {
newClient *Client newClient *Client
} }
func (ws *WsServer) SetDiscoveryRegistry(disCov discovery.SvcDiscoveryRegistry, config *Config) { func (ws *WsServer) SetDiscoveryRegistry(ctx context.Context, disCov discovery.SvcDiscoveryRegistry, config *Config) error {
ws.MessageHandler = NewGrpcHandler(ws.validate, disCov, &config.Share.RpcRegisterName) userConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.User)
u := rpcclient.NewUserRpcClient(disCov, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID) if err != nil {
ws.authClient = rpcclient.NewAuth(disCov, config.Share.RpcRegisterName.Auth) return err
ws.userClient = &u }
pushConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.Push)
if err != nil {
return err
}
authConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.Auth)
if err != nil {
return err
}
msgConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return err
}
ws.userClient = rpcli.NewUserClient(userConn)
ws.authClient = rpcli.NewAuthClient(authConn)
ws.MessageHandler = NewGrpcHandler(ws.validate, rpcli.NewMsgClient(msgConn), rpcli.NewPushMsgServiceClient(pushConn))
ws.disCov = disCov ws.disCov = disCov
return nil
} }
//func (ws *WsServer) SetUserOnlineStatus(ctx context.Context, client *Client, status int32) { //func (ws *WsServer) SetUserOnlineStatus(ctx context.Context, client *Client, status int32) {
@@ -311,7 +327,7 @@ func (ws *WsServer) multiTerminalLoginChecker(clientOK bool, oldClients []*Clien
[]string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(), []string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(),
constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()}, constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()},
) )
if _, err := ws.authClient.KickTokens(ctx, kickTokens); err != nil { if err := ws.authClient.KickTokens(ctx, kickTokens); err != nil {
log.ZWarn(newClient.ctx, "kickTokens err", err) log.ZWarn(newClient.ctx, "kickTokens err", err)
} }
} }
@@ -338,7 +354,12 @@ func (ws *WsServer) multiTerminalLoginChecker(clientOK bool, oldClients []*Clien
[]string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(), []string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(),
constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()}, constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()},
) )
if _, err := ws.authClient.InvalidateToken(ctx, newClient.token, newClient.UserID, newClient.PlatformID); err != nil { req := &pbAuth.InvalidateTokenReq{
PreservedToken: newClient.token,
UserID: newClient.UserID,
PlatformID: int32(newClient.PlatformID),
}
if err := ws.authClient.InvalidateToken(ctx, req); err != nil {
log.ZWarn(newClient.ctx, "InvalidateToken err", err, "userID", newClient.UserID, log.ZWarn(newClient.ctx, "InvalidateToken err", err, "userID", newClient.UserID,
"platformID", newClient.PlatformID) "platformID", newClient.PlatformID)
} }
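A shape change worth noting across this whole compare: the old rpcclient wrappers returned a response-and-error pair even when the response carried no data, so call sites read `if _, err := ...`. The rpcli clients return only an error in that case, which is why KickTokens and InvalidateToken above drop the blank identifier and pass the protobuf request (pbAuth.InvalidateTokenReq) directly.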
+76 -25
@@ -18,11 +18,17 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"net"
"net/http" "net/http"
"os" "os"
"os/signal" "os/signal"
"strconv"
"syscall" "syscall"
"github.com/openimsdk/tools/discovery/etcd"
"github.com/openimsdk/tools/utils/jsonutil"
"github.com/openimsdk/tools/utils/network"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics" "github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis" "github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database/mgo" "github.com/openimsdk/open-im-server/v3/pkg/common/storage/database/mgo"
@@ -30,10 +36,10 @@ import (
"github.com/openimsdk/tools/db/redisutil" "github.com/openimsdk/tools/db/redisutil"
"github.com/openimsdk/tools/utils/datautil" "github.com/openimsdk/tools/utils/datautil"
"github.com/openimsdk/open-im-server/v3/pkg/common/config" conf "github.com/openimsdk/open-im-server/v3/pkg/common/config"
discRegister "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister" discRegister "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller" "github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/tools/errs" "github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log" "github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mw" "github.com/openimsdk/tools/mw"
@@ -54,16 +60,17 @@ type MsgTransfer struct {
} }
type Config struct { type Config struct {
-	MsgTransfer    config.MsgTransfer
-	RedisConfig    config.Redis
-	MongodbConfig  config.Mongo
-	KafkaConfig    config.Kafka
-	Share          config.Share
-	WebhooksConfig config.Webhooks
-	Discovery      config.Discovery
+	MsgTransfer    conf.MsgTransfer
+	RedisConfig    conf.Redis
+	MongodbConfig  conf.Mongo
+	KafkaConfig    conf.Kafka
+	Share          conf.Share
+	WebhooksConfig conf.Webhooks
+	Discovery      conf.Discovery
} }
func Start(ctx context.Context, index int, config *Config) error { func Start(ctx context.Context, index int, config *Config) error {
log.CInfo(ctx, "MSG-TRANSFER server is initializing", "prometheusPorts", log.CInfo(ctx, "MSG-TRANSFER server is initializing", "prometheusPorts",
config.MsgTransfer.Prometheus.Ports, "index", index) config.MsgTransfer.Prometheus.Ports, "index", index)
@@ -75,18 +82,18 @@ func Start(ctx context.Context, index int, config *Config) error {
if err != nil { if err != nil {
return err return err
} }
-	client, err := discRegister.NewDiscoveryRegister(&config.Discovery, &config.Share)
+	client, err := discRegister.NewDiscoveryRegister(&config.Discovery, &config.Share, nil)
if err != nil { if err != nil {
return err return err
} }
client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()), client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin"))) grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
msgModel := redis.NewMsgCache(rdb)
msgDocModel, err := mgo.NewMsgMongo(mgocli.GetDB()) msgDocModel, err := mgo.NewMsgMongo(mgocli.GetDB())
if err != nil { if err != nil {
return err return err
} }
msgModel := redis.NewMsgCache(rdb, msgDocModel)
seqConversation, err := mgo.NewSeqConversationMongo(mgocli.GetDB()) seqConversation, err := mgo.NewSeqConversationMongo(mgocli.GetDB())
if err != nil { if err != nil {
return err return err
@@ -101,9 +108,7 @@ func Start(ctx context.Context, index int, config *Config) error {
if err != nil { if err != nil {
return err return err
} }
conversationRpcClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation) historyCH, err := NewOnlineHistoryRedisConsumerHandler(ctx, client, config, msgTransferDatabase)
groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
historyCH, err := NewOnlineHistoryRedisConsumerHandler(&config.KafkaConfig, msgTransferDatabase, &conversationRpcClient, &groupRpcClient)
if err != nil { if err != nil {
return err return err
} }
@@ -119,7 +124,7 @@ func Start(ctx context.Context, index int, config *Config) error {
return msgTransfer.Start(index, config) return msgTransfer.Start(index, config)
} }
-func (m *MsgTransfer) Start(index int, config *Config) error {
+func (m *MsgTransfer) Start(index int, cfg *Config) error {
m.ctx, m.cancel = context.WithCancel(context.Background()) m.ctx, m.cancel = context.WithCancel(context.Background())
var ( var (
netDone = make(chan struct{}, 1) netDone = make(chan struct{}, 1)
@@ -134,21 +139,67 @@ func (m *MsgTransfer) Start(index int, config *Config) error {
return err return err
} }
-	if config.MsgTransfer.Prometheus.Enable {
+	client, err := kdisc.NewDiscoveryRegister(&cfg.Discovery, &cfg.Share, nil)
if err != nil {
return errs.WrapMsg(err, "failed to register discovery service")
}
registerIP, err := network.GetRpcRegisterIP("")
if err != nil {
return err
}
getAutoPort := func() (net.Listener, int, error) {
registerAddr := net.JoinHostPort(registerIP, "0")
listener, err := net.Listen("tcp", registerAddr)
if err != nil {
return nil, 0, errs.WrapMsg(err, "listen err", "registerAddr", registerAddr)
}
_, portStr, _ := net.SplitHostPort(listener.Addr().String())
port, _ := strconv.Atoi(portStr)
return listener, port, nil
}
if cfg.MsgTransfer.Prometheus.AutoSetPorts && cfg.Discovery.Enable != conf.ETCD {
return errs.New("only etcd support autoSetPorts", "RegisterName", "api").Wrap()
}
if cfg.MsgTransfer.Prometheus.Enable {
var (
listener net.Listener
prometheusPort int
)
if cfg.MsgTransfer.Prometheus.AutoSetPorts {
listener, prometheusPort, err = getAutoPort()
if err != nil {
return err
}
etcdClient := client.(*etcd.SvcDiscoveryRegistryImpl).GetClient()
_, err = etcdClient.Put(context.TODO(), prommetrics.BuildDiscoveryKey(prommetrics.MessageTransferKeyName), jsonutil.StructToJsonString(prommetrics.BuildDefaultTarget(registerIP, prometheusPort)))
if err != nil {
return errs.WrapMsg(err, "etcd put err")
}
} else {
prometheusPort, err = datautil.GetElemByIndex(cfg.MsgTransfer.Prometheus.Ports, index)
if err != nil {
return err
}
listener, err = net.Listen("tcp", fmt.Sprintf(":%d", prometheusPort))
if err != nil {
return errs.WrapMsg(err, "listen err", "addr", fmt.Sprintf(":%d", prometheusPort))
}
}
go func() { go func() {
defer func() { defer func() {
if r := recover(); r != nil { if r := recover(); r != nil {
-				mw.PanicStackToLog(m.ctx, r)
+				log.ZPanic(m.ctx, "MsgTransfer Start Panic", errs.ErrPanic(r))
} }
}() }()
-			prometheusPort, err := datautil.GetElemByIndex(config.MsgTransfer.Prometheus.Ports, index)
-			if err != nil {
-				netErr = err
-				netDone <- struct{}{}
-				return
-			}
-			if err := prommetrics.TransferInit(prometheusPort); err != nil && !errors.Is(err, http.ErrServerClosed) {
+			if err := prommetrics.TransferInit(listener); err != nil && !errors.Is(err, http.ErrServerClosed) {
netErr = errs.WrapMsg(err, "prometheus start error", "prometheusPort", prometheusPort) netErr = errs.WrapMsg(err, "prometheus start error", "prometheusPort", prometheusPort)
netDone <- struct{}{} netDone <- struct{}{}
} }
@@ -18,21 +18,22 @@ import (
"context" "context"
"encoding/json" "encoding/json"
"errors" "errors"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics" "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/tools/mw" "github.com/openimsdk/tools/discovery"
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
"time" "time"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/IBM/sarama" "github.com/IBM/sarama"
"github.com/go-redis/redis" "github.com/go-redis/redis"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller" "github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor" "github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/tools/batcher" "github.com/openimsdk/open-im-server/v3/pkg/tools/batcher"
"github.com/openimsdk/protocol/constant" "github.com/openimsdk/protocol/constant"
pbconv "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/sdkws" "github.com/openimsdk/protocol/sdkws"
"github.com/openimsdk/tools/errs" "github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log" "github.com/openimsdk/tools/log"
@@ -69,21 +70,32 @@ type OnlineHistoryRedisConsumerHandler struct {
redisMessageBatches *batcher.Batcher[sarama.ConsumerMessage] redisMessageBatches *batcher.Batcher[sarama.ConsumerMessage]
msgTransferDatabase controller.MsgTransferDatabase msgTransferDatabase controller.MsgTransferDatabase
conversationRpcClient *rpcclient.ConversationRpcClient
groupRpcClient *rpcclient.GroupRpcClient
conversationUserHasReadChan chan *userHasReadSeq conversationUserHasReadChan chan *userHasReadSeq
wg sync.WaitGroup wg sync.WaitGroup
groupClient *rpcli.GroupClient
conversationClient *rpcli.ConversationClient
} }
-func NewOnlineHistoryRedisConsumerHandler(kafkaConf *config.Kafka, database controller.MsgTransferDatabase,
-	conversationRpcClient *rpcclient.ConversationRpcClient, groupRpcClient *rpcclient.GroupRpcClient) (*OnlineHistoryRedisConsumerHandler, error) {
+func NewOnlineHistoryRedisConsumerHandler(ctx context.Context, client discovery.SvcDiscoveryRegistry, config *Config, database controller.MsgTransferDatabase) (*OnlineHistoryRedisConsumerHandler, error) {
+	kafkaConf := config.KafkaConfig
historyConsumerGroup, err := kafka.NewMConsumerGroup(kafkaConf.Build(), kafkaConf.ToRedisGroupID, []string{kafkaConf.ToRedisTopic}, false) historyConsumerGroup, err := kafka.NewMConsumerGroup(kafkaConf.Build(), kafkaConf.ToRedisGroupID, []string{kafkaConf.ToRedisTopic}, false)
if err != nil { if err != nil {
return nil, err return nil, err
} }
groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
if err != nil {
return nil, err
}
conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
if err != nil {
return nil, err
}
var och OnlineHistoryRedisConsumerHandler var och OnlineHistoryRedisConsumerHandler
och.msgTransferDatabase = database och.msgTransferDatabase = database
och.conversationUserHasReadChan = make(chan *userHasReadSeq, hasReadChanBuffer) och.conversationUserHasReadChan = make(chan *userHasReadSeq, hasReadChanBuffer)
och.groupClient = rpcli.NewGroupClient(groupConn)
och.conversationClient = rpcli.NewConversationClient(conversationConn)
och.wg.Add(1) och.wg.Add(1)
b := batcher.New[sarama.ConsumerMessage]( b := batcher.New[sarama.ConsumerMessage](
@@ -103,25 +115,21 @@ func NewOnlineHistoryRedisConsumerHandler(kafkaConf *config.Kafka, database cont
} }
b.Do = och.do b.Do = och.do
och.redisMessageBatches = b och.redisMessageBatches = b
och.conversationRpcClient = conversationRpcClient
och.groupRpcClient = groupRpcClient
och.historyConsumerGroup = historyConsumerGroup och.historyConsumerGroup = historyConsumerGroup
-	return &och, err
+	return &och, nil
} }
func (och *OnlineHistoryRedisConsumerHandler) do(ctx context.Context, channelID int, val *batcher.Msg[sarama.ConsumerMessage]) { func (och *OnlineHistoryRedisConsumerHandler) do(ctx context.Context, channelID int, val *batcher.Msg[sarama.ConsumerMessage]) {
ctx = mcontext.WithTriggerIDContext(ctx, val.TriggerID()) ctx = mcontext.WithTriggerIDContext(ctx, val.TriggerID())
ctxMessages := och.parseConsumerMessages(ctx, val.Val()) ctxMessages := och.parseConsumerMessages(ctx, val.Val())
ctx = withAggregationCtx(ctx, ctxMessages) ctx = withAggregationCtx(ctx, ctxMessages)
log.ZInfo(ctx, "msg arrived channel", "channel id", channelID, "msgList length", len(ctxMessages), log.ZInfo(ctx, "msg arrived channel", "channel id", channelID, "msgList length", len(ctxMessages), "key", val.Key())
"key", val.Key())
och.doSetReadSeq(ctx, ctxMessages) och.doSetReadSeq(ctx, ctxMessages)
storageMsgList, notStorageMsgList, storageNotificationList, notStorageNotificationList := storageMsgList, notStorageMsgList, storageNotificationList, notStorageNotificationList :=
och.categorizeMessageLists(ctxMessages) och.categorizeMessageLists(ctxMessages)
log.ZDebug(ctx, "number of categorized messages", "storageMsgList", len(storageMsgList), "notStorageMsgList", log.ZDebug(ctx, "number of categorized messages", "storageMsgList", len(storageMsgList), "notStorageMsgList",
len(notStorageMsgList), "storageNotificationList", len(storageNotificationList), "notStorageNotificationList", len(notStorageMsgList), "storageNotificationList", len(storageNotificationList), "notStorageNotificationList", len(notStorageNotificationList))
len(notStorageNotificationList))
conversationIDMsg := msgprocessor.GetChatConversationIDByMsg(ctxMessages[0].message) conversationIDMsg := msgprocessor.GetChatConversationIDByMsg(ctxMessages[0].message)
conversationIDNotification := msgprocessor.GetNotificationConversationIDByMsg(ctxMessages[0].message) conversationIDNotification := msgprocessor.GetNotificationConversationIDByMsg(ctxMessages[0].message)
@@ -285,22 +293,27 @@ func (och *OnlineHistoryRedisConsumerHandler) handleMsg(ctx context.Context, key
case constant.ReadGroupChatType: case constant.ReadGroupChatType:
log.ZDebug(ctx, "group chat first create conversation", "conversationID", log.ZDebug(ctx, "group chat first create conversation", "conversationID",
conversationID) conversationID)
userIDs, err := och.groupRpcClient.GetGroupMemberIDs(ctx, msg.GroupID)
userIDs, err := och.groupClient.GetGroupMemberUserIDs(ctx, msg.GroupID)
if err != nil { if err != nil {
log.ZWarn(ctx, "get group member ids error", err, "conversationID", log.ZWarn(ctx, "get group member ids error", err, "conversationID",
conversationID) conversationID)
} else { } else {
log.ZInfo(ctx, "GetGroupMemberIDs end") log.ZInfo(ctx, "GetGroupMemberIDs end")
-				if err := och.conversationRpcClient.GroupChatFirstCreateConversation(ctx,
-					msg.GroupID, userIDs); err != nil {
+				if err := och.conversationClient.CreateGroupChatConversations(ctx, msg.GroupID, userIDs); err != nil {
log.ZWarn(ctx, "single chat first create conversation error", err, log.ZWarn(ctx, "single chat first create conversation error", err,
"conversationID", conversationID) "conversationID", conversationID)
} }
} }
case constant.SingleChatType, constant.NotificationChatType: case constant.SingleChatType, constant.NotificationChatType:
-		if err := och.conversationRpcClient.SingleChatFirstCreateConversation(ctx, msg.RecvID,
-			msg.SendID, conversationID, msg.SessionType); err != nil {
+		req := &pbconv.CreateSingleChatConversationsReq{
+			RecvID:           msg.RecvID,
+			SendID:           msg.SendID,
+			ConversationID:   conversationID,
+			ConversationType: msg.SessionType,
+		}
+		if err := och.conversationClient.CreateSingleChatConversations(ctx, req); err != nil {
log.ZWarn(ctx, "single chat or notification first create conversation error", err, log.ZWarn(ctx, "single chat or notification first create conversation error", err,
"conversationID", conversationID, "sessionType", msg.SessionType) "conversationID", conversationID, "sessionType", msg.SessionType)
} }
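The conversation service now takes protobuf requests directly instead of positional helper arguments. A hedged example of the single-chat request (the IDs below are made up for illustration; the handler above derives the real conversation ID via msgprocessor):

req := &pbconv.CreateSingleChatConversationsReq{
	RecvID:           "user_b",           // hypothetical IDs, illustration only
	SendID:           "user_a",
	ConversationID:   "si_user_a_user_b", // real code: msgprocessor.GetChatConversationIDByMsg(msg)
	ConversationType: constant.SingleChatType,
}
// err := och.conversationClient.CreateSingleChatConversations(ctx, req)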
@@ -349,7 +362,7 @@ func (och *OnlineHistoryRedisConsumerHandler) handleNotification(ctx context.Con
func (och *OnlineHistoryRedisConsumerHandler) HandleUserHasReadSeqMessages(ctx context.Context) { func (och *OnlineHistoryRedisConsumerHandler) HandleUserHasReadSeqMessages(ctx context.Context) {
defer func() { defer func() {
if r := recover(); r != nil { if r := recover(); r != nil {
-			mw.PanicStackToLog(ctx, r)
+			log.ZPanic(ctx, "HandleUserHasReadSeqMessages Panic", errs.ErrPanic(r))
} }
}() }()
@@ -77,27 +77,13 @@ func (mc *OnlineHistoryMongoConsumerHandler) handleChatWs2Mongo(ctx context.Cont
for _, msg := range msgFromMQ.MsgData { for _, msg := range msgFromMQ.MsgData {
seqs = append(seqs, msg.Seq) seqs = append(seqs, msg.Seq)
} }
err = mc.msgTransferDatabase.DeleteMessagesFromCache(ctx, msgFromMQ.ConversationID, seqs)
if err != nil {
log.ZError(
ctx,
"remove cache msg from redis err",
err,
"msg",
msgFromMQ.MsgData,
"conversationID",
msgFromMQ.ConversationID,
)
}
} }
func (*OnlineHistoryMongoConsumerHandler) Setup(_ sarama.ConsumerGroupSession) error { return nil } func (*OnlineHistoryMongoConsumerHandler) Setup(_ sarama.ConsumerGroupSession) error { return nil }
func (*OnlineHistoryMongoConsumerHandler) Cleanup(_ sarama.ConsumerGroupSession) error { return nil } func (*OnlineHistoryMongoConsumerHandler) Cleanup(_ sarama.ConsumerGroupSession) error { return nil }
-func (mc *OnlineHistoryMongoConsumerHandler) ConsumeClaim(
-	sess sarama.ConsumerGroupSession,
-	claim sarama.ConsumerGroupClaim,
-) error { // an instance in the consumer group
+func (mc *OnlineHistoryMongoConsumerHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error { // an instance in the consumer group
log.ZDebug(context.Background(), "online new session msg come", "highWaterMarkOffset", log.ZDebug(context.Background(), "online new session msg come", "highWaterMarkOffset",
claim.HighWaterMarkOffset(), "topic", claim.Topic(), "partition", claim.Partition()) claim.HighWaterMarkOffset(), "topic", claim.Topic(), "partition", claim.Partition())
for msg := range claim.Messages() { for msg := range claim.Messages() {
+5 -2
@@ -18,6 +18,7 @@ import (
"context" "context"
"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush/options" "github.com/openimsdk/open-im-server/v3/internal/push/offlinepush/options"
"github.com/openimsdk/tools/log" "github.com/openimsdk/tools/log"
"sync/atomic"
) )
func NewClient() *Dummy { func NewClient() *Dummy {
@@ -25,10 +26,12 @@ func NewClient() *Dummy {
} }
type Dummy struct { type Dummy struct {
v atomic.Bool
} }
func (d *Dummy) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error { func (d *Dummy) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error {
log.ZDebug(ctx, "dummy push") if d.v.CompareAndSwap(false, true) {
log.ZWarn(ctx, "Dummy push", nil, "ps", "The offline push is not configured. To configure it, please go to config/openim-push.yml.") log.ZWarn(ctx, "dummy push", nil, "ps", "the offline push is not configured. to configure it, please go to config/openim-push.yml")
}
return nil return nil
} }
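The Dummy pusher now uses atomic.Bool.CompareAndSwap so the "offline push not configured" warning is logged once per process instead of once per message. The same warn-once idiom in isolation, standard library only:

package main

import (
	"fmt"
	"sync/atomic"
)

type warnOnce struct{ warned atomic.Bool }

// warn prints only on the first call; CompareAndSwap(false, true) succeeds
// for exactly one caller, so this is safe under concurrent pushes.
func (w *warnOnce) warn(msg string) {
	if w.warned.CompareAndSwap(false, true) {
		fmt.Println("WARN:", msg)
	}
}

func main() {
	var w warnOnce
	for i := 0; i < 3; i++ {
		w.warn("offline push is not configured") // printed once
	}
}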
+5 -3
@@ -16,12 +16,14 @@ package fcm
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush/options"
"github.com/openimsdk/tools/utils/httputil"
"path/filepath" "path/filepath"
"strings" "strings"
"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush/options"
"github.com/openimsdk/tools/utils/httputil"
firebase "firebase.google.com/go/v4" firebase "firebase.google.com/go/v4"
"firebase.google.com/go/v4/messaging" "firebase.google.com/go/v4/messaging"
"github.com/openimsdk/open-im-server/v3/pkg/common/config" "github.com/openimsdk/open-im-server/v3/pkg/common/config"
@@ -133,7 +135,7 @@ func (f *Fcm) Push(ctx context.Context, userIDs []string, title, content string,
unreadCountSum, err := f.cache.GetUserBadgeUnreadCountSum(ctx, userID) unreadCountSum, err := f.cache.GetUserBadgeUnreadCountSum(ctx, userID)
if err == nil && unreadCountSum != 0 { if err == nil && unreadCountSum != 0 {
apns.Payload.Aps.Badge = &unreadCountSum apns.Payload.Aps.Badge = &unreadCountSum
-	} else if err == redis.Nil || unreadCountSum == 0 {
+	} else if errors.Is(err, redis.Nil) || unreadCountSum == 0 {
zero := 1 zero := 1
apns.Payload.Aps.Badge = &zero apns.Payload.Aps.Badge = &zero
} else { } else {
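Replacing `err == redis.Nil` with `errors.Is(err, redis.Nil)` matters once errors get wrapped, which this codebase does routinely via errs.Wrap and friends: plain equality sees only the outermost wrapper, while errors.Is walks the chain. A minimal standard-library demonstration (errNil stands in for redis.Nil):

package main

import (
	"errors"
	"fmt"
)

var errNil = errors.New("redis: nil") // stand-in for redis.Nil

func main() {
	wrapped := fmt.Errorf("get badge: %w", errNil)
	fmt.Println(wrapped == errNil)          // false: equality sees the wrapper
	fmt.Println(errors.Is(wrapped, errNil)) // true: Is unwraps the chain
}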
+36 -2
@@ -18,6 +18,16 @@ import (
"fmt" "fmt"
"github.com/openimsdk/open-im-server/v3/pkg/common/config" "github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/tools/utils/datautil"
)
var (
incOne = datautil.ToPtr("+1")
addNum = "1"
defaultStrategy = strategy{
Default: 1,
}
msgCategory = "CATEGORY_MESSAGE"
) )
type Resp struct { type Resp struct {
@@ -58,7 +68,24 @@ type TaskResp struct {
} }
type Settings struct { type Settings struct {
TTL *int64 `json:"ttl"` TTL *int64 `json:"ttl"`
Strategy strategy `json:"strategy"`
}
type strategy struct {
Default int64 `json:"default"`
//IOS int64 `json:"ios"`
//St int64 `json:"st"`
//Hw int64 `json:"hw"`
//Ho int64 `json:"ho"`
//XM int64 `json:"xm"`
//XMG int64 `json:"xmg"`
//VV int64 `json:"vv"`
//Op int64 `json:"op"`
//OpG int64 `json:"opg"`
//MZ int64 `json:"mz"`
//HosHw int64 `json:"hoshw"`
//WX int64 `json:"wx"`
} }
type Audience struct { type Audience struct {
@@ -112,6 +139,8 @@ type Notification struct {
ChannelID string `json:"channelID"` ChannelID string `json:"channelID"`
ChannelName string `json:"ChannelName"` ChannelName string `json:"ChannelName"`
ClickType string `json:"click_type"` ClickType string `json:"click_type"`
BadgeAddNum string `json:"badge_add_num"`
Category string `json:"category"`
} }
type Options struct { type Options struct {
@@ -120,6 +149,7 @@ type Options struct {
ChannelID string `json:"/message/android/notification/channel_id"` ChannelID string `json:"/message/android/notification/channel_id"`
Sound string `json:"/message/android/notification/sound"` Sound string `json:"/message/android/notification/sound"`
Importance string `json:"/message/android/notification/importance"` Importance string `json:"/message/android/notification/importance"`
Category string `json:"/message/android/category"`
} `json:"HW"` } `json:"HW"`
XM struct { XM struct {
ChannelID string `json:"/extra.channel_id"` ChannelID string `json:"/extra.channel_id"`
@@ -140,6 +170,8 @@ func newPushReq(pushConf *config.Push, title, content string) PushReq {
ClickType: "startapp", ClickType: "startapp",
ChannelID: pushConf.GeTui.ChannelID, ChannelID: pushConf.GeTui.ChannelID,
ChannelName: pushConf.GeTui.ChannelName, ChannelName: pushConf.GeTui.ChannelName,
BadgeAddNum: addNum,
Category: msgCategory,
}}} }}}
return pushReq return pushReq
} }
@@ -156,6 +188,7 @@ func (pushReq *PushReq) setPushChannel(title string, body string) {
notify := "notify" notify := "notify"
pushReq.PushChannel.Ios.NotificationType = &notify pushReq.PushChannel.Ios.NotificationType = &notify
pushReq.PushChannel.Ios.Aps.Sound = "default" pushReq.PushChannel.Ios.Aps.Sound = "default"
pushReq.PushChannel.Ios.AutoBadge = incOne
pushReq.PushChannel.Ios.Aps.Alert = Alert{ pushReq.PushChannel.Ios.Aps.Alert = Alert{
Title: title, Title: title,
Body: body, Body: body,
@@ -172,7 +205,8 @@ func (pushReq *PushReq) setPushChannel(title string, body string) {
ChannelID string `json:"/message/android/notification/channel_id"` ChannelID string `json:"/message/android/notification/channel_id"`
Sound string `json:"/message/android/notification/sound"` Sound string `json:"/message/android/notification/sound"`
Importance string `json:"/message/android/notification/importance"` Importance string `json:"/message/android/notification/importance"`
}{ChannelID: "RingRing4", Sound: "/raw/ring001", Importance: "NORMAL"}, Category string `json:"/message/android/category"`
}{ChannelID: "RingRing4", Sound: "/raw/ring001", Importance: "NORMAL", Category: "IM"},
XM: struct { XM: struct {
ChannelID string `json:"/extra.channel_id"` ChannelID string `json:"/extra.channel_id"`
}{ChannelID: "high_system"}, }{ChannelID: "high_system"},
+5 -3
@@ -18,6 +18,7 @@ import (
"context" "context"
"crypto/sha256" "crypto/sha256"
"encoding/hex" "encoding/hex"
"errors"
"strconv" "strconv"
"sync" "sync"
"time" "time"
@@ -70,7 +71,7 @@ func NewClient(pushConf *config.Push, cache cache.ThirdCache) *Client {
func (g *Client) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error { func (g *Client) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error {
token, err := g.cache.GetGetuiToken(ctx) token, err := g.cache.GetGetuiToken(ctx)
if err != nil { if err != nil {
-		if errs.Unwrap(err) == redis.Nil {
+		if errors.Is(err, redis.Nil) {
log.ZDebug(ctx, "getui token not exist in redis") log.ZDebug(ctx, "getui token not exist in redis")
token, err = g.getTokenAndSave2Redis(ctx) token, err = g.getTokenAndSave2Redis(ctx)
if err != nil { if err != nil {
@@ -144,7 +145,7 @@ func (g *Client) Auth(ctx context.Context, timeStamp int64) (token string, expir
func (g *Client) GetTaskID(ctx context.Context, token string, pushReq PushReq) (string, error) { func (g *Client) GetTaskID(ctx context.Context, token string, pushReq PushReq) (string, error) {
respTask := TaskResp{} respTask := TaskResp{}
ttl := int64(1000 * 60 * 5) ttl := int64(1000 * 60 * 5)
-	pushReq.Settings = &Settings{TTL: &ttl}
+	pushReq.Settings = &Settings{TTL: &ttl, Strategy: defaultStrategy}
err := g.request(ctx, taskURL, pushReq, token, &respTask) err := g.request(ctx, taskURL, pushReq, token, &respTask)
if err != nil { if err != nil {
return "", errs.Wrap(err) return "", errs.Wrap(err)
@@ -188,6 +189,7 @@ func (g *Client) postReturn(
if err != nil { if err != nil {
return err return err
} }
log.ZDebug(ctx, "postReturn", "url", url, "header", header, "input", input, "timeout", timeout, "output", output)
return output.parseError() return output.parseError()
} }
@@ -204,7 +206,7 @@ func (g *Client) getTokenAndSave2Redis(ctx context.Context) (token string, err e
} }
func (g *Client) GetTaskIDAndSave2Redis(ctx context.Context, token string, pushReq PushReq) (taskID string, err error) { func (g *Client) GetTaskIDAndSave2Redis(ctx context.Context, token string, pushReq PushReq) (taskID string, err error) {
-	pushReq.Settings = &Settings{TTL: &g.taskIDTTL}
+	pushReq.Settings = &Settings{TTL: &g.taskIDTTL, Strategy: defaultStrategy}
taskID, err = g.GetTaskID(ctx, token, pushReq) taskID, err = g.GetTaskID(ctx, token, pushReq)
if err != nil { if err != nil {
return return
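The getui Push path above is a cache-aside token: read the token from Redis, and only on redis.Nil authenticate against the provider and save the fresh token back. A compact sketch of that flow under stated assumptions (tokenCache and mint are hypothetical stand-ins for cache.ThirdCache and the getui Auth call; uses the standard errors package):

var errCacheMiss = errors.New("cache miss") // plays the role of redis.Nil

type tokenCache interface {
	GetToken() (string, error)
	SetToken(token string) error
}

func getOrMintToken(c tokenCache, mint func() (string, error)) (string, error) {
	tok, err := c.GetToken()
	if err == nil {
		return tok, nil // cache hit
	}
	if !errors.Is(err, errCacheMiss) {
		return "", err // real cache failure: do not mint a new token
	}
	tok, err = mint() // miss: authenticate against the push provider
	if err != nil {
		return "", err
	}
	return tok, c.SetToken(tok) // save for subsequent pushes
}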
+3 -5
@@ -14,6 +14,7 @@ import (
) )
type pushServer struct { type pushServer struct {
pbpush.UnimplementedPushMsgServiceServer
database controller.PushDatabase database controller.PushDatabase
disCov discovery.SvcDiscoveryRegistry disCov discovery.SvcDiscoveryRegistry
offlinePusher offlinepush.OfflinePusher offlinePusher offlinepush.OfflinePusher
@@ -31,11 +32,8 @@ type Config struct {
LocalCacheConfig config.LocalCache LocalCacheConfig config.LocalCache
Discovery config.Discovery Discovery config.Discovery
FcmConfigPath string FcmConfigPath string
-}
-func (p pushServer) PushMsg(ctx context.Context, req *pbpush.PushMsgReq) (*pbpush.PushMsgResp, error) {
-	//todo reserved Interface
-	return nil, nil
-}
+	runTimeEnv string
+}
func (p pushServer) DelUserPushToken(ctx context.Context, func (p pushServer) DelUserPushToken(ctx context.Context,
@@ -59,7 +57,7 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
database := controller.NewPushDatabase(cacheModel, &config.KafkaConfig) database := controller.NewPushDatabase(cacheModel, &config.KafkaConfig)
-	consumer, err := NewConsumerHandler(config, database, offlinePusher, rdb, client)
+	consumer, err := NewConsumerHandler(ctx, config, database, offlinePusher, rdb, client)
if err != nil { if err != nil {
return err return err
} }
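Embedding pbpush.UnimplementedPushMsgServiceServer (top of this file's hunk) is grpc-go's forward-compatibility device: the generated struct supplies a default method returning codes.Unimplemented for every RPC, so the hand-written PushMsg placeholder could simply be deleted. Roughly the generated shape, shown for illustration rather than copied from the repo:

// Illustrative shape of what protoc-gen-go-grpc generates:
type UnimplementedPushMsgServiceServer struct{}

func (UnimplementedPushMsgServiceServer) PushMsg(context.Context, *PushMsgReq) (*PushMsgResp, error) {
	return nil, status.Errorf(codes.Unimplemented, "method PushMsg not implemented")
}

// Embedding it keeps pushServer compiling even as new RPCs are added to the proto:
type pushServer struct {
	UnimplementedPushMsgServiceServer
	// ... real dependencies ...
}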
+33 -22
@@ -3,7 +3,7 @@ package push
import ( import (
"context" "context"
"encoding/json" "encoding/json"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"math/rand" "math/rand"
"strconv" "strconv"
"time" "time"
@@ -16,7 +16,6 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/webhook" "github.com/openimsdk/open-im-server/v3/pkg/common/webhook"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor" "github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache" "github.com/openimsdk/open-im-server/v3/pkg/rpccache"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/util/conversationutil" "github.com/openimsdk/open-im-server/v3/pkg/util/conversationutil"
"github.com/openimsdk/protocol/constant" "github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway" "github.com/openimsdk/protocol/msggateway"
@@ -41,14 +40,15 @@ type ConsumerHandler struct {
onlineCache *rpccache.OnlineCache onlineCache *rpccache.OnlineCache
groupLocalCache *rpccache.GroupLocalCache groupLocalCache *rpccache.GroupLocalCache
conversationLocalCache *rpccache.ConversationLocalCache conversationLocalCache *rpccache.ConversationLocalCache
msgRpcClient rpcclient.MessageRpcClient
conversationRpcClient rpcclient.ConversationRpcClient
groupRpcClient rpcclient.GroupRpcClient
webhookClient *webhook.Client webhookClient *webhook.Client
config *Config config *Config
userClient *rpcli.UserClient
groupClient *rpcli.GroupClient
msgClient *rpcli.MsgClient
conversationClient *rpcli.ConversationClient
} }
-func NewConsumerHandler(config *Config, database controller.PushDatabase, offlinePusher offlinepush.OfflinePusher, rdb redis.UniversalClient,
+func NewConsumerHandler(ctx context.Context, config *Config, database controller.PushDatabase, offlinePusher offlinepush.OfflinePusher, rdb redis.UniversalClient,
 	client discovery.SvcDiscoveryRegistry) (*ConsumerHandler, error) {
var consumerHandler ConsumerHandler var consumerHandler ConsumerHandler
var err error var err error
@@ -57,20 +57,35 @@ func NewConsumerHandler(config *Config, database controller.PushDatabase, offlin
if err != nil { if err != nil {
return nil, err return nil, err
} }
-	userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
+	userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
+	if err != nil {
return nil, err
}
groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
if err != nil {
return nil, err
}
msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return nil, err
}
conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
if err != nil {
return nil, err
}
consumerHandler.userClient = rpcli.NewUserClient(userConn)
consumerHandler.groupClient = rpcli.NewGroupClient(groupConn)
consumerHandler.msgClient = rpcli.NewMsgClient(msgConn)
consumerHandler.conversationClient = rpcli.NewConversationClient(conversationConn)
consumerHandler.offlinePusher = offlinePusher consumerHandler.offlinePusher = offlinePusher
consumerHandler.onlinePusher = NewOnlinePusher(client, config) consumerHandler.onlinePusher = NewOnlinePusher(client, config)
-	consumerHandler.groupRpcClient = rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
-	consumerHandler.groupLocalCache = rpccache.NewGroupLocalCache(consumerHandler.groupRpcClient, &config.LocalCacheConfig, rdb)
-	consumerHandler.msgRpcClient = rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
-	consumerHandler.conversationRpcClient = rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
-	consumerHandler.conversationLocalCache = rpccache.NewConversationLocalCache(consumerHandler.conversationRpcClient, &config.LocalCacheConfig, rdb)
+	consumerHandler.groupLocalCache = rpccache.NewGroupLocalCache(consumerHandler.groupClient, &config.LocalCacheConfig, rdb)
+	consumerHandler.conversationLocalCache = rpccache.NewConversationLocalCache(consumerHandler.conversationClient, &config.LocalCacheConfig, rdb)
consumerHandler.webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL) consumerHandler.webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
consumerHandler.config = config consumerHandler.config = config
consumerHandler.pushDatabase = database consumerHandler.pushDatabase = database
-	consumerHandler.onlineCache, err = rpccache.NewOnlineCache(userRpcClient, consumerHandler.groupLocalCache, rdb, config.RpcConfig.FullUserCache, nil)
+	consumerHandler.onlineCache, err = rpccache.NewOnlineCache(consumerHandler.userClient, consumerHandler.groupLocalCache, rdb, config.RpcConfig.FullUserCache, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -327,7 +342,7 @@ func (c *ConsumerHandler) groupMessagesHandler(ctx context.Context, groupID stri
ctx = mcontext.WithOpUserIDContext(ctx, c.config.Share.IMAdminUserID[0]) ctx = mcontext.WithOpUserIDContext(ctx, c.config.Share.IMAdminUserID[0])
} }
defer func(groupID string) { defer func(groupID string) {
-		if err = c.groupRpcClient.DismissGroup(ctx, groupID); err != nil {
+		if err := c.groupClient.DismissGroup(ctx, groupID, true); err != nil {
log.ZError(ctx, "DismissGroup Notification clear members", err, "groupID", groupID) log.ZError(ctx, "DismissGroup Notification clear members", err, "groupID", groupID)
} }
}(groupID) }(groupID)
@@ -353,10 +368,7 @@ func (c *ConsumerHandler) offlinePushMsg(ctx context.Context, msg *sdkws.MsgData
func (c *ConsumerHandler) filterGroupMessageOfflinePush(ctx context.Context, groupID string, msg *sdkws.MsgData, func (c *ConsumerHandler) filterGroupMessageOfflinePush(ctx context.Context, groupID string, msg *sdkws.MsgData,
offlinePushUserIDs []string) (userIDs []string, err error) { offlinePushUserIDs []string) (userIDs []string, err error) {
-	//todo local cache Obtain the difference set through local comparison.
-	needOfflinePushUserIDs, err := c.conversationRpcClient.GetConversationOfflinePushUserIDs(
-		ctx, conversationutil.GenGroupConversationID(groupID), offlinePushUserIDs)
+	needOfflinePushUserIDs, err := c.conversationClient.GetConversationOfflinePushUserIDs(ctx, conversationutil.GenGroupConversationID(groupID), offlinePushUserIDs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -410,11 +422,11 @@ func (c *ConsumerHandler) getOfflinePushInfos(msg *sdkws.MsgData) (title, conten
func (c *ConsumerHandler) DeleteMemberAndSetConversationSeq(ctx context.Context, groupID string, userIDs []string) error { func (c *ConsumerHandler) DeleteMemberAndSetConversationSeq(ctx context.Context, groupID string, userIDs []string) error {
conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID) conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
-	maxSeq, err := c.msgRpcClient.GetConversationMaxSeq(ctx, conversationID)
+	maxSeq, err := c.msgClient.GetConversationMaxSeq(ctx, conversationID)
if err != nil { if err != nil {
return err return err
} }
-	return c.conversationRpcClient.SetConversationMaxSeq(ctx, userIDs, conversationID, maxSeq)
+	return c.conversationClient.SetConversationMaxSeq(ctx, conversationID, userIDs, maxSeq)
} }
func unmarshalNotificationElem(bytes []byte, t any) error { func unmarshalNotificationElem(bytes []byte, t any) error {
@@ -422,6 +434,5 @@ func unmarshalNotificationElem(bytes []byte, t any) error {
if err := json.Unmarshal(bytes, &notification); err != nil { if err := json.Unmarshal(bytes, &notification); err != nil {
return err return err
} }
return json.Unmarshal([]byte(notification.Detail), t) return json.Unmarshal([]byte(notification.Detail), t)
} }
+20 -14
@@ -18,6 +18,8 @@ import (
"context" "context"
"errors" "errors"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/common/config" "github.com/openimsdk/open-im-server/v3/pkg/common/config"
redis2 "github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis" redis2 "github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
"github.com/openimsdk/tools/db/redisutil" "github.com/openimsdk/tools/db/redisutil"
@@ -28,7 +30,6 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics" "github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs" "github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller" "github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
pbauth "github.com/openimsdk/protocol/auth" pbauth "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/protocol/constant" "github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway" "github.com/openimsdk/protocol/msggateway"
@@ -40,10 +41,11 @@ import (
) )
type authServer struct { type authServer struct {
pbauth.UnimplementedAuthServer
authDatabase controller.AuthDatabase authDatabase controller.AuthDatabase
userRpcClient *rpcclient.UserRpcClient
RegisterCenter discovery.SvcDiscoveryRegistry RegisterCenter discovery.SvcDiscoveryRegistry
config *Config config *Config
userClient *rpcli.UserClient
} }
type Config struct { type Config struct {
@@ -58,9 +60,11 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil { if err != nil {
return err return err
} }
-	userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
+	userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return err
}
pbauth.RegisterAuthServer(server, &authServer{ pbauth.RegisterAuthServer(server, &authServer{
userRpcClient: &userRpcClient,
RegisterCenter: client, RegisterCenter: client,
authDatabase: controller.NewAuthDatabase( authDatabase: controller.NewAuthDatabase(
redis2.NewTokenCacheModel(rdb, config.RpcConfig.TokenPolicy.Expire), redis2.NewTokenCacheModel(rdb, config.RpcConfig.TokenPolicy.Expire),
@@ -69,7 +73,8 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
config.Share.MultiLogin, config.Share.MultiLogin,
config.Share.IMAdminUserID, config.Share.IMAdminUserID,
), ),
config: config, config: config,
userClient: rpcli.NewUserClient(userConn),
}) })
return nil return nil
} }
@@ -85,7 +90,7 @@ func (s *authServer) GetAdminToken(ctx context.Context, req *pbauth.GetAdminToke
} }
-	if _, err := s.userRpcClient.GetUserInfo(ctx, req.UserID); err != nil {
+	if err := s.userClient.CheckUser(ctx, []string{req.UserID}); err != nil {
return nil, err return nil, err
} }
@@ -114,9 +119,13 @@ func (s *authServer) GetUserToken(ctx context.Context, req *pbauth.GetUserTokenR
if authverify.IsManagerUserID(req.UserID, s.config.Share.IMAdminUserID) { if authverify.IsManagerUserID(req.UserID, s.config.Share.IMAdminUserID) {
return nil, errs.ErrNoPermission.WrapMsg("don't get Admin token") return nil, errs.ErrNoPermission.WrapMsg("don't get Admin token")
} }
-	if _, err := s.userRpcClient.GetUserInfo(ctx, req.UserID); err != nil {
+	user, err := s.userClient.GetUserInfo(ctx, req.UserID)
+	if err != nil {
return nil, err return nil, err
} }
if user.AppMangerLevel >= constant.AppNotificationAdmin {
return nil, errs.ErrArgs.WrapMsg("app account can`t get token")
}
token, err := s.authDatabase.CreateToken(ctx, req.UserID, int(req.PlatformID)) token, err := s.authDatabase.CreateToken(ctx, req.UserID, int(req.PlatformID))
if err != nil { if err != nil {
return nil, err return nil, err
@@ -129,7 +138,7 @@ func (s *authServer) GetUserToken(ctx context.Context, req *pbauth.GetUserTokenR
func (s *authServer) parseToken(ctx context.Context, tokensString string) (claims *tokenverify.Claims, err error) { func (s *authServer) parseToken(ctx context.Context, tokensString string) (claims *tokenverify.Claims, err error) {
claims, err = tokenverify.GetClaimFromToken(tokensString, authverify.Secret(s.config.Share.Secret)) claims, err = tokenverify.GetClaimFromToken(tokensString, authverify.Secret(s.config.Share.Secret))
if err != nil { if err != nil {
-		return nil, errs.Wrap(err)
+		return nil, err
} }
isAdmin := authverify.IsManagerUserID(claims.UserID, s.config.Share.IMAdminUserID) isAdmin := authverify.IsManagerUserID(claims.UserID, s.config.Share.IMAdminUserID)
if isAdmin { if isAdmin {
@@ -155,10 +164,7 @@ func (s *authServer) parseToken(ctx context.Context, tokensString string) (claim
return nil, servererrs.ErrTokenNotExist.Wrap() return nil, servererrs.ErrTokenNotExist.Wrap()
} }
-func (s *authServer) ParseToken(
-	ctx context.Context,
-	req *pbauth.ParseTokenReq,
-) (resp *pbauth.ParseTokenResp, err error) {
+func (s *authServer) ParseToken(ctx context.Context, req *pbauth.ParseTokenReq) (resp *pbauth.ParseTokenResp, err error) {
resp = &pbauth.ParseTokenResp{} resp = &pbauth.ParseTokenResp{}
claims, err := s.parseToken(ctx, req.Token) claims, err := s.parseToken(ctx, req.Token)
if err != nil { if err != nil {
@@ -196,7 +202,7 @@ func (s *authServer) forceKickOff(ctx context.Context, userID string, platformID
} }
m, err := s.authDatabase.GetTokensWithoutError(ctx, userID, int(platformID)) m, err := s.authDatabase.GetTokensWithoutError(ctx, userID, int(platformID))
-	if err != nil && errors.Is(err, redis.Nil) {
+	if err != nil && !errors.Is(err, redis.Nil) {
return err return err
} }
for k := range m { for k := range m {
@@ -214,7 +220,7 @@ func (s *authServer) forceKickOff(ctx context.Context, userID string, platformID
func (s *authServer) InvalidateToken(ctx context.Context, req *pbauth.InvalidateTokenReq) (*pbauth.InvalidateTokenResp, error) { func (s *authServer) InvalidateToken(ctx context.Context, req *pbauth.InvalidateTokenReq) (*pbauth.InvalidateTokenResp, error) {
m, err := s.authDatabase.GetTokensWithoutError(ctx, req.UserID, int(req.PlatformID)) m, err := s.authDatabase.GetTokensWithoutError(ctx, req.UserID, int(req.PlatformID))
-	if err != nil && errors.Is(err, redis.Nil) {
+	if err != nil && !errors.Is(err, redis.Nil) {
return nil, err return nil, err
} }
if m == nil { if m == nil {
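Both hunks above fix the same inverted guard: the old condition returned an error exactly when Redis reported an empty key (redis.Nil) and silently swallowed every genuine failure. The corrected reading, sketched on the forceKickOff variant:

m, err := s.authDatabase.GetTokensWithoutError(ctx, userID, int(platformID))
if err != nil && !errors.Is(err, redis.Nil) {
	return err // a real Redis failure: propagate it
}
// redis.Nil (or an empty map) just means no stored tokens; continue normally.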
+105 -31
@@ -16,6 +16,7 @@ package conversation
import ( import (
"context" "context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"sort" "sort"
"time" "time"
@@ -25,12 +26,12 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model" "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
dbModel "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model" dbModel "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/open-im-server/v3/pkg/localcache" "github.com/openimsdk/open-im-server/v3/pkg/localcache"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/tools/db/redisutil" "github.com/openimsdk/tools/db/redisutil"
"github.com/openimsdk/open-im-server/v3/pkg/common/convert" "github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs" "github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller" "github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant" "github.com/openimsdk/protocol/constant"
pbconversation "github.com/openimsdk/protocol/conversation" pbconversation "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/sdkws" "github.com/openimsdk/protocol/sdkws"
@@ -43,13 +44,15 @@ import (
) )
type conversationServer struct { type conversationServer struct {
-	msgRpcClient                   *rpcclient.MessageRpcClient
-	user                           *rpcclient.UserRpcClient
-	groupRpcClient                 *rpcclient.GroupRpcClient
+	pbconversation.UnimplementedConversationServer
conversationDatabase controller.ConversationDatabase conversationDatabase controller.ConversationDatabase
conversationNotificationSender *ConversationNotificationSender conversationNotificationSender *ConversationNotificationSender
config *Config config *Config
userClient *rpcli.UserClient
msgClient *rpcli.MsgClient
groupClient *rpcli.GroupClient
} }
type Config struct { type Config struct {
@@ -75,17 +78,27 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil { if err != nil {
return err return err
} }
-	groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
-	msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
-	userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
+	userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
+	if err != nil {
+		return err
+	}
groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
if err != nil {
return err
}
msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return err
}
msgClient := rpcli.NewMsgClient(msgConn)
localcache.InitLocalCache(&config.LocalCacheConfig) localcache.InitLocalCache(&config.LocalCacheConfig)
pbconversation.RegisterConversationServer(server, &conversationServer{ pbconversation.RegisterConversationServer(server, &conversationServer{
-		msgRpcClient:                   &msgRpcClient,
-		user:                           &userRpcClient,
-		conversationNotificationSender: NewConversationNotificationSender(&config.NotificationConfig, &msgRpcClient),
-		groupRpcClient:                 &groupRpcClient,
+		conversationNotificationSender: NewConversationNotificationSender(&config.NotificationConfig, msgClient),
conversationDatabase: controller.NewConversationDatabase(conversationDB, conversationDatabase: controller.NewConversationDatabase(conversationDB,
redis.NewConversationRedis(rdb, &config.LocalCacheConfig, redis.GetRocksCacheOptions(), conversationDB), mgocli.GetTx()), redis.NewConversationRedis(rdb, &config.LocalCacheConfig, redis.GetRocksCacheOptions(), conversationDB), mgocli.GetTx()),
userClient: rpcli.NewUserClient(userConn),
groupClient: rpcli.NewGroupClient(groupConn),
msgClient: msgClient,
}) })
return nil return nil
} }
@@ -122,13 +135,12 @@ func (c *conversationServer) GetSortedConversationList(ctx context.Context, req
if len(conversations) == 0 { if len(conversations) == 0 {
return nil, errs.ErrRecordNotFound.Wrap() return nil, errs.ErrRecordNotFound.Wrap()
} }
-	maxSeqs, err := c.msgRpcClient.GetMaxSeqs(ctx, conversationIDs)
+	maxSeqs, err := c.msgClient.GetMaxSeqs(ctx, conversationIDs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
-	chatLogs, err := c.msgRpcClient.GetMsgByConversationIDs(ctx, conversationIDs, maxSeqs)
+	chatLogs, err := c.msgClient.GetMsgByConversationIDs(ctx, conversationIDs, maxSeqs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -138,7 +150,7 @@ func (c *conversationServer) GetSortedConversationList(ctx context.Context, req
return nil, err return nil, err
} }
-	hasReadSeqs, err := c.msgRpcClient.GetHasReadSeqs(ctx, req.UserID, conversationIDs)
+	hasReadSeqs, err := c.msgClient.GetHasReadSeqs(ctx, conversationIDs, req.UserID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -227,7 +239,7 @@ func (c *conversationServer) SetConversations(ctx context.Context, req *pbconver
} }
if req.Conversation.ConversationType == constant.WriteGroupChatType { if req.Conversation.ConversationType == constant.WriteGroupChatType {
-		groupInfo, err := c.groupRpcClient.GetGroupInfo(ctx, req.Conversation.GroupID)
+		groupInfo, err := c.groupClient.GetGroupInfo(ctx, req.Conversation.GroupID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -351,7 +363,15 @@ func (c *conversationServer) SetConversations(ctx context.Context, req *pbconver
needUpdateUsersList = append(needUpdateUsersList, userID) needUpdateUsersList = append(needUpdateUsersList, userID)
} }
} }
if len(m) != 0 && len(needUpdateUsersList) != 0 {
if err := c.conversationDatabase.SetUsersConversationFieldTx(ctx, needUpdateUsersList, &conversation, m); err != nil {
return nil, err
}
for _, v := range needUpdateUsersList {
c.conversationNotificationSender.ConversationChangeNotification(ctx, v, []string{req.Conversation.ConversationID})
}
}
if req.Conversation.IsPrivateChat != nil && req.Conversation.ConversationType != constant.ReadGroupChatType { if req.Conversation.IsPrivateChat != nil && req.Conversation.ConversationType != constant.ReadGroupChatType {
var conversations []*dbModel.Conversation var conversations []*dbModel.Conversation
for _, ownerUserID := range req.UserIDs { for _, ownerUserID := range req.UserIDs {
@@ -369,16 +389,6 @@ func (c *conversationServer) SetConversations(ctx context.Context, req *pbconver
c.conversationNotificationSender.ConversationSetPrivateNotification(ctx, userID, req.Conversation.UserID, c.conversationNotificationSender.ConversationSetPrivateNotification(ctx, userID, req.Conversation.UserID,
req.Conversation.IsPrivateChat.Value, req.Conversation.ConversationID) req.Conversation.IsPrivateChat.Value, req.Conversation.ConversationID)
} }
} else {
if len(m) != 0 && len(needUpdateUsersList) != 0 {
if err := c.conversationDatabase.SetUsersConversationFieldTx(ctx, needUpdateUsersList, &conversation, m); err != nil {
return nil, err
}
for _, v := range needUpdateUsersList {
c.conversationNotificationSender.ConversationChangeNotification(ctx, v, []string{req.Conversation.ConversationID})
}
}
} }
return &pbconversation.SetConversationsResp{}, nil return &pbconversation.SetConversationsResp{}, nil
@@ -432,22 +442,38 @@ func (c *conversationServer) CreateGroupChatConversations(ctx context.Context, r
if err != nil { if err != nil {
return nil, err return nil, err
} }
conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, req.GroupID)
if err := c.msgClient.SetUserConversationMaxSeq(ctx, conversationID, req.UserIDs, 0); err != nil {
return nil, err
}
return &pbconversation.CreateGroupChatConversationsResp{}, nil return &pbconversation.CreateGroupChatConversationsResp{}, nil
} }
func (c *conversationServer) SetConversationMaxSeq(ctx context.Context, req *pbconversation.SetConversationMaxSeqReq) (*pbconversation.SetConversationMaxSeqResp, error) { func (c *conversationServer) SetConversationMaxSeq(ctx context.Context, req *pbconversation.SetConversationMaxSeqReq) (*pbconversation.SetConversationMaxSeqResp, error) {
if err := c.msgClient.SetUserConversationMaxSeq(ctx, req.ConversationID, req.OwnerUserID, req.MaxSeq); err != nil {
return nil, err
}
if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID, if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID,
map[string]any{"max_seq": req.MaxSeq}); err != nil { map[string]any{"max_seq": req.MaxSeq}); err != nil {
return nil, err return nil, err
} }
for _, userID := range req.OwnerUserID {
c.conversationNotificationSender.ConversationChangeNotification(ctx, userID, []string{req.ConversationID})
}
return &pbconversation.SetConversationMaxSeqResp{}, nil return &pbconversation.SetConversationMaxSeqResp{}, nil
} }
func (c *conversationServer) SetConversationMinSeq(ctx context.Context, req *pbconversation.SetConversationMinSeqReq) (*pbconversation.SetConversationMinSeqResp, error) { func (c *conversationServer) SetConversationMinSeq(ctx context.Context, req *pbconversation.SetConversationMinSeqReq) (*pbconversation.SetConversationMinSeqResp, error) {
if err := c.msgClient.SetUserConversationMin(ctx, req.ConversationID, req.OwnerUserID, req.MinSeq); err != nil {
return nil, err
}
if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID, if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID,
map[string]any{"min_seq": req.MinSeq}); err != nil { map[string]any{"min_seq": req.MinSeq}); err != nil {
return nil, err return nil, err
} }
for _, userID := range req.OwnerUserID {
c.conversationNotificationSender.ConversationChangeNotification(ctx, userID, []string{req.ConversationID})
}
return &pbconversation.SetConversationMinSeqResp{}, nil return &pbconversation.SetConversationMinSeqResp{}, nil
} }
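Both setters above now follow the same write path: first update the seq state held by the msg service (SetUserConversationMaxSeq / SetUserConversationMin), then persist max_seq or min_seq on the conversation documents, and finally emit a ConversationChangeNotification per owner so connected clients re-sync that conversation.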
@@ -548,7 +574,7 @@ func (c *conversationServer) getConversationInfo(
} }
} }
if len(sendIDs) != 0 { if len(sendIDs) != 0 {
-		sendInfos, err := c.user.GetUsersInfo(ctx, sendIDs)
+		sendInfos, err := c.userClient.GetUsersInfo(ctx, sendIDs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -557,7 +583,7 @@ func (c *conversationServer) getConversationInfo(
} }
} }
if len(groupIDs) != 0 { if len(groupIDs) != 0 {
-		groupInfos, err := c.groupRpcClient.GetGroupInfos(ctx, groupIDs, false)
+		groupInfos, err := c.groupClient.GetGroupsInfo(ctx, groupIDs)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -669,7 +695,7 @@ func (c *conversationServer) GetOwnerConversation(ctx context.Context, req *pbco
}, nil }, nil
} }
-func (c *conversationServer) GetConversationsNeedDestructMsgs(ctx context.Context, _ *pbconversation.GetConversationsNeedDestructMsgsReq) (*pbconversation.GetConversationsNeedDestructMsgsResp, error) {
+func (c *conversationServer) GetConversationsNeedClearMsg(ctx context.Context, _ *pbconversation.GetConversationsNeedClearMsgReq) (*pbconversation.GetConversationsNeedClearMsgResp, error) {
num, err := c.conversationDatabase.GetAllConversationIDsNumber(ctx) num, err := c.conversationDatabase.GetAllConversationIDsNumber(ctx)
if err != nil { if err != nil {
log.ZError(ctx, "GetAllConversationIDsNumber failed", err) log.ZError(ctx, "GetAllConversationIDsNumber failed", err)
@@ -693,7 +719,7 @@ func (c *conversationServer) GetConversationsNeedDestructMsgs(ctx context.Contex
conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, pagination) conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, pagination)
if err != nil { if err != nil {
// log.ZError(ctx, "PageConversationIDs failed", err, "pageNumber", pageNumber) log.ZError(ctx, "PageConversationIDs failed", err, "pageNumber", pageNumber)
continue continue
} }
@@ -716,7 +742,7 @@ func (c *conversationServer) GetConversationsNeedDestructMsgs(ctx context.Contex
} }
} }
-	return &pbconversation.GetConversationsNeedDestructMsgsResp{Conversations: convert.ConversationsDB2Pb(temp)}, nil
+	return &pbconversation.GetConversationsNeedClearMsgResp{Conversations: convert.ConversationsDB2Pb(temp)}, nil
} }
func (c *conversationServer) GetNotNotifyConversationIDs(ctx context.Context, req *pbconversation.GetNotNotifyConversationIDsReq) (*pbconversation.GetNotNotifyConversationIDsResp, error) { func (c *conversationServer) GetNotNotifyConversationIDs(ctx context.Context, req *pbconversation.GetNotNotifyConversationIDsReq) (*pbconversation.GetNotNotifyConversationIDsResp, error) {
@@ -734,3 +760,51 @@ func (c *conversationServer) GetPinnedConversationIDs(ctx context.Context, req *
} }
return &pbconversation.GetPinnedConversationIDsResp{ConversationIDs: conversationIDs}, nil return &pbconversation.GetPinnedConversationIDsResp{ConversationIDs: conversationIDs}, nil
} }
func (c *conversationServer) ClearUserConversationMsg(ctx context.Context, req *pbconversation.ClearUserConversationMsgReq) (*pbconversation.ClearUserConversationMsgResp, error) {
conversations, err := c.conversationDatabase.FindRandConversation(ctx, req.Timestamp, int(req.Limit))
if err != nil {
return nil, err
}
latestMsgDestructTime := time.UnixMilli(req.Timestamp)
for i, conversation := range conversations {
if !conversation.IsMsgDestruct || conversation.MsgDestructTime == 0 {
continue
}
seq, err := c.msgClient.GetLastMessageSeqByTime(ctx, conversation.ConversationID, req.Timestamp-conversation.MsgDestructTime)
if err != nil {
return nil, err
}
if seq <= 0 {
log.ZDebug(ctx, "ClearUserConversationMsg GetLastMessageSeqByTime seq <= 0", "index", i, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID, "msgDestructTime", conversation.MsgDestructTime, "seq", seq)
if err := c.setConversationMinSeqAndLatestMsgDestructTime(ctx, conversation.ConversationID, conversation.OwnerUserID, -1, latestMsgDestructTime); err != nil {
return nil, err
}
continue
}
seq++
if err := c.setConversationMinSeqAndLatestMsgDestructTime(ctx, conversation.ConversationID, conversation.OwnerUserID, seq, latestMsgDestructTime); err != nil {
return nil, err
}
log.ZDebug(ctx, "ClearUserConversationMsg set min seq", "index", i, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID, "seq", seq, "msgDestructTime", conversation.MsgDestructTime)
}
return &pbconversation.ClearUserConversationMsgResp{Count: int32(len(conversations))}, nil
}
func (c *conversationServer) setConversationMinSeqAndLatestMsgDestructTime(ctx context.Context, conversationID string, ownerUserID string, minSeq int64, latestMsgDestructTime time.Time) error {
update := map[string]any{
"latest_msg_destruct_time": latestMsgDestructTime,
}
if minSeq >= 0 {
if err := c.msgClient.SetUserConversationMin(ctx, conversationID, []string{ownerUserID}, minSeq); err != nil {
return err
}
update["min_seq"] = minSeq
}
if err := c.conversationDatabase.UpdateUsersConversationField(ctx, []string{ownerUserID}, conversationID, update); err != nil {
return err
}
c.conversationNotificationSender.ConversationChangeNotification(ctx, ownerUserID, []string{conversationID})
return nil
}
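The added ClearUserConversationMsg implements self-destruct by seq arithmetic rather than row-by-row deletion: it asks the msg service for the last seq at or before `now - msgDestructTime`, then advances the owner's min seq to `seq+1`. A minimal, self-contained Go sketch of that cutoff computation (the types, helper names, and sample data below are illustrative stand-ins, not the server's real storage API):

```go
package main

import (
	"fmt"
	"time"
)

// msgMeta is a stand-in for a stored message: its seq and send time.
type msgMeta struct {
	seq    int64
	sentAt time.Time
}

// lastSeqBefore returns the largest seq whose send time is <= cutoff,
// or 0 if no message is old enough (mirrors GetLastMessageSeqByTime).
func lastSeqBefore(msgs []msgMeta, cutoff time.Time) int64 {
	var last int64
	for _, m := range msgs {
		if !m.sentAt.After(cutoff) && m.seq > last {
			last = m.seq
		}
	}
	return last
}

func main() {
	now := time.Now()
	msgs := []msgMeta{
		{seq: 1, sentAt: now.Add(-72 * time.Hour)},
		{seq: 2, sentAt: now.Add(-48 * time.Hour)},
		{seq: 3, sentAt: now.Add(-1 * time.Hour)},
	}
	msgDestructTime := 24 * time.Hour // per-conversation retention window
	cutoff := now.Add(-msgDestructTime)

	if seq := lastSeqBefore(msgs, cutoff); seq > 0 {
		// Everything at or below seq has expired; hide it by moving minSeq past it.
		fmt.Println("set min_seq to", seq+1) // seqs 1..2 become invisible
	}
}
```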
+7 -3
@@ -16,9 +16,11 @@ package conversation
 import (
     "context"
+    "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
+    "github.com/openimsdk/protocol/msg"
     "github.com/openimsdk/open-im-server/v3/pkg/common/config"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
+    "github.com/openimsdk/open-im-server/v3/pkg/notification"
     "github.com/openimsdk/protocol/constant"
     "github.com/openimsdk/protocol/sdkws"
 )
@@ -27,8 +29,10 @@ type ConversationNotificationSender struct {
     *rpcclient.NotificationSender
 }
-func NewConversationNotificationSender(conf *config.Notification, msgRpcClient *rpcclient.MessageRpcClient) *ConversationNotificationSender {
-    return &ConversationNotificationSender{rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(msgRpcClient))}
+func NewConversationNotificationSender(conf *config.Notification, msgClient *rpcli.MsgClient) *ConversationNotificationSender {
+    return &ConversationNotificationSender{rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+        return msgClient.SendMsg(ctx, req)
+    }))}
 }
 // SetPrivate invote.
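The constructor no longer takes a whole MessageRpcClient; it adapts the single method it needs into the generic send callback accepted by WithRpcClient. A self-contained sketch of this function-adapter option pattern under assumed, simplified types (notifier, withSendFunc, and msgClient below are hypothetical stand-ins):

```go
package main

import (
	"context"
	"fmt"
)

// sendMsgFunc is the narrow capability the sender actually needs.
type sendMsgFunc func(ctx context.Context, text string) error

// notifier depends on a function, not on a concrete RPC client type.
type notifier struct{ send sendMsgFunc }

type option func(*notifier)

// withSendFunc mirrors the WithRpcClient(func(...)...) shape above.
func withSendFunc(f sendMsgFunc) option {
	return func(n *notifier) { n.send = f }
}

func newNotifier(opts ...option) *notifier {
	n := &notifier{}
	for _, o := range opts {
		o(n)
	}
	return n
}

// msgClient stands in for a generated, typed RPC client.
type msgClient struct{}

func (msgClient) SendMsg(ctx context.Context, text string) error {
	fmt.Println("sent:", text)
	return nil
}

func main() {
	var mc msgClient
	// Wrap the typed method in a closure; the notifier never sees the client type.
	n := newNotifier(withSendFunc(func(ctx context.Context, text string) error {
		return mc.SendMsg(ctx, text)
	}))
	_ = n.send(context.Background(), "hello")
}
```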
+1 -1
@@ -110,7 +110,7 @@ func UpdateGroupMemberMap(req *pbgroup.SetGroupMemberInfo) map[string]any {
     m["nickname"] = req.Nickname.Value
 }
 if req.FaceURL != nil {
-    m["user_group_face_url"] = req.FaceURL.Value
+    m["face_url"] = req.FaceURL.Value
 }
 if req.RoleLevel != nil {
     m["role_level"] = req.RoleLevel.Value
+50 -64
@@ -17,6 +17,7 @@ package group
 import (
     "context"
     "fmt"
+    "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
     "math/big"
     "math/rand"
     "strconv"
@@ -36,9 +37,7 @@ import (
     "github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
     "github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
     "github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient/grouphash"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
+    "github.com/openimsdk/open-im-server/v3/pkg/notification/grouphash"
     "github.com/openimsdk/protocol/constant"
     pbconversation "github.com/openimsdk/protocol/conversation"
     pbgroup "github.com/openimsdk/protocol/group"
@@ -57,13 +56,14 @@ import (
 )
 type groupServer struct {
-    db                    controller.GroupDatabase
-    user                  rpcclient.UserRpcClient
-    notification          *GroupNotificationSender
-    conversationRpcClient rpcclient.ConversationRpcClient
-    msgRpcClient          rpcclient.MessageRpcClient
-    config                *Config
-    webhookClient         *webhook.Client
+    pbgroup.UnimplementedGroupServer
+    db                 controller.GroupDatabase
+    notification       *NotificationSender
+    config             *Config
+    webhookClient      *webhook.Client
+    userClient         *rpcli.UserClient
+    msgClient          *rpcli.MsgClient
+    conversationClient *rpcli.ConversationClient
 }
 type Config struct {
@@ -98,32 +98,33 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
 if err != nil {
     return err
 }
-userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
-msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
-conversationRpcClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
-var gs groupServer
-database := controller.NewGroupDatabase(rdb, &config.LocalCacheConfig, groupDB, groupMemberDB, groupRequestDB, mgocli.GetTx(), grouphash.NewGroupHashFromGroupServer(&gs))
-gs.db = database
-gs.user = userRpcClient
-gs.notification = NewGroupNotificationSender(
-    database,
-    &msgRpcClient,
-    &userRpcClient,
-    &conversationRpcClient,
-    config,
-    func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error) {
-        users, err := userRpcClient.GetUsersInfo(ctx, userIDs)
-        if err != nil {
-            return nil, err
-        }
-        return datautil.Slice(users, func(e *sdkws.UserInfo) notification.CommonUser { return e }), nil
-    },
-)
+//userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
+//msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
+//conversationRpcClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
+userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
+if err != nil {
+    return err
+}
+msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
+if err != nil {
+    return err
+}
+conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
+if err != nil {
+    return err
+}
+gs := groupServer{
+    config:             config,
+    webhookClient:      webhook.NewWebhookClient(config.WebhooksConfig.URL),
+    userClient:         rpcli.NewUserClient(userConn),
+    msgClient:          rpcli.NewMsgClient(msgConn),
+    conversationClient: rpcli.NewConversationClient(conversationConn),
+}
+gs.db = controller.NewGroupDatabase(rdb, &config.LocalCacheConfig, groupDB, groupMemberDB, groupRequestDB, mgocli.GetTx(), grouphash.NewGroupHashFromGroupServer(&gs))
+gs.notification = NewNotificationSender(gs.db, config, gs.userClient, gs.msgClient, gs.conversationClient)
 localcache.InitLocalCache(&config.LocalCacheConfig)
-gs.conversationRpcClient = conversationRpcClient
-gs.msgRpcClient = msgRpcClient
-gs.config = config
-gs.webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
 pbgroup.RegisterGroupServer(server, &gs)
 return nil
 }
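The wiring change replaces per-service wrapper clients with raw connections obtained from service discovery plus thin typed wrappers (rpcli.NewUserClient(userConn) and friends), failing fast at startup when a dependency cannot be resolved. A self-contained sketch of the shape of this pattern; registry, conn, and userClient below are simplified stand-ins, not the real discovery or rpcli APIs:

```go
package main

import (
	"context"
	"fmt"
)

// conn stands in for a *grpc.ClientConn obtained from service discovery.
type conn struct{ target string }

// registry stands in for a service-discovery registry: name -> connection.
type registry struct{}

func (registry) GetConn(ctx context.Context, name string) (*conn, error) {
	return &conn{target: name}, nil // real code resolves and dials here
}

// userClient is a thin typed wrapper over the shared connection,
// analogous to rpcli.NewUserClient(userConn).
type userClient struct{ c *conn }

func newUserClient(c *conn) *userClient { return &userClient{c: c} }

func (u *userClient) CheckUser(ctx context.Context, userIDs []string) error {
	fmt.Printf("check %v via %s\n", userIDs, u.c.target)
	return nil
}

func main() {
	ctx := context.Background()
	var reg registry
	// Resolve once at startup; fail fast if a dependency is missing.
	userConn, err := reg.GetConn(ctx, "user-rpc")
	if err != nil {
		panic(err)
	}
	uc := newUserClient(userConn)
	_ = uc.CheckUser(ctx, []string{"u1", "u2"})
}
```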
@@ -167,19 +168,6 @@ func (g *groupServer) CheckGroupAdmin(ctx context.Context, groupID string) error
 return nil
 }
-func (g *groupServer) GetPublicUserInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.PublicUserInfo, error) {
-    if len(userIDs) == 0 {
-        return map[string]*sdkws.PublicUserInfo{}, nil
-    }
-    users, err := g.user.GetPublicUserInfos(ctx, userIDs)
-    if err != nil {
-        return nil, err
-    }
-    return datautil.SliceToMapAny(users, func(e *sdkws.PublicUserInfo) (string, *sdkws.PublicUserInfo) {
-        return e.UserID, e
-    }), nil
-}
 func (g *groupServer) IsNotFound(err error) bool {
 return errs.ErrRecordNotFound.Is(specialerror.ErrCode(errs.Unwrap(err)))
 }
@@ -221,7 +209,6 @@ func (g *groupServer) CreateGroup(ctx context.Context, req *pbgroup.CreateGroupR
     return nil, errs.ErrArgs.WrapMsg("no group owner")
 }
 if err := authverify.CheckAccessV3(ctx, req.OwnerUserID, g.config.Share.IMAdminUserID); err != nil {
     return nil, err
 }
 userIDs := append(append(req.MemberUserIDs, req.AdminUserIDs...), req.OwnerUserID)
@@ -234,7 +221,7 @@ func (g *groupServer) CreateGroup(ctx context.Context, req *pbgroup.CreateGroupR
     return nil, errs.ErrArgs.WrapMsg("group member repeated")
 }
-userMap, err := g.user.GetUsersInfoMap(ctx, userIDs)
+userMap, err := g.userClient.GetUsersInfoMap(ctx, userIDs)
 if err != nil {
     return nil, err
 }
@@ -385,7 +372,7 @@ func (g *groupServer) InviteUserToGroup(ctx context.Context, req *pbgroup.Invite
     return nil, servererrs.ErrDismissedAlready.WrapMsg("group dismissed checking group status found it dismissed")
 }
-userMap, err := g.user.GetUsersInfoMap(ctx, req.InvitedUserIDs)
+userMap, err := g.userClient.GetUsersInfoMap(ctx, req.InvitedUserIDs)
 if err != nil {
     return nil, err
 }
@@ -696,7 +683,7 @@ func (g *groupServer) GetGroupApplicationList(ctx context.Context, req *pbgroup.
     userIDs = append(userIDs, gr.UserID)
 }
 userIDs = datautil.Distinct(userIDs)
-userMap, err := g.user.GetPublicUserInfoMap(ctx, userIDs)
+userMap, err := g.userClient.GetUsersInfoMap(ctx, userIDs)
 if err != nil {
     return nil, err
 }
@@ -808,7 +795,7 @@ func (g *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
 } else if !g.IsNotFound(err) {
     return nil, err
 }
-if _, err := g.user.GetPublicUserInfo(ctx, req.FromUserID); err != nil {
+if err := g.userClient.CheckUser(ctx, []string{req.FromUserID}); err != nil {
     return nil, err
 }
 var member *model.GroupMember
@@ -852,7 +839,7 @@ func (g *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
 }
 func (g *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq) (*pbgroup.JoinGroupResp, error) {
-    user, err := g.user.GetUserInfo(ctx, req.InviterUserID)
+    user, err := g.userClient.GetUserInfo(ctx, req.InviterUserID)
     if err != nil {
         return nil, err
     }
@@ -958,12 +945,12 @@ func (g *groupServer) QuitGroup(ctx context.Context, req *pbgroup.QuitGroupReq)
 }
 func (g *groupServer) deleteMemberAndSetConversationSeq(ctx context.Context, groupID string, userIDs []string) error {
-    conevrsationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
-    maxSeq, err := g.msgRpcClient.GetConversationMaxSeq(ctx, conevrsationID)
+    conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
+    maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
     if err != nil {
         return err
     }
-    return g.conversationRpcClient.SetConversationMaxSeq(ctx, userIDs, conevrsationID, maxSeq)
+    return g.conversationClient.SetConversationMaxSeq(ctx, conversationID, userIDs, maxSeq)
 }
 func (g *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInfoReq) (*pbgroup.SetGroupInfoResp, error) {
@@ -1039,7 +1026,7 @@ func (g *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInf
     return
 }
 conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.GroupNotification}
-if err := g.conversationRpcClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
+if err := g.conversationClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
     log.ZWarn(ctx, "SetConversations", err, "UserIDs", resp.UserIDs, "conversation", conversation)
 }
 }()
@@ -1152,8 +1139,7 @@ func (g *groupServer) SetGroupInfoEx(ctx context.Context, req *pbgroup.SetGroupI
 }
 conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.GroupNotification}
-if err := g.conversationRpcClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
+if err := g.conversationClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
     log.ZWarn(ctx, "SetConversations", err, "UserIDs", resp.UserIDs, "conversation", conversation)
 }
 }()
@@ -1305,7 +1291,7 @@ func (g *groupServer) GetGroupMembersCMS(ctx context.Context, req *pbgroup.GetGr
 }
 func (g *groupServer) GetUserReqApplicationList(ctx context.Context, req *pbgroup.GetUserReqApplicationListReq) (*pbgroup.GetUserReqApplicationListResp, error) {
-    user, err := g.user.GetPublicUserInfo(ctx, req.UserID)
+    user, err := g.userClient.GetUserInfo(ctx, req.UserID)
     if err != nil {
         return nil, err
     }
@@ -1761,7 +1747,7 @@ func (g *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *
     return nil, servererrs.ErrGroupIDNotFound.WrapMsg(strings.Join(ids, ","))
 }
-userMap, err := g.user.GetPublicUserInfoMap(ctx, req.UserIDs)
+userMap, err := g.userClient.GetUsersInfoMap(ctx, req.UserIDs)
 if err != nil {
     return nil, err
 }
@@ -1792,7 +1778,7 @@ func (g *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *
     ownerUserID = owner.UserID
 }
-var userInfo *sdkws.PublicUserInfo
+var userInfo *sdkws.UserInfo
 if user, ok := userMap[e.UserID]; !ok {
     userInfo = user
 }
@@ -1838,7 +1824,7 @@ func (g *groupServer) GetSpecifiedUserGroupRequestInfo(ctx context.Context, req 
     return nil, err
 }
-userInfos, err := g.user.GetPublicUserInfos(ctx, []string{req.UserID})
+userInfos, err := g.userClient.GetUsersInfo(ctx, []string{req.UserID})
 if err != nil {
     return nil, err
 }
+69 -75
@@ -18,6 +18,10 @@ import (
     "context"
     "errors"
     "fmt"
+    "time"
+    "github.com/openimsdk/open-im-server/v3/pkg/rpcli"
     "github.com/openimsdk/open-im-server/v3/pkg/authverify"
     "github.com/openimsdk/open-im-server/v3/pkg/common/convert"
     "github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
@@ -26,8 +30,8 @@ import (
     "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
     "github.com/openimsdk/open-im-server/v3/pkg/common/storage/versionctx"
     "github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
+    "github.com/openimsdk/open-im-server/v3/pkg/notification"
+    "github.com/openimsdk/open-im-server/v3/pkg/notification/common_user"
     "github.com/openimsdk/protocol/constant"
     pbgroup "github.com/openimsdk/protocol/group"
     "github.com/openimsdk/protocol/msg"
@@ -38,7 +42,6 @@ import (
     "github.com/openimsdk/tools/utils/datautil"
     "github.com/openimsdk/tools/utils/stringutil"
     "go.mongodb.org/mongo-driver/mongo"
-    "time"
 )
 // GroupApplicationReceiver
@@ -47,36 +50,38 @@ const (
     adminReceiver
 )
-func NewGroupNotificationSender(
-    db controller.GroupDatabase,
-    msgRpcClient *rpcclient.MessageRpcClient,
-    userRpcClient *rpcclient.UserRpcClient,
-    conversationRpcClient *rpcclient.ConversationRpcClient,
-    config *Config,
-    fn func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error),
-) *GroupNotificationSender {
-    return &GroupNotificationSender{
-        NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithRpcClient(msgRpcClient), rpcclient.WithUserRpcClient(userRpcClient)),
-        getUsersInfo:       fn,
+func NewNotificationSender(db controller.GroupDatabase, config *Config, userClient *rpcli.UserClient, msgClient *rpcli.MsgClient, conversationClient *rpcli.ConversationClient) *NotificationSender {
+    return &NotificationSender{
+        NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig,
+            rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+                return msgClient.SendMsg(ctx, req)
+            }),
+            rpcclient.WithUserRpcClient(userClient.GetUserInfo),
+        ),
+        getUsersInfo: func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error) {
+            users, err := userClient.GetUsersInfo(ctx, userIDs)
+            if err != nil {
+                return nil, err
+            }
+            return datautil.Slice(users, func(e *sdkws.UserInfo) common_user.CommonUser { return e }), nil
+        },
         db:     db,
         config: config,
-        conversationRpcClient: conversationRpcClient,
-        msgRpcClient:          msgRpcClient,
+        msgClient:          msgClient,
+        conversationClient: conversationClient,
     }
 }
-type GroupNotificationSender struct {
+type NotificationSender struct {
     *rpcclient.NotificationSender
-    getUsersInfo func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error)
+    getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
     db     controller.GroupDatabase
     config *Config
-    conversationRpcClient *rpcclient.ConversationRpcClient
-    msgRpcClient          *rpcclient.MessageRpcClient
+    msgClient          *rpcli.MsgClient
+    conversationClient *rpcli.ConversationClient
 }
-func (g *GroupNotificationSender) PopulateGroupMember(ctx context.Context, members ...*model.GroupMember) error {
+func (g *NotificationSender) PopulateGroupMember(ctx context.Context, members ...*model.GroupMember) error {
     if len(members) == 0 {
         return nil
     }
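getUsersInfo now closes over the typed user client and converts the concrete *sdkws.UserInfo values into the common_user.CommonUser interface via datautil.Slice. A self-contained sketch of that interface-adapter conversion (commonUser, userInfo, and slice are simplified stand-ins for the real types and helper):

```go
package main

import (
	"context"
	"fmt"
)

// commonUser is the minimal read-only view the notification code needs,
// in the spirit of common_user.CommonUser.
type commonUser interface {
	GetUserID() string
	GetNickname() string
}

// userInfo is a stand-in for *sdkws.UserInfo; it satisfies commonUser.
type userInfo struct {
	UserID   string
	Nickname string
}

func (u *userInfo) GetUserID() string   { return u.UserID }
func (u *userInfo) GetNickname() string { return u.Nickname }

// slice maps []A to []B, like datautil.Slice.
func slice[A, B any](in []A, f func(A) B) []B {
	out := make([]B, 0, len(in))
	for _, a := range in {
		out = append(out, f(a))
	}
	return out
}

// getUsersInfo adapts a concrete fetch into the interface the sender wants.
func getUsersInfo(ctx context.Context, userIDs []string) ([]commonUser, error) {
	users := slice(userIDs, func(id string) *userInfo {
		return &userInfo{UserID: id, Nickname: "nick-" + id} // stand-in fetch
	})
	return slice(users, func(u *userInfo) commonUser { return u }), nil
}

func main() {
	us, _ := getUsersInfo(context.Background(), []string{"u1"})
	fmt.Println(us[0].GetUserID(), us[0].GetNickname())
}
```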
@@ -91,7 +96,7 @@ func (g *GroupNotificationSender) PopulateGroupMember(ctx context.Context, membe
 if err != nil {
     return err
 }
-userMap := make(map[string]notification.CommonUser)
+userMap := make(map[string]common_user.CommonUser)
 for i, user := range users {
     userMap[user.GetUserID()] = users[i]
 }
@@ -111,7 +116,7 @@
 return nil
 }
-func (g *GroupNotificationSender) getUser(ctx context.Context, userID string) (*sdkws.PublicUserInfo, error) {
+func (g *NotificationSender) getUser(ctx context.Context, userID string) (*sdkws.PublicUserInfo, error) {
 users, err := g.getUsersInfo(ctx, []string{userID})
 if err != nil {
     return nil, err
@@ -127,7 +132,7 @@ func (g *GroupNotificationSender) getUser(ctx context.Context, userID string) (*
 }, nil
 }
-func (g *GroupNotificationSender) getGroupInfo(ctx context.Context, groupID string) (*sdkws.GroupInfo, error) {
+func (g *NotificationSender) getGroupInfo(ctx context.Context, groupID string) (*sdkws.GroupInfo, error) {
 gm, err := g.db.TakeGroup(ctx, groupID)
 if err != nil {
     return nil, err
@@ -148,7 +153,7 @@
 return convert.Db2PbGroupInfo(gm, ownerUserID, num), nil
 }
-func (g *GroupNotificationSender) getGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+func (g *NotificationSender) getGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
 members, err := g.db.FindGroupMembers(ctx, groupID, userIDs)
 if err != nil {
     return nil, err
@@ -164,7 +169,7 @@
 return res, nil
 }
-func (g *GroupNotificationSender) getGroupMemberMap(ctx context.Context, groupID string, userIDs []string) (map[string]*sdkws.GroupMemberFullInfo, error) {
+func (g *NotificationSender) getGroupMemberMap(ctx context.Context, groupID string, userIDs []string) (map[string]*sdkws.GroupMemberFullInfo, error) {
 members, err := g.getGroupMembers(ctx, groupID, userIDs)
 if err != nil {
     return nil, err
@@ -176,7 +181,7 @@ func (g *GroupNotificationSender) getGroupMemberMap(ctx context.Context, groupID
 return m, nil
 }
-func (g *GroupNotificationSender) getGroupMember(ctx context.Context, groupID string, userID string) (*sdkws.GroupMemberFullInfo, error) {
+func (g *NotificationSender) getGroupMember(ctx context.Context, groupID string, userID string) (*sdkws.GroupMemberFullInfo, error) {
 members, err := g.getGroupMembers(ctx, groupID, []string{userID})
 if err != nil {
     return nil, err
@@ -187,7 +192,7 @@
 return members[0], nil
 }
-func (g *GroupNotificationSender) getGroupOwnerAndAdminUserID(ctx context.Context, groupID string) ([]string, error) {
+func (g *NotificationSender) getGroupOwnerAndAdminUserID(ctx context.Context, groupID string) ([]string, error) {
 members, err := g.db.FindGroupMemberRoleLevels(ctx, groupID, []int32{constant.GroupOwner, constant.GroupAdmin})
 if err != nil {
     return nil, err
@@ -199,7 +204,7 @@
 return datautil.Slice(members, fn), nil
 }
-func (g *GroupNotificationSender) groupMemberDB2PB(member *model.GroupMember, appMangerLevel int32) *sdkws.GroupMemberFullInfo {
+func (g *NotificationSender) groupMemberDB2PB(member *model.GroupMember, appMangerLevel int32) *sdkws.GroupMemberFullInfo {
 return &sdkws.GroupMemberFullInfo{
     GroupID: member.GroupID,
     UserID:  member.UserID,
@@ -216,7 +221,7 @@
 }
 }
-/* func (g *GroupNotificationSender) getUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
+/* func (g *NotificationSender) getUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
 users, err := g.getUsersInfo(ctx, userIDs)
 if err != nil {
     return nil, err
@@ -228,11 +233,11 @@ func (g *GroupNotificationSender) groupMemberDB2PB(member *model.GroupMember, ap
 return result, nil
 } */
-func (g *GroupNotificationSender) fillOpUser(ctx context.Context, opUser **sdkws.GroupMemberFullInfo, groupID string) (err error) {
+func (g *NotificationSender) fillOpUser(ctx context.Context, opUser **sdkws.GroupMemberFullInfo, groupID string) (err error) {
 return g.fillOpUserByUserID(ctx, mcontext.GetOpUserID(ctx), opUser, groupID)
 }
-func (g *GroupNotificationSender) fillOpUserByUserID(ctx context.Context, userID string, opUser **sdkws.GroupMemberFullInfo, groupID string) error {
+func (g *NotificationSender) fillOpUserByUserID(ctx context.Context, userID string, opUser **sdkws.GroupMemberFullInfo, groupID string) error {
 if opUser == nil {
     return errs.ErrInternalServer.WrapMsg("**sdkws.GroupMemberFullInfo is nil")
 }
@@ -276,7 +281,7 @@
 return nil
 }
-func (g *GroupNotificationSender) setVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string) {
+func (g *NotificationSender) setVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string) {
 versions := versionctx.GetVersionLog(ctx).Get()
 for _, coll := range versions {
     if coll.Name == collName && coll.Doc.DID == id {
@@ -287,7 +292,7 @@
     }
 }
-func (g *GroupNotificationSender) setSortVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string, sortVersion *uint64) {
+func (g *NotificationSender) setSortVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string, sortVersion *uint64) {
 versions := versionctx.GetVersionLog(ctx).Get()
 for _, coll := range versions {
     if coll.Name == collName && coll.Doc.DID == id {
@@ -302,7 +307,7 @@
     }
 }
-func (g *GroupNotificationSender) GroupCreatedNotification(ctx context.Context, tips *sdkws.GroupCreatedTips) {
+func (g *NotificationSender) GroupCreatedNotification(ctx context.Context, tips *sdkws.GroupCreatedTips) {
 var err error
 defer func() {
     if err != nil {
@@ -316,7 +321,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupCreatedNotification, tips)
 }
-func (g *GroupNotificationSender) GroupInfoSetNotification(ctx context.Context, tips *sdkws.GroupInfoSetTips) {
+func (g *NotificationSender) GroupInfoSetNotification(ctx context.Context, tips *sdkws.GroupInfoSetTips) {
 var err error
 defer func() {
     if err != nil {
@@ -330,7 +335,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetNotification, tips, rpcclient.WithRpcGetUserName())
 }
-func (g *GroupNotificationSender) GroupInfoSetNameNotification(ctx context.Context, tips *sdkws.GroupInfoSetNameTips) {
+func (g *NotificationSender) GroupInfoSetNameNotification(ctx context.Context, tips *sdkws.GroupInfoSetNameTips) {
 var err error
 defer func() {
     if err != nil {
@@ -344,7 +349,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetNameNotification, tips)
 }
-func (g *GroupNotificationSender) GroupInfoSetAnnouncementNotification(ctx context.Context, tips *sdkws.GroupInfoSetAnnouncementTips) {
+func (g *NotificationSender) GroupInfoSetAnnouncementNotification(ctx context.Context, tips *sdkws.GroupInfoSetAnnouncementTips) {
 var err error
 defer func() {
     if err != nil {
@@ -358,7 +363,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetAnnouncementNotification, tips, rpcclient.WithRpcGetUserName())
 }
-func (g *GroupNotificationSender) JoinGroupApplicationNotification(ctx context.Context, req *pbgroup.JoinGroupReq) {
+func (g *NotificationSender) JoinGroupApplicationNotification(ctx context.Context, req *pbgroup.JoinGroupReq) {
 var err error
 defer func() {
     if err != nil {
@@ -386,7 +391,7 @@
     }
 }
-func (g *GroupNotificationSender) MemberQuitNotification(ctx context.Context, member *sdkws.GroupMemberFullInfo) {
+func (g *NotificationSender) MemberQuitNotification(ctx context.Context, member *sdkws.GroupMemberFullInfo) {
 var err error
 defer func() {
     if err != nil {
@@ -403,7 +408,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), member.GroupID, constant.MemberQuitNotification, tips)
 }
-func (g *GroupNotificationSender) GroupApplicationAcceptedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
+func (g *NotificationSender) GroupApplicationAcceptedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
 var err error
 defer func() {
     if err != nil {
@@ -436,7 +441,7 @@
     }
 }
-func (g *GroupNotificationSender) GroupApplicationRejectedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
+func (g *NotificationSender) GroupApplicationRejectedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
 var err error
 defer func() {
     if err != nil {
@@ -469,7 +474,7 @@
     }
 }
-func (g *GroupNotificationSender) GroupOwnerTransferredNotification(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) {
+func (g *NotificationSender) GroupOwnerTransferredNotification(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) {
 var err error
 defer func() {
     if err != nil {
@@ -500,7 +505,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupOwnerTransferredNotification, tips)
 }
-func (g *GroupNotificationSender) MemberKickedNotification(ctx context.Context, tips *sdkws.MemberKickedTips) {
+func (g *NotificationSender) MemberKickedNotification(ctx context.Context, tips *sdkws.MemberKickedTips) {
 var err error
 defer func() {
     if err != nil {
@@ -514,7 +519,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.MemberKickedNotification, tips)
 }
-func (g *GroupNotificationSender) GroupApplicationAgreeMemberEnterNotification(ctx context.Context, groupID string, invitedOpUserID string, entrantUserID ...string) error {
+func (g *NotificationSender) GroupApplicationAgreeMemberEnterNotification(ctx context.Context, groupID string, invitedOpUserID string, entrantUserID ...string) error {
 var err error
 defer func() {
     if err != nil {
@@ -524,20 +529,15 @@ func (g *GroupNotificationSender) GroupApplicationAgreeMemberEnterNotification(c
 if !g.config.RpcConfig.EnableHistoryForNewMembers {
     conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
-    maxSeq, err := g.msgRpcClient.GetConversationMaxSeq(ctx, conversationID)
+    maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
     if err != nil {
         return err
     }
-    if _, err = g.msgRpcClient.SetUserConversationsMinSeq(ctx, &msg.SetUserConversationsMinSeqReq{
-        UserIDs:        entrantUserID,
-        ConversationID: conversationID,
-        Seq:            maxSeq,
-    }); err != nil {
+    if err := g.msgClient.SetUserConversationsMinSeq(ctx, conversationID, entrantUserID, maxSeq+1); err != nil {
         return err
     }
 }
-if err := g.conversationRpcClient.GroupChatFirstCreateConversation(ctx, groupID, entrantUserID); err != nil {
+if err := g.conversationClient.CreateGroupChatConversations(ctx, groupID, entrantUserID); err != nil {
     return err
 }
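Note the seq change buried in this hunk: the old code set the entrant's min seq to maxSeq, the new code to maxSeq+1. Assuming reads serve seqs >= minSeq, the old value still exposed the newest pre-join message; the sketch below shows the boundary arithmetic:

```go
package main

import "fmt"

// visible reports whether a member with the given minSeq can read seq,
// assuming reads serve everything at or above minSeq.
func visible(seq, minSeq int64) bool { return seq >= minSeq }

func main() {
	maxSeq := int64(42) // the conversation's latest message when the member joins

	// Old behavior: minSeq = maxSeq still leaks message 42 to the newcomer.
	fmt.Println(visible(42, maxSeq)) // true

	// Fixed behavior: minSeq = maxSeq+1 hides all pre-join history.
	fmt.Println(visible(42, maxSeq+1)) // false
	fmt.Println(visible(43, maxSeq+1)) // true: post-join messages are visible
}
```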
@@ -573,7 +573,7 @@
 return nil
 }
-func (g *GroupNotificationSender) MemberEnterNotification(ctx context.Context, groupID string, entrantUserID string) error {
+func (g *NotificationSender) MemberEnterNotification(ctx context.Context, groupID string, entrantUserID string) error {
 var err error
 defer func() {
     if err != nil {
@@ -583,23 +583,17 @@
 if !g.config.RpcConfig.EnableHistoryForNewMembers {
     conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
-    maxSeq, err := g.msgRpcClient.GetConversationMaxSeq(ctx, conversationID)
+    maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
     if err != nil {
         return err
     }
-    if _, err = g.msgRpcClient.SetUserConversationsMinSeq(ctx, &msg.SetUserConversationsMinSeqReq{
-        UserIDs:        []string{entrantUserID},
-        ConversationID: conversationID,
-        Seq:            maxSeq,
-    }); err != nil {
+    if err := g.msgClient.SetUserConversationsMinSeq(ctx, conversationID, []string{entrantUserID}, maxSeq+1); err != nil {
         return err
     }
 }
-if err := g.conversationRpcClient.GroupChatFirstCreateConversation(ctx, groupID, []string{entrantUserID}); err != nil {
+if err := g.conversationClient.CreateGroupChatConversations(ctx, groupID, []string{entrantUserID}); err != nil {
     return err
 }
 var group *sdkws.GroupInfo
 group, err = g.getGroupInfo(ctx, groupID)
 if err != nil {
@@ -620,7 +614,7 @@
 return nil
 }
-func (g *GroupNotificationSender) GroupDismissedNotification(ctx context.Context, tips *sdkws.GroupDismissedTips) {
+func (g *NotificationSender) GroupDismissedNotification(ctx context.Context, tips *sdkws.GroupDismissedTips) {
 var err error
 defer func() {
     if err != nil {
@@ -633,7 +627,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupDismissedNotification, tips)
 }
-func (g *GroupNotificationSender) GroupMemberMutedNotification(ctx context.Context, groupID, groupMemberUserID string, mutedSeconds uint32) {
+func (g *NotificationSender) GroupMemberMutedNotification(ctx context.Context, groupID, groupMemberUserID string, mutedSeconds uint32) {
 var err error
 defer func() {
     if err != nil {
@@ -661,7 +655,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberMutedNotification, tips)
 }
-func (g *GroupNotificationSender) GroupMemberCancelMutedNotification(ctx context.Context, groupID, groupMemberUserID string) {
+func (g *NotificationSender) GroupMemberCancelMutedNotification(ctx context.Context, groupID, groupMemberUserID string) {
 var err error
 defer func() {
     if err != nil {
@@ -686,7 +680,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberCancelMutedNotification, tips)
 }
-func (g *GroupNotificationSender) GroupMutedNotification(ctx context.Context, groupID string) {
+func (g *NotificationSender) GroupMutedNotification(ctx context.Context, groupID string) {
 var err error
 defer func() {
     if err != nil {
@@ -714,7 +708,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMutedNotification, tips)
 }
-func (g *GroupNotificationSender) GroupCancelMutedNotification(ctx context.Context, groupID string) {
+func (g *NotificationSender) GroupCancelMutedNotification(ctx context.Context, groupID string) {
 var err error
 defer func() {
     if err != nil {
@@ -742,7 +736,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupCancelMutedNotification, tips)
 }
-func (g *GroupNotificationSender) GroupMemberInfoSetNotification(ctx context.Context, groupID, groupMemberUserID string) {
+func (g *NotificationSender) GroupMemberInfoSetNotification(ctx context.Context, groupID, groupMemberUserID string) {
 var err error
 defer func() {
     if err != nil {
@@ -767,7 +761,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberInfoSetNotification, tips)
 }
-func (g *GroupNotificationSender) GroupMemberSetToAdminNotification(ctx context.Context, groupID, groupMemberUserID string) {
+func (g *NotificationSender) GroupMemberSetToAdminNotification(ctx context.Context, groupID, groupMemberUserID string) {
 var err error
 defer func() {
     if err != nil {
@@ -791,7 +785,7 @@
 g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberSetToAdminNotification, tips)
 }
-func (g *GroupNotificationSender) GroupMemberSetToOrdinaryUserNotification(ctx context.Context, groupID, groupMemberUserID string) {
+func (g *NotificationSender) GroupMemberSetToOrdinaryUserNotification(ctx context.Context, groupID, groupMemberUserID string) {
 var err error
 defer func() {
     if err != nil {
+1 -1
@@ -137,7 +137,7 @@ func (m *msgServer) MarkConversationAsRead(ctx context.Context, req *msg.MarkCon
     return nil, err
 }
 hasReadSeq, err := m.MsgDatabase.GetHasReadSeq(ctx, req.UserID, req.ConversationID)
-if err != nil && errors.Is(err, redis.Nil) {
+if err != nil && !errors.Is(err, redis.Nil) {
     return nil, err
 }
 var seqs []int64
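The old condition `err != nil && errors.Is(err, redis.Nil)` returned an error exactly when the key was merely absent, and swallowed every real failure; the fix inverts the test so a cache miss falls through as "no read seq yet". A self-contained sketch of the corrected guard (a local sentinel stands in for redis.Nil):

```go
package main

import (
	"errors"
	"fmt"
)

// errNil stands in for redis.Nil: "key does not exist", not a failure.
var errNil = errors.New("nil: key does not exist")

func getHasReadSeq(userID string) (int64, error) {
	return 0, errNil // simulate a user who has never read the conversation
}

func main() {
	hasReadSeq, err := getHasReadSeq("u1")
	// Propagate only real failures; a miss just means "start from 0".
	if err != nil && !errors.Is(err, errNil) {
		panic(err)
	}
	fmt.Println("hasReadSeq:", hasReadSeq) // 0: proceed with defaults
}
```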
+39 -104
@@ -2,124 +2,59 @@ package msg
 import (
     "context"
-    "time"
     "github.com/openimsdk/open-im-server/v3/pkg/authverify"
-    "github.com/openimsdk/open-im-server/v3/pkg/common/convert"
-    pbconversation "github.com/openimsdk/protocol/conversation"
     "github.com/openimsdk/protocol/msg"
-    "github.com/openimsdk/protocol/wrapperspb"
-    "github.com/openimsdk/tools/errs"
     "github.com/openimsdk/tools/log"
-    "github.com/openimsdk/tools/mcontext"
-    "github.com/openimsdk/tools/utils/idutil"
-    "github.com/openimsdk/tools/utils/stringutil"
-    "golang.org/x/sync/errgroup"
+    "strings"
 )
-// hard delete in Database.
-func (m *msgServer) ClearMsg(ctx context.Context, req *msg.ClearMsgReq) (_ *msg.ClearMsgResp, err error) {
+// DestructMsgs hard delete in Database.
+func (m *msgServer) DestructMsgs(ctx context.Context, req *msg.DestructMsgsReq) (*msg.DestructMsgsResp, error) {
 if err := authverify.CheckAdmin(ctx, m.config.Share.IMAdminUserID); err != nil {
     return nil, err
 }
-if req.Timestamp > time.Now().UnixMilli() {
-    return nil, errs.ErrArgs.WrapMsg("request millisecond timestamp error")
-}
-var (
-    docNum int
-    msgNum int
-    start  = time.Now()
-)
-clearMsg := func(ctx context.Context) (bool, error) {
-    docIDs, err := m.MsgDatabase.GetDocIDs(ctx)
-    if err != nil {
-        return false, err
-    }
-    msgs, err := m.MsgDatabase.GetBeforeMsg(ctx, req.Timestamp, docIDs, 5000)
-    if err != nil {
-        return false, err
-    }
-    if len(msgs) == 0 {
-        return false, nil
-    }
-    for _, msg := range msgs {
-        index, err := m.MsgDatabase.DeleteDocMsgBefore(ctx, req.Timestamp, msg)
-        if err != nil {
-            return false, err
-        }
-        if len(index) == 0 {
-            return false, errs.ErrInternalServer.WrapMsg("delete doc msg failed")
-        }
-        docNum++
-        msgNum += len(index)
-    }
-    return true, nil
-}
-_, err = clearMsg(ctx)
+docs, err := m.MsgDatabase.GetRandBeforeMsg(ctx, req.Timestamp, int(req.Limit))
 if err != nil {
-    log.ZError(ctx, "clear msg failed", err, "docNum", docNum, "msgNum", msgNum, "cost", time.Since(start))
     return nil, err
 }
-log.ZDebug(ctx, "clearing message", "docNum", docNum, "msgNum", msgNum, "cost", time.Since(start))
-return &msg.ClearMsgResp{}, nil
-}
-// soft delete for self
-func (m *msgServer) DestructMsgs(ctx context.Context, req *msg.DestructMsgsReq) (_ *msg.DestructMsgsResp, err error) {
-temp := convert.ConversationsPb2DB(req.Conversations)
-batchNum := 100
-errg, _ := errgroup.WithContext(ctx)
-errg.SetLimit(100)
-for i := 0; i < len(temp); i += batchNum {
-    batch := temp[i:min(i+batchNum, len(temp))]
-    errg.Go(func() error {
-        for _, conversation := range batch {
-            handleCtx := mcontext.NewCtx(stringutil.GetSelfFuncName() + "-" + idutil.OperationIDGenerator() + "-" + conversation.ConversationID + "-" + conversation.OwnerUserID)
-            log.ZDebug(handleCtx, "User MsgsDestruct",
-                "conversationID", conversation.ConversationID,
-                "ownerUserID", conversation.OwnerUserID,
-                "msgDestructTime", conversation.MsgDestructTime,
-                "lastMsgDestructTime", conversation.LatestMsgDestructTime)
-            seqs, err := m.MsgDatabase.UserMsgsDestruct(handleCtx, conversation.OwnerUserID, conversation.ConversationID, conversation.MsgDestructTime, conversation.LatestMsgDestructTime)
-            if err != nil {
-                log.ZError(handleCtx, "user msg destruct failed", err, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID)
-                continue
-            }
-            if len(seqs) > 0 {
-                if err := m.Conversation.UpdateConversation(handleCtx,
-                    &pbconversation.UpdateConversationReq{
-                        UserIDs:               []string{conversation.OwnerUserID},
-                        ConversationID:        conversation.ConversationID,
-                        LatestMsgDestructTime: wrapperspb.Int64(time.Now().UnixMilli())}); err != nil {
-                    log.ZError(handleCtx, "updateUsersConversationField failed", err, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID)
-                    continue
-                }
-                // if you need Notify SDK client userseq is update.
-                // m.msgNotificationSender.UserDeleteMsgsNotification(handleCtx, conversation.OwnerUserID, conversation.ConversationID, seqs)
-            }
-        }
-        return nil
-    })
-}
-if err := errg.Wait(); err != nil {
-    return nil, err
-}
-return nil, nil
+for i, doc := range docs {
+    if err := m.MsgDatabase.DeleteDoc(ctx, doc.DocID); err != nil {
+        return nil, err
+    }
+    log.ZDebug(ctx, "DestructMsgs delete doc", "index", i, "docID", doc.DocID)
+    index := strings.LastIndex(doc.DocID, ":")
+    if index < 0 {
+        continue
+    }
+    var minSeq int64
+    for _, model := range doc.Msg {
+        if model.Msg == nil {
+            continue
+        }
+        if model.Msg.Seq > minSeq {
+            minSeq = model.Msg.Seq
+        }
+    }
+    if minSeq <= 0 {
+        continue
+    }
+    conversationID := doc.DocID[:index]
+    if conversationID == "" {
+        continue
+    }
+    minSeq++
+    if err := m.MsgDatabase.SetMinSeq(ctx, conversationID, minSeq); err != nil {
+        return nil, err
+    }
+    log.ZDebug(ctx, "DestructMsgs delete doc set min seq", "index", i, "docID", doc.DocID, "conversationID", conversationID, "setMinSeq", minSeq)
+}
+return &msg.DestructMsgsResp{Count: int32(len(docs))}, nil
+}
+func (m *msgServer) GetLastMessageSeqByTime(ctx context.Context, req *msg.GetLastMessageSeqByTimeReq) (*msg.GetLastMessageSeqByTimeResp, error) {
+seq, err := m.MsgDatabase.GetLastMessageSeqByTime(ctx, req.ConversationID, req.Time)
+if err != nil {
+    return nil, err
+}
+return &msg.GetLastMessageSeqByTimeResp{Seq: seq}, nil
 }
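The rewritten DestructMsgs deletes whole message docs and then advances the conversation's min seq past the highest seq the doc held. The `conversationID:blockIndex` docID layout is inferred here from the strings.LastIndex(doc.DocID, ":") call above, not stated by the diff; the sketch below shows the parsing and min-seq computation with made-up data:

```go
package main

import (
	"fmt"
	"strings"
)

// splitDocID separates "conversationID:blockIndex" at the last colon,
// so conversation IDs that themselves contain ':' still parse correctly.
func splitDocID(docID string) (conversationID string, ok bool) {
	i := strings.LastIndex(docID, ":")
	if i < 0 {
		return "", false
	}
	return docID[:i], docID[:i] != ""
}

func main() {
	docID := "g_group123:0"                 // hypothetical example docID
	seqsInDoc := []int64{1, 2, 3, 4, 5}     // seqs stored in the deleted doc

	convID, ok := splitDocID(docID)
	if !ok {
		return
	}
	var maxSeq int64
	for _, s := range seqsInDoc {
		if s > maxSeq {
			maxSeq = s
		}
	}
	// Readers must not see the deleted block: min seq moves past its last seq.
	fmt.Printf("conversation %s: set min seq to %d\n", convID, maxSeq+1)
}
```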
+14 -15
@@ -74,19 +74,13 @@ func (m *msgServer) DeleteMsgs(ctx context.Context, req *msg.DeleteMsgsReq) (*ms
 if err := m.MsgDatabase.DeleteMsgsPhysicalBySeqs(ctx, req.ConversationID, req.Seqs); err != nil {
     return nil, err
 }
-conversations, err := m.Conversation.GetConversationsByConversationID(ctx, []string{req.ConversationID})
+conv, err := m.conversationClient.GetConversationsByConversationID(ctx, req.ConversationID)
 if err != nil {
     return nil, err
 }
 tips := &sdkws.DeleteMsgsTips{UserID: req.UserID, ConversationID: req.ConversationID, Seqs: req.Seqs}
-m.notificationSender.NotificationWithSessionType(
-    ctx,
-    req.UserID,
-    m.conversationAndGetRecvID(conversations[0], req.UserID),
-    constant.DeleteMsgsNotification,
-    conversations[0].ConversationType,
-    tips,
-)
+m.notificationSender.NotificationWithSessionType(ctx, req.UserID, m.conversationAndGetRecvID(conv, req.UserID),
+    constant.DeleteMsgsNotification, conv.ConversationType, tips)
 } else {
 if err := m.MsgDatabase.DeleteUserMsgsBySeqs(ctx, req.UserID, req.ConversationID, req.Seqs); err != nil {
     return nil, err
@@ -112,16 +106,14 @@ func (m *msgServer) DeleteMsgPhysical(ctx context.Context, req *msg.DeleteMsgPhy
     return nil, err
 }
 remainTime := timeutil.GetCurrentTimestampBySecond() - req.Timestamp
-for _, conversationID := range req.ConversationIDs {
-    if err := m.MsgDatabase.DeleteConversationMsgsAndSetMinSeq(ctx, conversationID, remainTime); err != nil {
-        log.ZWarn(ctx, "DeleteConversationMsgsAndSetMinSeq error", err, "conversationID", conversationID, "err", err)
-    }
+if _, err := m.DestructMsgs(ctx, &msg.DestructMsgsReq{Timestamp: remainTime, Limit: 9999}); err != nil {
+    return nil, err
 }
 return &msg.DeleteMsgPhysicalResp{}, nil
 }
 func (m *msgServer) clearConversation(ctx context.Context, conversationIDs []string, userID string, deleteSyncOpt *msg.DeleteSyncOpt) error {
-conversations, err := m.Conversation.GetConversationsByConversationID(ctx, conversationIDs)
+conversations, err := m.conversationClient.GetConversationsByConversationIDs(ctx, conversationIDs)
 if err != nil {
     return err
 }
@@ -138,9 +130,16 @@
 }
 isSyncSelf, isSyncOther := m.validateDeleteSyncOpt(deleteSyncOpt)
 if !isSyncOther {
-    if err := m.MsgDatabase.SetUserConversationsMinSeqs(ctx, userID, m.getMinSeqs(maxSeqs)); err != nil {
+    setSeqs := m.getMinSeqs(maxSeqs)
+    if err := m.MsgDatabase.SetUserConversationsMinSeqs(ctx, userID, setSeqs); err != nil {
         return err
     }
+    ownerUserIDs := []string{userID}
+    for conversationID, seq := range setSeqs {
+        if err := m.conversationClient.SetConversationMinSeq(ctx, conversationID, ownerUserIDs, seq); err != nil {
+            return err
+        }
+    }
     // notification 2 self
     if isSyncSelf {
         tips := &sdkws.ClearConversationTips{UserID: userID, ConversationIDs: existConversationIDs}
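clearConversation now writes the new per-user minimums to two places: message storage in one batch, then the conversation service per conversation, so the two stores cannot drift. The sketch below assumes getMinSeqs maps each max seq to maxSeq+1 (the usual "hide everything up to now" convention); that semantic is an assumption, not confirmed by this diff:

```go
package main

import "fmt"

// getMinSeqs mirrors the assumed semantics of m.getMinSeqs: clearing a
// conversation means the new per-user minimum points just past the last
// message that existed at clear time.
func getMinSeqs(maxSeqs map[string]int64) map[string]int64 {
	minSeqs := make(map[string]int64, len(maxSeqs))
	for conversationID, maxSeq := range maxSeqs {
		minSeqs[conversationID] = maxSeq + 1
	}
	return minSeqs
}

func main() {
	maxSeqs := map[string]int64{"conv-a": 120, "conv-b": 7}
	setSeqs := getMinSeqs(maxSeqs)

	// The same values are then written to both stores so they stay in step:
	// message storage first, then the conversation service per conversation.
	for conversationID, seq := range setSeqs {
		fmt.Printf("set min seq of %s to %d (msg store + conversation svc)\n",
			conversationID, seq)
	}
}
```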
+1 -1
@@ -17,7 +17,7 @@ package msg
 import (
     "context"
-    "github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
+    "github.com/openimsdk/open-im-server/v3/pkg/notification"
     "github.com/openimsdk/protocol/constant"
     "github.com/openimsdk/protocol/sdkws"
 )
+3 -2
@@ -63,7 +63,8 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
 log.ZDebug(ctx, "GetMsgBySeqs", "conversationID", req.ConversationID, "seq", req.Seq, "msg", string(data))
 var role int32
 if !authverify.IsAppManagerUid(ctx, m.config.Share.IMAdminUserID) {
-    switch msgs[0].SessionType {
+    sessionType := msgs[0].SessionType
+    switch sessionType {
     case constant.SingleChatType:
         if err := authverify.CheckAccessV3(ctx, msgs[0].SendID, m.config.Share.IMAdminUserID); err != nil {
             return nil, err
@@ -89,7 +90,7 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
         role = member.RoleLevel
     }
     default:
-        return nil, errs.ErrInternalServer.WrapMsg("msg sessionType not supported")
+        return nil, errs.ErrInternalServer.WrapMsg("msg sessionType not supported", "sessionType", sessionType)
     }
 }
 now := time.Now().UnixMilli()
+16 -17
@@ -16,7 +16,6 @@ package msg
 import (
     "context"
-    "github.com/openimsdk/tools/mw"
     "github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
     "github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
@@ -84,7 +83,7 @@ func (m *msgServer) setConversationAtInfo(nctx context.Context, msg *sdkws.MsgDa
 defer func() {
     if r := recover(); r != nil {
-        mw.PanicStackToLog(nctx, r)
+        log.ZPanic(nctx, "setConversationAtInfo Panic", errs.ErrPanic(r))
     }
 }()
@@ -97,14 +96,14 @@ func (m *msgServer) setConversationAtInfo(nctx context.Context, msg *sdkws.MsgDa
     ConversationType: msg.SessionType,
     GroupID:          msg.GroupID,
 }
-memberUserIDList, err := m.GroupLocalCache.GetGroupMemberIDs(ctx, msg.GroupID)
-if err != nil {
-    log.ZWarn(ctx, "GetGroupMemberIDs", err)
-    return
-}
 tagAll := datautil.Contain(constant.AtAllString, msg.AtUserIDList...)
 if tagAll {
+    memberUserIDList, err := m.GroupLocalCache.GetGroupMemberIDs(ctx, msg.GroupID)
+    if err != nil {
+        log.ZWarn(ctx, "GetGroupMemberIDs", err)
+        return
+    }
     memberUserIDList = datautil.DeleteElems(memberUserIDList, msg.SendID)
@@ -114,29 +113,29 @@ func (m *msgServer) setConversationAtInfo(nctx context.Context, msg *sdkws.MsgDa
     conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAll}
 } else { // @Everyone and @other people
     conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAllAtMe}
-    err = m.Conversation.SetConversations(ctx, atUserID, conversation)
-    if err != nil {
+    atUserID = datautil.SliceIntersectFuncs(atUserID, memberUserIDList, func(a string) string { return a }, func(b string) string {
+        return b
+    })
+    if err := m.conversationClient.SetConversations(ctx, atUserID, conversation); err != nil {
         log.ZWarn(ctx, "SetConversations", err, "userID", atUserID, "conversation", conversation)
     }
     memberUserIDList = datautil.Single(atUserID, memberUserIDList)
 }
 conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAll}
-err = m.Conversation.SetConversations(ctx, memberUserIDList, conversation)
-if err != nil {
+if err := m.conversationClient.SetConversations(ctx, memberUserIDList, conversation); err != nil {
     log.ZWarn(ctx, "SetConversations", err, "userID", memberUserIDList, "conversation", conversation)
 }
 return
 }
-atUserID = datautil.SliceIntersectFuncs(msg.AtUserIDList, memberUserIDList, func(a string) string { return a }, func(b string) string {
-    return b
-})
 conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtMe}
-err := m.Conversation.SetConversations(ctx, msg.AtUserIDList, conversation)
-if err != nil {
-    log.ZWarn(ctx, "SetConversations", err, msg.AtUserIDList, conversation)
+if err := m.conversationClient.SetConversations(ctx, atUserID, conversation); err != nil {
+    log.ZWarn(ctx, "SetConversations", err, atUserID, conversation)
 }
 }
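The @-list is now intersected with the actual membership before conversations are touched, so @mentions of users who have already left do nothing, and datautil.Single (assumed here to mean "members minus the @-targets") leaves the remaining members with the plain AtAll flag. A self-contained sketch of the two set operations:

```go
package main

import "fmt"

// intersect returns the elements of a that are also in b,
// in the spirit of datautil.SliceIntersectFuncs.
func intersect(a, b []string) []string {
	in := make(map[string]struct{}, len(b))
	for _, x := range b {
		in[x] = struct{}{}
	}
	var out []string
	for _, x := range a {
		if _, ok := in[x]; ok {
			out = append(out, x)
		}
	}
	return out
}

// single returns the elements of b not present in a, mirroring how
// datautil.Single is used above (assumed semantics: members minus @-targets).
func single(a, b []string) []string {
	in := make(map[string]struct{}, len(a))
	for _, x := range a {
		in[x] = struct{}{}
	}
	var out []string
	for _, x := range b {
		if _, ok := in[x]; !ok {
			out = append(out, x)
		}
	}
	return out
}

func main() {
	members := []string{"u1", "u2", "u3"}
	atList := []string{"u2", "u9"} // u9 already left the group

	atUserID := intersect(atList, members)
	rest := single(atUserID, members)
	fmt.Println(atUserID, rest) // [u2] [u1 u3]: AtMe for u2, AtAll for the rest
}
```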
+18
@@ -84,3 +84,21 @@ func (m *msgServer) GetActiveConversation(ctx context.Context, req *pbmsg.GetAct
 }
 return &pbmsg.GetActiveConversationResp{Conversations: conversations}, nil
 }
+func (m *msgServer) SetUserConversationMaxSeq(ctx context.Context, req *pbmsg.SetUserConversationMaxSeqReq) (*pbmsg.SetUserConversationMaxSeqResp, error) {
+    for _, userID := range req.OwnerUserID {
+        if err := m.MsgDatabase.SetUserConversationsMaxSeq(ctx, req.ConversationID, userID, req.MaxSeq); err != nil {
+            return nil, err
+        }
+    }
+    return &pbmsg.SetUserConversationMaxSeqResp{}, nil
+}
+func (m *msgServer) SetUserConversationMinSeq(ctx context.Context, req *pbmsg.SetUserConversationMinSeqReq) (*pbmsg.SetUserConversationMinSeqResp, error) {
+    for _, userID := range req.OwnerUserID {
+        if err := m.MsgDatabase.SetUserConversationsMinSeq(ctx, req.ConversationID, userID, req.MinSeq); err != nil {
+            return nil, err
+        }
+    }
+    return &pbmsg.SetUserConversationMinSeqResp{}, nil
+}
+54 -48
@@ -16,6 +16,7 @@ package msg
import (
	"context"

+	"github.com/openimsdk/open-im-server/v3/pkg/rpcli"

	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
@@ -26,8 +27,8 @@ import (
	"github.com/openimsdk/tools/db/redisutil"

	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
+	"github.com/openimsdk/open-im-server/v3/pkg/notification"
	"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
	"github.com/openimsdk/protocol/constant"
	"github.com/openimsdk/protocol/conversation"
	"github.com/openimsdk/protocol/msg"
@@ -36,38 +37,38 @@ import (
)

type MessageInterceptorFunc func(ctx context.Context, globalConfig *Config, req *msg.SendMsgReq) (*sdkws.MsgData, error)

-type (
-	// MessageInterceptorChain defines a chain of message interceptor functions.
-	MessageInterceptorChain []MessageInterceptorFunc
-
-	// MsgServer encapsulates dependencies required for message handling.
-	msgServer struct {
-		RegisterCenter         discovery.SvcDiscoveryRegistry   // Service discovery registry for service registration.
-		MsgDatabase            controller.CommonMsgDatabase     // Interface for message database operations.
-		Conversation           *rpcclient.ConversationRpcClient // RPC client for conversation service.
-		UserLocalCache         *rpccache.UserLocalCache         // Local cache for user data.
-		FriendLocalCache       *rpccache.FriendLocalCache       // Local cache for friend data.
-		GroupLocalCache        *rpccache.GroupLocalCache        // Local cache for group data.
-		ConversationLocalCache *rpccache.ConversationLocalCache // Local cache for conversation data.
-		Handlers               MessageInterceptorChain          // Chain of handlers for processing messages.
-		notificationSender     *rpcclient.NotificationSender    // RPC client for sending notifications.
-		msgNotificationSender  *MsgNotificationSender           // RPC client for sending msg notifications.
-		config                 *Config                          // Global configuration settings.
-		webhookClient          *webhook.Client
-	}
-
-	Config struct {
+// MessageInterceptorChain defines a chain of message interceptor functions.
+type MessageInterceptorChain []MessageInterceptorFunc
+
+type Config struct {
	RpcConfig          config.Msg
	RedisConfig        config.Redis
	MongodbConfig      config.Mongo
	KafkaConfig        config.Kafka
	NotificationConfig config.Notification
	Share              config.Share
	WebhooksConfig     config.Webhooks
	LocalCacheConfig   config.LocalCache
	Discovery          config.Discovery
}
-)
+
+// MsgServer encapsulates dependencies required for message handling.
+type msgServer struct {
+	msg.UnimplementedMsgServer
+	RegisterCenter         discovery.SvcDiscoveryRegistry   // Service discovery registry for service registration.
+	MsgDatabase            controller.CommonMsgDatabase     // Interface for message database operations.
+	UserLocalCache         *rpccache.UserLocalCache         // Local cache for user data.
+	FriendLocalCache       *rpccache.FriendLocalCache       // Local cache for friend data.
+	GroupLocalCache        *rpccache.GroupLocalCache        // Local cache for group data.
+	ConversationLocalCache *rpccache.ConversationLocalCache // Local cache for conversation data.
+	Handlers               MessageInterceptorChain          // Chain of handlers for processing messages.
+	notificationSender     *rpcclient.NotificationSender    // RPC client for sending notifications.
+	msgNotificationSender  *MsgNotificationSender           // RPC client for sending msg notifications.
+	config                 *Config                          // Global configuration settings.
+	webhookClient          *webhook.Client
+	conversationClient     *rpcli.ConversationClient
+}
func (m *msgServer) addInterceptorHandler(interceptorFunc ...MessageInterceptorFunc) {
	m.Handlers = append(m.Handlers, interceptorFunc...)
@@ -87,11 +88,7 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
	if err != nil {
		return err
	}
-	msgModel := redis.NewMsgCache(rdb)
+	msgModel := redis.NewMsgCache(rdb, msgDocModel)
-	conversationClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
-	userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
-	groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
-	friendRpcClient := rpcclient.NewFriendRpcClient(client, config.Share.RpcRegisterName.Friend)
	seqConversation, err := mgo.NewSeqConversationMongo(mgocli.GetDB())
	if err != nil {
		return err
@@ -106,16 +103,33 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
	if err != nil {
		return err
	}
+	userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
+	if err != nil {
+		return err
+	}
+	groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
+	if err != nil {
+		return err
+	}
+	friendConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Friend)
+	if err != nil {
+		return err
+	}
+	conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
+	if err != nil {
+		return err
+	}
+	conversationClient := rpcli.NewConversationClient(conversationConn)
	s := &msgServer{
-		Conversation:           &conversationClient,
		MsgDatabase:            msgDatabase,
		RegisterCenter:         client,
-		UserLocalCache:         rpccache.NewUserLocalCache(userRpcClient, &config.LocalCacheConfig, rdb),
+		UserLocalCache:         rpccache.NewUserLocalCache(rpcli.NewUserClient(userConn), &config.LocalCacheConfig, rdb),
-		GroupLocalCache:        rpccache.NewGroupLocalCache(groupRpcClient, &config.LocalCacheConfig, rdb),
+		GroupLocalCache:        rpccache.NewGroupLocalCache(rpcli.NewGroupClient(groupConn), &config.LocalCacheConfig, rdb),
		ConversationLocalCache: rpccache.NewConversationLocalCache(conversationClient, &config.LocalCacheConfig, rdb),
-		FriendLocalCache:       rpccache.NewFriendLocalCache(friendRpcClient, &config.LocalCacheConfig, rdb),
+		FriendLocalCache:       rpccache.NewFriendLocalCache(rpcli.NewRelationClient(friendConn), &config.LocalCacheConfig, rdb),
		config:                 config,
		webhookClient:          webhook.NewWebhookClient(config.WebhooksConfig.URL),
+		conversationClient:     conversationClient,
	}
	s.notificationSender = rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithLocalSendMsg(s.SendMsg))
@@ -139,11 +153,3 @@ func (m *msgServer) conversationAndGetRecvID(conversation *conversation.Conversa
	}
	return ""
}
-
-func (m *msgServer) AppendStreamMsg(ctx context.Context, req *msg.AppendStreamMsgReq) (*msg.AppendStreamMsgResp, error) {
-	return nil, nil
-}
-
-func (m *msgServer) GetStreamMsg(ctx context.Context, req *msg.GetStreamMsgReq) (*msg.GetStreamMsgResp, error) {
-	return nil, nil
-}
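The wiring change above is this release's recurring move: ask the registry for a raw gRPC connection once at startup and wrap it in a thin rpcli client, instead of constructing the old rpcclient wrappers. The four near-identical resolve blocks invite a small helper; a hypothetical sketch, not in the codebase (the discovery import path is assumed, and GetConn's result is only assumed to satisfy grpc.ClientConnInterface):

```go
package rpcwiring // hypothetical helper package, for illustration only

import (
	"context"

	"github.com/openimsdk/tools/discovery" // assumed import path for SvcDiscoveryRegistry
	"google.golang.org/grpc"
)

// resolveConns collapses the repeated GetConn blocks in the Start functions:
// each named service is resolved once, failing fast on the first error.
func resolveConns(ctx context.Context, reg discovery.SvcDiscoveryRegistry, names ...string) (map[string]grpc.ClientConnInterface, error) {
	conns := make(map[string]grpc.ClientConnInterface, len(names))
	for _, name := range names {
		conn, err := reg.GetConn(ctx, name) // same call the diff makes per service
		if err != nil {
			return nil, err
		}
		conns[name] = conn
	}
	return conns, nil
}
```

The design trade-off is eager failure: a service that cannot resolve its dependencies at startup now returns an error from Start instead of failing lazily on first use.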
+11 -1
@@ -92,7 +92,7 @@ func (m *msgServer) GetSeqMessage(ctx context.Context, req *msg.GetSeqMessageReq
		NotificationMsgs: make(map[string]*sdkws.PullMsgs),
	}
	for _, conv := range req.Conversations {
-		_, _, msgs, err := m.MsgDatabase.GetMsgBySeqs(ctx, req.UserID, conv.ConversationID, conv.Seqs)
+		isEnd, endSeq, msgs, err := m.MsgDatabase.GetMessagesBySeqWithBounds(ctx, req.UserID, conv.ConversationID, conv.Seqs, req.GetOrder())
		if err != nil {
			return nil, err
		}
@@ -111,6 +111,8 @@ func (m *msgServer) GetSeqMessage(ctx context.Context, req *msg.GetSeqMessageReq
			}
		}
		pullMsgs.Msgs = append(pullMsgs.Msgs, msgs...)
+		pullMsgs.IsEnd = isEnd
+		pullMsgs.EndSeq = endSeq
	}
	return resp, nil
}
@@ -243,3 +245,11 @@ func (m *msgServer) SearchMessage(ctx context.Context, req *msg.SearchMessageReq
func (m *msgServer) GetServerTime(ctx context.Context, _ *msg.GetServerTimeReq) (*msg.GetServerTimeResp, error) {
	return &msg.GetServerTimeResp{ServerTime: timeutil.GetCurrentTimestampByMill()}, nil
}
+
+func (m *msgServer) GetLastMessage(ctx context.Context, req *msg.GetLastMessageReq) (*msg.GetLastMessageResp, error) {
+	msgs, err := m.MsgDatabase.GetLastMessage(ctx, req.ConversationIDs, req.UserID)
+	if err != nil {
+		return nil, err
+	}
+	return &msg.GetLastMessageResp{Msgs: msgs}, nil
+}
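The `IsEnd`/`EndSeq` pair turns `GetSeqMessage` into a resumable pull: a client keeps requesting until the server reports that the range is exhausted. A hedged sketch of that consumption loop; the request/response shapes here (`ConversationSeqs`, the `Msgs` map on the response) are inferred from the handler above and may differ from the actual openimsdk/protocol definitions:

```go
package msgpull // illustrative snippet; message shapes are assumptions

import (
	"context"

	"github.com/openimsdk/protocol/msg"
	"github.com/openimsdk/protocol/sdkws"
)

// pullAll pages through [from, to] in chunks of 100 seqs until the server
// reports IsEnd, resuming from EndSeq+1 after each reply.
func pullAll(ctx context.Context, client msg.MsgClient, userID, convID string, from, to int64) ([]*sdkws.MsgData, error) {
	var out []*sdkws.MsgData
	for from <= to {
		seqs := make([]int64, 0, 100)
		for s := from; s <= to && len(seqs) < 100; s++ {
			seqs = append(seqs, s)
		}
		resp, err := client.GetSeqMessage(ctx, &msg.GetSeqMessageReq{
			UserID:        userID,
			Conversations: []*msg.ConversationSeqs{{ConversationID: convID, Seqs: seqs}},
		})
		if err != nil {
			return nil, err
		}
		pull, ok := resp.Msgs[convID]
		if !ok {
			break
		}
		out = append(out, pull.Msgs...)
		if pull.IsEnd { // nothing stored beyond EndSeq
			break
		}
		next := pull.EndSeq + 1
		if next <= from { // defensive: never re-request the same window
			break
		}
		from = next
	}
	return out, nil
}
```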
+17 -6
@@ -39,7 +39,7 @@ func (s *friendServer) GetPaginationBlacks(ctx context.Context, req *relation.Ge
		return nil, err
	}
	resp = &relation.GetPaginationBlacksResp{}
-	resp.Blacks, err = convert.BlackDB2Pb(ctx, blacks, s.userRpcClient.GetUsersInfoMap)
+	resp.Blacks, err = convert.BlackDB2Pb(ctx, blacks, s.userClient.GetUsersInfoMap)
	if err != nil {
		return nil, err
	}
@@ -81,9 +81,7 @@ func (s *friendServer) AddBlack(ctx context.Context, req *relation.AddBlackReq)
	if err := s.webhookBeforeAddBlack(ctx, &s.config.WebhooksConfig.BeforeAddBlack, req); err != nil {
		return nil, err
	}
-	_, err := s.userRpcClient.GetUsersInfo(ctx, []string{req.OwnerUserID, req.BlackUserID})
-	if err != nil {
+	if err := s.userClient.CheckUser(ctx, []string{req.OwnerUserID, req.BlackUserID}); err != nil {
		return nil, err
	}
	black := model.Black{
@@ -114,7 +112,7 @@ func (s *friendServer) GetSpecifiedBlacks(ctx context.Context, req *relation.Get
		return nil, errs.ErrArgs.WrapMsg("userIDList repeated")
	}
-	userMap, err := s.userRpcClient.GetPublicUserInfoMap(ctx, req.UserIDList)
+	userMap, err := s.userClient.GetUsersInfoMap(ctx, req.UserIDList)
	if err != nil {
		return nil, err
	}
@@ -132,13 +130,26 @@ func (s *friendServer) GetSpecifiedBlacks(ctx context.Context, req *relation.Get
		Blacks: make([]*sdkws.BlackInfo, 0, len(req.UserIDList)),
	}
+	toPublcUser := func(userID string) *sdkws.PublicUserInfo {
+		v, ok := userMap[userID]
+		if !ok {
+			return nil
+		}
+		return &sdkws.PublicUserInfo{
+			UserID:   v.UserID,
+			Nickname: v.Nickname,
+			FaceURL:  v.FaceURL,
+			Ex:       v.Ex,
+		}
+	}
	for _, userID := range req.UserIDList {
		if black := blackMap[userID]; black != nil {
			resp.Blacks = append(resp.Blacks,
				&sdkws.BlackInfo{
					OwnerUserID:    black.OwnerUserID,
					CreateTime:     black.CreateTime.UnixMilli(),
-					BlackUserInfo:  userMap[userID],
+					BlackUserInfo:  toPublcUser(userID),
					AddSource:      black.AddSource,
					OperatorUserID: black.OperatorUserID,
					Ex:             black.Ex,
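The closure above replaces a direct `userMap[userID]` lookup, which stopped being adequate after the client swap: the map now carries full `UserInfo` values rather than `PublicUserInfo`, and a missing user must become an explicit nil. A reduced sketch of the conversion-with-guard shape (illustrative local types, not the sdkws definitions):

```go
package blacks // illustrative snippet only

// Illustrative stand-ins for the sdkws message types.
type UserInfo struct{ UserID, Nickname, FaceURL, Ex string }
type PublicUserInfo struct{ UserID, Nickname, FaceURL, Ex string }

// toPublic mirrors the toPublcUser closure: absent users yield nil,
// present users are narrowed to the public field subset.
func toPublic(userMap map[string]*UserInfo, userID string) *PublicUserInfo {
	v, ok := userMap[userID]
	if !ok {
		return nil
	}
	return &PublicUserInfo{UserID: v.UserID, Nickname: v.Nickname, FaceURL: v.FaceURL, Ex: v.Ex}
}
```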
+36 -45
@@ -16,6 +16,7 @@ package relation
import (
	"context"

+	"github.com/openimsdk/open-im-server/v3/pkg/rpcli"

	"github.com/openimsdk/tools/mq/memamq"
@@ -31,7 +32,6 @@ import (
	"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
	"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
	"github.com/openimsdk/protocol/constant"
	"github.com/openimsdk/protocol/relation"
	"github.com/openimsdk/protocol/sdkws"
@@ -43,15 +43,15 @@ import (
)

type friendServer struct {
+	relation.UnimplementedFriendServer
	db                 controller.FriendDatabase
	blackDatabase      controller.BlackDatabase
-	userRpcClient      *rpcclient.UserRpcClient
	notificationSender *FriendNotificationSender
-	conversationRpcClient rpcclient.ConversationRpcClient
	RegisterCenter     discovery.SvcDiscoveryRegistry
	config             *Config
	webhookClient      *webhook.Client
	queue              *memamq.MemoryQueue
+	userClient         *rpcli.UserClient
}
type Config struct {
@@ -91,15 +91,21 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
		return err
	}
-	// Initialize RPC clients
-	userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
-	msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
+	userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
+	if err != nil {
+		return err
+	}
+	msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
+	if err != nil {
+		return err
+	}
+	userClient := rpcli.NewUserClient(userConn)

	// Initialize notification sender
	notificationSender := NewFriendNotificationSender(
		&config.NotificationConfig,
-		&msgRpcClient,
-		WithRpcFunc(userRpcClient.GetUsersInfo),
+		rpcli.NewMsgClient(msgConn),
+		WithRpcFunc(userClient.GetUsersInfo),
	)

	localcache.InitLocalCache(&config.LocalCacheConfig)
@@ -115,13 +121,12 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
			blackMongoDB,
			redis.NewBlackCacheRedis(rdb, &config.LocalCacheConfig, blackMongoDB, redis.GetRocksCacheOptions()),
		),
-		userRpcClient:         &userRpcClient,
		notificationSender: notificationSender,
		RegisterCenter:     client,
-		conversationRpcClient: rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation),
		config:             config,
		webhookClient:      webhook.NewWebhookClient(config.WebhooksConfig.URL),
		queue:              memamq.NewMemoryQueue(16, 1024*1024),
+		userClient:         userClient,
	})
	return nil
}
@@ -138,7 +143,7 @@ func (s *friendServer) ApplyToAddFriend(ctx context.Context, req *relation.Apply
	if err = s.webhookBeforeAddFriend(ctx, &s.config.WebhooksConfig.BeforeAddFriend, req); err != nil && err != servererrs.ErrCallbackContinue {
		return nil, err
	}
-	if _, err := s.userRpcClient.GetUsersInfoMap(ctx, []string{req.ToUserID, req.FromUserID}); err != nil {
+	if err := s.userClient.CheckUser(ctx, []string{req.ToUserID, req.FromUserID}); err != nil {
		return nil, err
	}
@@ -162,7 +167,8 @@ func (s *friendServer) ImportFriends(ctx context.Context, req *relation.ImportFr
	if err := authverify.CheckAdmin(ctx, s.config.Share.IMAdminUserID); err != nil {
		return nil, err
	}
-	if _, err := s.userRpcClient.GetUsersInfo(ctx, append([]string{req.OwnerUserID}, req.FriendUserIDs...)); err != nil {
+	if err := s.userClient.CheckUser(ctx, append([]string{req.OwnerUserID}, req.FriendUserIDs...)); err != nil {
		return nil, err
	}
	if datautil.Contain(req.OwnerUserID, req.FriendUserIDs...) {
@@ -303,7 +309,7 @@ func (s *friendServer) getFriend(ctx context.Context, ownerUserID string, friend
	if err != nil {
		return nil, err
	}
-	return convert.FriendsDB2Pb(ctx, friends, s.userRpcClient.GetUsersInfoMap)
+	return convert.FriendsDB2Pb(ctx, friends, s.userClient.GetUsersInfoMap)
}
// Get the list of friend requests sent out proactively.
@@ -315,7 +321,7 @@ func (s *friendServer) GetDesignatedFriendsApply(ctx context.Context,
		return nil, err
	}
	resp = &relation.GetDesignatedFriendsApplyResp{}
-	resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userRpcClient.GetUsersInfoMap)
+	resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userClient.GetUsersInfoMap)
	if err != nil {
		return nil, err
	}
@@ -334,7 +340,7 @@ func (s *friendServer) GetPaginationFriendsApplyTo(ctx context.Context, req *rel
	}
	resp = &relation.GetPaginationFriendsApplyToResp{}
-	resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userRpcClient.GetUsersInfoMap)
+	resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userClient.GetUsersInfoMap)
	if err != nil {
		return nil, err
	}
@@ -356,7 +362,7 @@ func (s *friendServer) GetPaginationFriendsApplyFrom(ctx context.Context, req *r
		return nil, err
	}
-	resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userRpcClient.GetUsersInfoMap)
+	resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userClient.GetUsersInfoMap)
	if err != nil {
		return nil, err
	}
@@ -387,7 +393,7 @@ func (s *friendServer) GetPaginationFriends(ctx context.Context, req *relation.G
	}
	resp = &relation.GetPaginationFriendsResp{}
-	resp.FriendsInfo, err = convert.FriendsDB2Pb(ctx, friends, s.userRpcClient.GetUsersInfoMap)
+	resp.FriendsInfo, err = convert.FriendsDB2Pb(ctx, friends, s.userClient.GetUsersInfoMap)
	if err != nil {
		return nil, err
	}
@@ -420,7 +426,7 @@ func (s *friendServer) GetSpecifiedFriendsInfo(ctx context.Context, req *relatio
		return nil, errs.ErrArgs.WrapMsg("userIDList repeated")
	}
-	userMap, err := s.userRpcClient.GetUsersInfoMap(ctx, req.UserIDList)
+	userMap, err := s.userClient.GetUsersInfoMap(ctx, req.UserIDList)
	if err != nil {
		return nil, err
	}
@@ -524,18 +530,3 @@ func (s *friendServer) UpdateFriends(
	s.notificationSender.FriendsInfoUpdateNotification(ctx, req.OwnerUserID, req.FriendUserIDs)
	return resp, nil
}
-
-func (s *friendServer) GetIncrementalFriendsApplyTo(ctx context.Context, req *relation.GetIncrementalFriendsApplyToReq) (*relation.GetIncrementalFriendsApplyToResp, error) {
-	// TODO implement me
-	return nil, nil
-}
-
-func (s *friendServer) GetIncrementalFriendsApplyFrom(ctx context.Context, req *relation.GetIncrementalFriendsApplyFromReq) (*relation.GetIncrementalFriendsApplyFromResp, error) {
-	// TODO implement me
-	return nil, nil
-}
-
-func (s *friendServer) GetIncrementalBlacks(ctx context.Context, req *relation.GetIncrementalBlacksReq) (*relation.GetIncrementalBlacksResp, error) {
-	// TODO implement me
-	return nil, nil
-}
+12 -11
@@ -16,6 +16,9 @@ package relation
import (
	"context"

+	"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
+	"github.com/openimsdk/protocol/msg"

	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/versionctx"
@@ -24,8 +27,8 @@ import (
	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
	"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
+	"github.com/openimsdk/open-im-server/v3/pkg/notification"
+	"github.com/openimsdk/open-im-server/v3/pkg/notification/common_user"
	"github.com/openimsdk/protocol/constant"
	"github.com/openimsdk/protocol/relation"
	"github.com/openimsdk/protocol/sdkws"
@@ -35,7 +38,7 @@ import (
type FriendNotificationSender struct {
	*rpcclient.NotificationSender
	// Target not found err
-	getUsersInfo func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error)
+	getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
	// db controller
	db controller.FriendDatabase
}
@@ -52,7 +55,7 @@ func WithDBFunc(
	fn func(ctx context.Context, userIDs []string) (users []*relationtb.User, err error),
) friendNotificationSenderOptions {
	return func(s *FriendNotificationSender) {
-		f := func(ctx context.Context, userIDs []string) (result []notification.CommonUser, err error) {
+		f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
			users, err := fn(ctx, userIDs)
			if err != nil {
				return nil, err
@@ -70,7 +73,7 @@ func WithRpcFunc(
	fn func(ctx context.Context, userIDs []string) ([]*sdkws.UserInfo, error),
) friendNotificationSenderOptions {
	return func(s *FriendNotificationSender) {
-		f := func(ctx context.Context, userIDs []string) (result []notification.CommonUser, err error) {
+		f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
			users, err := fn(ctx, userIDs)
			if err != nil {
				return nil, err
@@ -84,13 +87,11 @@ func WithRpcFunc(
	}
}

-func NewFriendNotificationSender(
-	conf *config.Notification,
-	msgRpcClient *rpcclient.MessageRpcClient,
-	opts ...friendNotificationSenderOptions,
-) *FriendNotificationSender {
+func NewFriendNotificationSender(conf *config.Notification, msgClient *rpcli.MsgClient, opts ...friendNotificationSenderOptions) *FriendNotificationSender {
	f := &FriendNotificationSender{
-		NotificationSender: rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(msgRpcClient)),
+		NotificationSender: rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+			return msgClient.SendMsg(ctx, req)
+		})),
	}
	for _, opt := range opts {
		opt(f)
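The constructor change above is the recurring adapter trick in this release: `WithRpcClient` now consumes a plain function, so any concrete client can be plugged in by wrapping one method in a closure. A reduced, runnable sketch of the pattern (types simplified, not the openimsdk signatures):

```go
package main

import (
	"context"
	"fmt"
)

// The option consumes a function, not a concrete client type.
type sendFunc func(ctx context.Context, text string) error

type sender struct{ send sendFunc }

func withSendFunc(f sendFunc) func(*sender) {
	return func(s *sender) { s.send = f }
}

// A concrete client exposing the method we want to adapt.
type msgClient struct{}

func (msgClient) SendMsg(ctx context.Context, text string) error {
	fmt.Println("sent:", text)
	return nil
}

func main() {
	c := msgClient{}
	s := &sender{}
	// The closure adapts the method to the function type the option expects,
	// decoupling the sender from the client's concrete type.
	withSendFunc(func(ctx context.Context, text string) error {
		return c.SendMsg(ctx, text)
	})(s)
	_ = s.send(context.Background(), "hello")
}
```

In the real constructor the closure is presumably required because the wrapped client's SendMsg signature does not match the option's function type exactly, so a bare method value would not compile.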
+10 -11
@@ -19,10 +19,9 @@
	"crypto/rand"
	"time"

-	relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
	"github.com/openimsdk/open-im-server/v3/pkg/authverify"
	"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
+	relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
	"github.com/openimsdk/protocol/constant"
	"github.com/openimsdk/protocol/third"
	"github.com/openimsdk/tools/errs"
@@ -50,14 +49,14 @@ func (t *thirdServer) UploadLogs(ctx context.Context, req *third.UploadLogsReq)
	platform := constant.PlatformID2Name[int(req.Platform)]
	for _, fileURL := range req.FileURLs {
		log := relationtb.Log{
			Platform:   platform,
			UserID:     userID,
			CreateTime: time.Now(),
			Url:        fileURL.URL,
			FileName:   fileURL.Filename,
-			SystemType: req.SystemType,
+			AppFramework: req.AppFramework,
			Version:    req.Version,
			Ex:         req.Ex,
		}
		for i := 0; i < 20; i++ {
			id := genLogID()
@@ -149,7 +148,7 @@ func (t *thirdServer) SearchLogs(ctx context.Context, req *third.SearchLogsReq)
	for _, log := range logs {
		userIDs = append(userIDs, log.UserID)
	}
-	userMap, err := t.userRpcClient.GetUsersInfoMap(ctx, userIDs)
+	userMap, err := t.userClient.GetUsersInfoMap(ctx, userIDs)
	if err != nil {
		return nil, err
	}
+24 -42
@@ -19,17 +19,14 @@
	"encoding/base64"
	"encoding/hex"
	"encoding/json"
+	"github.com/openimsdk/open-im-server/v3/pkg/authverify"
	"path"
	"strconv"
	"time"

-	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
-	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
-	"go.mongodb.org/mongo-driver/mongo"
	"github.com/google/uuid"
	"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
-	"github.com/openimsdk/protocol/sdkws"
+	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
	"github.com/openimsdk/protocol/third"
	"github.com/openimsdk/tools/errs"
	"github.com/openimsdk/tools/log"
@@ -288,50 +285,35 @@ func (t *thirdServer) apiAddress(prefix, name string) string {
}

func (t *thirdServer) DeleteOutdatedData(ctx context.Context, req *third.DeleteOutdatedDataReq) (*third.DeleteOutdatedDataResp, error) {
-	var conf config.Third
-	expireTime := time.UnixMilli(req.ExpireTime)
-	var deltotal int
-	findPagination := &sdkws.RequestPagination{
-		PageNumber: 1,
-		ShowNumber: 1000,
-	}
-	for {
-		total, models, err := t.s3dataBase.FindByExpires(ctx, expireTime, findPagination)
-		if err != nil && errs.Unwrap(err) != mongo.ErrNoDocuments {
-			return nil, errs.Wrap(err)
-		}
-		needDelObjectKeys := make([]string, 0)
-		for _, model := range models {
-			needDelObjectKeys = append(needDelObjectKeys, model.Key)
-		}
-		needDelObjectKeys = datautil.Distinct(needDelObjectKeys)
-		for _, key := range needDelObjectKeys {
-			count, err := t.s3dataBase.FindNotDelByS3(ctx, key, expireTime)
-			if err != nil && errs.Unwrap(err) != mongo.ErrNoDocuments {
-				return nil, errs.Wrap(err)
-			}
-			if int(count) < 1 && t.minio != nil {
-				thumbnailKey, _ := t.getMinioImageThumbnailKey(ctx, key)
-				t.s3dataBase.DeleteObject(ctx, thumbnailKey)
-				t.s3dataBase.DelS3Key(ctx, conf.Object.Enable, needDelObjectKeys...)
-				t.s3dataBase.DeleteObject(ctx, key)
-			}
-		}
-		for _, model := range models {
-			err := t.s3dataBase.DeleteSpecifiedData(ctx, model.Engine, model.Name)
-			if err != nil {
-				return nil, errs.Wrap(err)
-			}
-		}
-		if total < int64(findPagination.ShowNumber) {
-			break
-		}
-		deltotal += int(total)
-	}
-	log.ZDebug(ctx, "DeleteOutdatedData", "delete Total", deltotal)
-	return &third.DeleteOutdatedDataResp{}, nil
+	if err := authverify.CheckAdmin(ctx, t.config.Share.IMAdminUserID); err != nil {
+		return nil, err
+	}
+	engine := t.config.RpcConfig.Object.Enable
+	expireTime := time.UnixMilli(req.ExpireTime)
+	// Find all expired data in S3 database
+	models, err := t.s3dataBase.FindExpirationObject(ctx, engine, expireTime, req.ObjectGroup, int64(req.Limit))
+	if err != nil {
+		return nil, err
+	}
+	for i, obj := range models {
+		if err := t.s3dataBase.DeleteSpecifiedData(ctx, engine, []string{obj.Name}); err != nil {
+			return nil, errs.Wrap(err)
+		}
+		if err := t.s3dataBase.DelS3Key(ctx, engine, obj.Name); err != nil {
+			return nil, err
+		}
+		count, err := t.s3dataBase.GetKeyCount(ctx, engine, obj.Key)
+		if err != nil {
+			return nil, err
+		}
+		log.ZDebug(ctx, "delete s3 object record", "index", i, "s3", obj, "count", count)
+		if count == 0 {
+			if err := t.s3.DeleteObject(ctx, obj.Key); err != nil {
+				return nil, err
+			}
+		}
+	}
+	return &third.DeleteOutdatedDataResp{Count: int32(len(models))}, nil
}

type FormDataMate struct {
+16 -17
@@ -17,14 +17,14 @@ package third
import (
	"context"
	"fmt"
-	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
-	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
-	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database/mgo"
-	"github.com/openimsdk/open-im-server/v3/pkg/localcache"
+	"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
	"time"

+	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
+	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
+	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database/mgo"
+	"github.com/openimsdk/open-im-server/v3/pkg/localcache"
	"github.com/openimsdk/protocol/third"
	"github.com/openimsdk/tools/db/mongoutil"
	"github.com/openimsdk/tools/db/redisutil"
@@ -38,12 +38,13 @@ import (
)

type thirdServer struct {
+	third.UnimplementedThirdServer
	thirdDatabase controller.ThirdDatabase
	s3dataBase    controller.S3Database
-	userRpcClient rpcclient.UserRpcClient
	defaultExpire time.Duration
	config        *Config
-	minio         *minio.Minio
+	s3            s3.Interface
+	userClient    *rpcli.UserClient
}
type Config struct {
@@ -78,13 +79,11 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
	// Select the oss method according to the profile policy
	enable := config.RpcConfig.Object.Enable
	var (
		o s3.Interface
-		minioCli *minio.Minio
	)
	switch enable {
	case "minio":
-		minioCli, err = minio.NewMinio(ctx, redis.NewMinioCache(rdb), *config.MinioConfig.Build())
-		o = minioCli
+		o, err = minio.NewMinio(ctx, redis.NewMinioCache(rdb), *config.MinioConfig.Build())
	case "cos":
		o, err = cos.NewCos(*config.RpcConfig.Object.Cos.Build())
	case "oss":
@@ -97,22 +96,22 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
	if err != nil {
		return err
	}
+	userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
+	if err != nil {
+		return err
+	}
	localcache.InitLocalCache(&config.LocalCacheConfig)
	third.RegisterThirdServer(server, &thirdServer{
		thirdDatabase: controller.NewThirdDatabase(redis.NewThirdCache(rdb), logdb),
-		userRpcClient: rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID),
		s3dataBase:    controller.NewS3Database(rdb, o, s3db),
		defaultExpire: time.Hour * 24 * 7,
		config:        config,
-		minio:         minioCli,
+		s3:            o,
+		userClient:    rpcli.NewUserClient(userConn),
	})
	return nil
}
-
-func (t *thirdServer) getMinioImageThumbnailKey(ctx context.Context, name string) (string, error) {
-	return t.minio.GetImageThumbnailKey(ctx, name)
-}

func (t *thirdServer) FcmUpdateToken(ctx context.Context, req *third.FcmUpdateTokenReq) (resp *third.FcmUpdateTokenResp, err error) {
	err = t.thirdDatabase.FcmUpdateToken(ctx, req.Account, int(req.PlatformID), req.FcmToken, req.ExpireTime)
	if err != nil {
+11 -6
@@ -16,18 +16,21 @@ package user
import (
	"context"

+	"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
+	"github.com/openimsdk/protocol/msg"

	relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
+	"github.com/openimsdk/open-im-server/v3/pkg/notification/common_user"

	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
+	"github.com/openimsdk/open-im-server/v3/pkg/notification"
	"github.com/openimsdk/protocol/constant"
	"github.com/openimsdk/protocol/sdkws"
)
type UserNotificationSender struct {
	*rpcclient.NotificationSender
-	getUsersInfo func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error)
+	getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
	// db controller
	db controller.UserDatabase
}
@@ -44,7 +47,7 @@ func WithUserFunc(
	fn func(ctx context.Context, userIDs []string) (users []*relationtb.User, err error),
) userNotificationSenderOptions {
	return func(u *UserNotificationSender) {
-		f := func(ctx context.Context, userIDs []string) (result []notification.CommonUser, err error) {
+		f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
			users, err := fn(ctx, userIDs)
			if err != nil {
				return nil, err
@@ -58,9 +61,11 @@ func WithUserFunc(
	}
}

-func NewUserNotificationSender(config *Config, msgRpcClient *rpcclient.MessageRpcClient, opts ...userNotificationSenderOptions) *UserNotificationSender {
+func NewUserNotificationSender(config *Config, msgClient *rpcli.MsgClient, opts ...userNotificationSenderOptions) *UserNotificationSender {
	f := &UserNotificationSender{
-		NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithRpcClient(msgRpcClient)),
+		NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
+			return msgClient.SendMsg(ctx, req)
+		})),
	}
	for _, opt := range opts {
		opt(f)
+44 -23
@@ -31,6 +31,7 @@ import (
	tablerelation "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
	"github.com/openimsdk/open-im-server/v3/pkg/common/webhook"
	"github.com/openimsdk/open-im-server/v3/pkg/localcache"
+	"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
	"github.com/openimsdk/protocol/group"
	friendpb "github.com/openimsdk/protocol/relation"
	"github.com/openimsdk/tools/db/redisutil"
@@ -39,7 +40,6 @@ import (
	"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
	"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
-	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
	"github.com/openimsdk/protocol/constant"
	"github.com/openimsdk/protocol/sdkws"
	pbuser "github.com/openimsdk/protocol/user"
@@ -52,15 +52,16 @@ import (
)

type userServer struct {
+	pbuser.UnimplementedUserServer
	online                   cache.OnlineCache
	db                       controller.UserDatabase
	friendNotificationSender *relation.FriendNotificationSender
	userNotificationSender   *UserNotificationSender
-	friendRpcClient          *rpcclient.FriendRpcClient
-	groupRpcClient           *rpcclient.GroupRpcClient
	RegisterCenter           registry.SvcDiscoveryRegistry
	config                   *Config
	webhookClient            *webhook.Client
+	groupClient              *rpcli.GroupClient
+	relationClient           *rpcli.RelationClient
}
type Config struct {
@@ -93,22 +94,33 @@ func Start(ctx context.Context, config *Config, client registry.SvcDiscoveryRegi
	if err != nil {
		return err
	}
+	msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
+	if err != nil {
+		return err
+	}
+	groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
+	if err != nil {
+		return err
+	}
+	friendConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Friend)
+	if err != nil {
+		return err
+	}
+	msgClient := rpcli.NewMsgClient(msgConn)
	userCache := redis.NewUserCacheRedis(rdb, &config.LocalCacheConfig, userDB, redis.GetRocksCacheOptions())
	database := controller.NewUserDatabase(userDB, userCache, mgocli.GetTx())
-	friendRpcClient := rpcclient.NewFriendRpcClient(client, config.Share.RpcRegisterName.Friend)
-	groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
-	msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
	localcache.InitLocalCache(&config.LocalCacheConfig)
	u := &userServer{
		online:                   redis.NewUserOnline(rdb),
		db:                       database,
		RegisterCenter:           client,
-		friendRpcClient:          &friendRpcClient,
-		groupRpcClient:           &groupRpcClient,
-		friendNotificationSender: relation.NewFriendNotificationSender(&config.NotificationConfig, &msgRpcClient, relation.WithDBFunc(database.FindWithError)),
-		userNotificationSender:   NewUserNotificationSender(config, &msgRpcClient, WithUserFunc(database.FindWithError)),
+		friendNotificationSender: relation.NewFriendNotificationSender(&config.NotificationConfig, msgClient, relation.WithDBFunc(database.FindWithError)),
+		userNotificationSender:   NewUserNotificationSender(config, msgClient, WithUserFunc(database.FindWithError)),
		config:                   config,
		webhookClient:            webhook.NewWebhookClient(config.WebhooksConfig.URL),
+		groupClient:              rpcli.NewGroupClient(groupConn),
+		relationClient:           rpcli.NewRelationClient(friendConn),
	}
	pbuser.RegisterUserServer(server, u)
	return u.db.InitOnce(context.Background(), users)
@@ -468,7 +480,9 @@ func (s *userServer) AddNotificationAccount(ctx context.Context, req *pbuser.Add
	if err := authverify.CheckAdmin(ctx, s.config.Share.IMAdminUserID); err != nil {
		return nil, err
	}
+	if req.AppMangerLevel < constant.AppNotificationAdmin {
+		return nil, errs.ErrArgs.WithDetail("app level not supported")
+	}
	if req.UserID == "" {
		for i := 0; i < 20; i++ {
			userId := s.genUserID()
@@ -494,16 +508,17 @@ func (s *userServer) AddNotificationAccount(ctx context.Context, req *pbuser.Add
		Nickname:       req.NickName,
		FaceURL:        req.FaceURL,
		CreateTime:     time.Now(),
-		AppMangerLevel: constant.AppNotificationAdmin,
+		AppMangerLevel: req.AppMangerLevel,
	}
	if err := s.db.Create(ctx, []*tablerelation.User{user}); err != nil {
		return nil, err
	}

	return &pbuser.AddNotificationAccountResp{
		UserID:         req.UserID,
		NickName:       req.NickName,
		FaceURL:        req.FaceURL,
+		AppMangerLevel: req.AppMangerLevel,
	}, nil
}
@@ -583,8 +598,13 @@ func (s *userServer) GetNotificationAccount(ctx context.Context, req *pbuser.Get
	if err != nil {
		return nil, servererrs.ErrUserIDNotFound.Wrap()
	}
-	if user.AppMangerLevel == constant.AppAdmin || user.AppMangerLevel == constant.AppNotificationAdmin {
-		return &pbuser.GetNotificationAccountResp{}, nil
+	if user.AppMangerLevel == constant.AppAdmin || user.AppMangerLevel >= constant.AppNotificationAdmin {
+		return &pbuser.GetNotificationAccountResp{Account: &pbuser.NotificationAccountInfo{
+			UserID:         user.UserID,
+			FaceURL:        user.FaceURL,
+			NickName:       user.Nickname,
+			AppMangerLevel: user.AppMangerLevel,
+		}}, nil
	}

	return nil, errs.ErrNoPermission.WrapMsg("notification messages cannot be sent for this ID")
@@ -609,11 +629,12 @@ func (s *userServer) userModelToResp(users []*tablerelation.User, pagination pag
	accounts := make([]*pbuser.NotificationAccountInfo, 0)
	var total int64
	for _, v := range users {
-		if v.AppMangerLevel == constant.AppNotificationAdmin && !datautil.Contain(v.UserID, s.config.Share.IMAdminUserID...) {
+		if v.AppMangerLevel >= constant.AppNotificationAdmin && !datautil.Contain(v.UserID, s.config.Share.IMAdminUserID...) {
			temp := &pbuser.NotificationAccountInfo{
				UserID:   v.UserID,
				FaceURL:  v.FaceURL,
				NickName: v.Nickname,
+				AppMangerLevel: v.AppMangerLevel,
			}
			accounts = append(accounts, temp)
			total += 1
@@ -640,7 +661,7 @@ func (s *userServer) NotificationUserInfoUpdate(ctx context.Context, userID stri
	wg.Add(len(es))
	go func() {
		defer wg.Done()
-		_, es[0] = s.groupRpcClient.Client.NotificationUserInfoUpdate(ctx, &group.NotificationUserInfoUpdateReq{
+		_, es[0] = s.groupClient.NotificationUserInfoUpdate(ctx, &group.NotificationUserInfoUpdateReq{
			UserID:      userID,
			OldUserInfo: oldUserInfo,
			NewUserInfo: newUserInfo,
@@ -649,7 +670,7 @@ func (s *userServer) NotificationUserInfoUpdate(ctx context.Context, userID stri
	go func() {
		defer wg.Done()
-		_, es[1] = s.friendRpcClient.Client.NotificationUserInfoUpdate(ctx, &friendpb.NotificationUserInfoUpdateReq{
+		_, es[1] = s.relationClient.NotificationUserInfoUpdate(ctx, &friendpb.NotificationUserInfoUpdateReq{
			UserID:      userID,
			OldUserInfo: oldUserInfo,
			NewUserInfo: newUserInfo,
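Both goroutines above write into a fixed error slice indexed per goroutine, a lock-free fan-out worth naming: each worker owns exactly one slot, so no mutex is needed, and the caller joins on the WaitGroup before reading any slot. A self-contained sketch of the same shape:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

func main() {
	tasks := []func() error{
		func() error { return nil },
		func() error { return errors.New("friend rpc failed") },
	}

	es := make([]error, len(tasks)) // one slot per goroutine: no mutex needed
	var wg sync.WaitGroup
	wg.Add(len(tasks))
	for i, task := range tasks {
		go func(i int, task func() error) {
			defer wg.Done()
			es[i] = task() // each goroutine writes only its own index
		}(i, task)
	}
	wg.Wait() // join before reading any slot

	for i, err := range es {
		if err != nil {
			fmt.Printf("task %d: %v\n", i, err)
		}
	}
}
```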
+53 -67
@@ -16,14 +16,12 @@ package tools
import (
	"context"
-	"fmt"
-	"os"
-	"time"

	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
	kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
	pbconversation "github.com/openimsdk/protocol/conversation"
	"github.com/openimsdk/protocol/msg"
+	"github.com/openimsdk/protocol/third"

	"github.com/openimsdk/tools/mcontext"
	"github.com/openimsdk/tools/mw"
@@ -46,7 +44,7 @@ func Start(ctx context.Context, config *CronTaskConfig) error {
	if config.CronTask.RetainChatRecords < 1 {
		return errs.New("msg destruct time must be greater than 1").Wrap()
	}
-	client, err := kdisc.NewDiscoveryRegister(&config.Discovery, &config.Share)
+	client, err := kdisc.NewDiscoveryRegister(&config.Discovery, &config.Share, nil)
	if err != nil {
		return errs.WrapMsg(err, "failed to register discovery service")
	}
@@ -58,80 +56,68 @@ func Start(ctx context.Context, config *CronTaskConfig) error {
		return err
	}
-	// thirdConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Third)
-	// if err != nil {
-	// 	return err
-	// }
+	thirdConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Third)
+	if err != nil {
+		return err
+	}
	conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
	if err != nil {
		return err
	}
-	msgClient := msg.NewMsgClient(msgConn)
-	conversationClient := pbconversation.NewConversationClient(conversationConn)
-	// thirdClient := third.NewThirdClient(thirdConn)
-
-	crontab := cron.New()
-
-	// scheduled hard delete outdated Msgs in specific time.
-	clearMsgFunc := func() {
-		now := time.Now()
-		deltime := now.Add(-time.Hour * 24 * time.Duration(config.CronTask.RetainChatRecords))
-		ctx := mcontext.SetOperationID(ctx, fmt.Sprintf("cron_%d_%d", os.Getpid(), deltime.UnixMilli()))
-		log.ZDebug(ctx, "clear chat records", "deltime", deltime, "timestamp", deltime.UnixMilli())
-		if _, err := msgClient.ClearMsg(ctx, &msg.ClearMsgReq{Timestamp: deltime.UnixMilli()}); err != nil {
-			log.ZError(ctx, "cron clear chat records failed", err, "deltime", deltime, "cont", time.Since(now))
-			return
-		}
-		log.ZDebug(ctx, "cron clear chat records success", "deltime", deltime, "cont", time.Since(now))
-	}
-	if _, err := crontab.AddFunc(config.CronTask.CronExecuteTime, clearMsgFunc); err != nil {
-		return errs.Wrap(err)
-	}
-
-	// scheduled soft delete outdated Msgs in specific time when user set `is_msg_destruct` feature.
-	msgDestructFunc := func() {
-		now := time.Now()
-		ctx := mcontext.SetOperationID(ctx, fmt.Sprintf("cron_%d_%d", os.Getpid(), now.UnixMilli()))
-		log.ZDebug(ctx, "msg destruct cron start", "now", now)
-		conversations, err := conversationClient.GetConversationsNeedDestructMsgs(ctx, &pbconversation.GetConversationsNeedDestructMsgsReq{})
-		if err != nil {
-			log.ZError(ctx, "Get conversation need Destruct msgs failed.", err)
-			return
-		} else {
-			_, err := msgClient.DestructMsgs(ctx, &msg.DestructMsgsReq{Conversations: conversations.Conversations})
-			if err != nil {
-				log.ZError(ctx, "Destruct Msgs failed.", err)
-				return
-			}
-		}
-		log.ZDebug(ctx, "msg destruct cron task completed", "cont", time.Since(now))
-	}
-	if _, err := crontab.AddFunc(config.CronTask.CronExecuteTime, msgDestructFunc); err != nil {
-		return errs.Wrap(err)
-	}
-
-	// // scheduled delete outdated file Objects and their datas in specific time.
-	// deleteObjectFunc := func() {
-	// 	now := time.Now()
-	// 	deleteTime := now.Add(-time.Hour * 24 * time.Duration(config.CronTask.FileExpireTime))
-	// 	ctx := mcontext.SetOperationID(ctx, fmt.Sprintf("cron_%d_%d", os.Getpid(), deleteTime.UnixMilli()))
-	// 	log.ZDebug(ctx, "deleteoutDatedData ", "deletetime", deleteTime, "timestamp", deleteTime.UnixMilli())
-	// 	if _, err := thirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{ExpireTime: deleteTime.UnixMilli()}); err != nil {
-	// 		log.ZError(ctx, "cron deleteoutDatedData failed", err, "deleteTime", deleteTime, "cont", time.Since(now))
-	// 		return
-	// 	}
-	// 	log.ZDebug(ctx, "cron deleteoutDatedData success", "deltime", deleteTime, "cont", time.Since(now))
-	// }
-	// if _, err := crontab.AddFunc(config.CronTask.CronExecuteTime, deleteObjectFunc); err != nil {
-	// 	return errs.Wrap(err)
-	// }
+	srv := &cronServer{
+		ctx:                ctx,
+		config:             config,
+		cron:               cron.New(),
+		msgClient:          msg.NewMsgClient(msgConn),
+		conversationClient: pbconversation.NewConversationClient(conversationConn),
+		thirdClient:        third.NewThirdClient(thirdConn),
+	}
+
+	if err := srv.registerClearS3(); err != nil {
+		return err
+	}
+	if err := srv.registerDeleteMsg(); err != nil {
+		return err
+	}
+	if err := srv.registerClearUserMsg(); err != nil {
+		return err
+	}
	log.ZDebug(ctx, "start cron task", "CronExecuteTime", config.CronTask.CronExecuteTime)
-	crontab.Start()
+	srv.cron.Start()
	<-ctx.Done()
	return nil
}
+
+type cronServer struct {
+	ctx                context.Context
+	config             *CronTaskConfig
+	cron               *cron.Cron
+	msgClient          msg.MsgClient
+	conversationClient pbconversation.ConversationClient
+	thirdClient        third.ThirdClient
+}
+
+func (c *cronServer) registerClearS3() error {
+	if c.config.CronTask.FileExpireTime <= 0 || len(c.config.CronTask.DeleteObjectType) == 0 {
+		log.ZInfo(c.ctx, "disable scheduled cleanup of s3", "fileExpireTime", c.config.CronTask.FileExpireTime, "deleteObjectType", c.config.CronTask.DeleteObjectType)
+		return nil
+	}
+	_, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, c.clearS3)
+	return errs.WrapMsg(err, "failed to register clear s3 cron task")
+}
+
+func (c *cronServer) registerDeleteMsg() error {
+	if c.config.CronTask.RetainChatRecords <= 0 {
+		log.ZInfo(c.ctx, "disable scheduled cleanup of chat records", "retainChatRecords", c.config.CronTask.RetainChatRecords)
+		return nil
+	}
+	_, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, c.deleteMsg)
+	return errs.WrapMsg(err, "failed to register delete msg cron task")
+}
+
+func (c *cronServer) registerClearUserMsg() error {
+	_, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, c.clearUserMsg)
+	return errs.WrapMsg(err, "failed to register clear user msg cron task")
+}
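The registration helpers lean on robfig/cron v3, where AddFunc parses a cron spec and returns an entry ID, and Start runs the scheduler on its own goroutine. A minimal standalone sketch of that API; the five-field spec below fires every minute and is purely illustrative:

```go
package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

func main() {
	c := cron.New() // standard 5-field cron specs (minute granularity)

	// AddFunc parses the spec and schedules the callback; the returned
	// EntryID can later be passed to c.Remove.
	id, err := c.AddFunc("* * * * *", func() {
		fmt.Println("tick", time.Now().Format(time.RFC3339))
	})
	if err != nil {
		panic(err) // malformed spec, e.g. wrong field count
	}
	fmt.Println("registered entry", id)

	c.Start()                   // scheduler runs in its own goroutine
	time.Sleep(2 * time.Minute) // let it fire a couple of times
	ctx := c.Stop()             // stop scheduling; ctx completes when running jobs finish
	<-ctx.Done()
}
```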
+63
@@ -0,0 +1,63 @@
package tools
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
pbconversation "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/mw"
"github.com/robfig/cron/v3"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"testing"
)
func TestName(t *testing.T) {
conf := &config.Discovery{
Enable: config.ETCD,
Etcd: config.Etcd{
RootDirectory: "openim",
Address: []string{"localhost:12379"},
},
}
client, err := kdisc.NewDiscoveryRegister(conf, "source")
if err != nil {
panic(err)
}
client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()))
ctx := mcontext.SetOpUserID(context.Background(), "imAdmin")
msgConn, err := client.GetConn(ctx, "msg-rpc-service")
if err != nil {
panic(err)
}
thirdConn, err := client.GetConn(ctx, "third-rpc-service")
if err != nil {
panic(err)
}
conversationConn, err := client.GetConn(ctx, "conversation-rpc-service")
if err != nil {
panic(err)
}
srv := &cronServer{
ctx: ctx,
config: &CronTaskConfig{
CronTask: config.CronTask{
RetainChatRecords: 1,
FileExpireTime: 1,
DeleteObjectType: []string{"msg-picture", "msg-file", "msg-voice", "msg-video", "msg-video-snapshot", "sdklog", ""},
},
},
cron: cron.New(),
msgClient: msg.NewMsgClient(msgConn),
conversationClient: pbconversation.NewConversationClient(conversationConn),
thirdClient: third.NewThirdClient(thirdConn),
}
srv.deleteMsg()
//srv.clearS3()
//srv.clearUserMsg()
}
+36
@@ -0,0 +1,36 @@
package tools
import (
"fmt"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mcontext"
"os"
"time"
)
func (c *cronServer) deleteMsg() {
now := time.Now()
deltime := now.Add(-time.Hour * 24 * time.Duration(c.config.CronTask.RetainChatRecords))
operationID := fmt.Sprintf("cron_msg_%d_%d", os.Getpid(), deltime.UnixMilli())
ctx := mcontext.SetOperationID(c.ctx, operationID)
log.ZDebug(ctx, "Destruct chat records", "deltime", deltime, "timestamp", deltime.UnixMilli())
const (
deleteCount = 10000
deleteLimit = 50
)
var count int
for i := 1; i <= deleteCount; i++ {
ctx := mcontext.SetOperationID(c.ctx, fmt.Sprintf("%s_%d", operationID, i))
resp, err := c.msgClient.DestructMsgs(ctx, &msg.DestructMsgsReq{Timestamp: deltime.UnixMilli(), Limit: deleteLimit})
if err != nil {
log.ZError(ctx, "cron destruct chat records failed", err)
break
}
count += int(resp.Count)
if resp.Count < deleteLimit {
break
}
}
log.ZDebug(ctx, "cron destruct chat records end", "deltime", deltime, "cont", time.Since(now), "count", count)
}
+79
@@ -0,0 +1,79 @@
package tools
import (
"fmt"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mcontext"
"os"
"time"
)
func (c *cronServer) clearS3() {
start := time.Now()
deleteTime := start.Add(-time.Hour * 24 * time.Duration(c.config.CronTask.FileExpireTime))
operationID := fmt.Sprintf("cron_s3_%d_%d", os.Getpid(), deleteTime.UnixMilli())
ctx := mcontext.SetOperationID(c.ctx, operationID)
log.ZDebug(ctx, "deleteoutDatedData", "deletetime", deleteTime, "timestamp", deleteTime.UnixMilli())
const (
deleteCount = 10000
deleteLimit = 100
)
var count int
for i := 1; i <= deleteCount; i++ {
resp, err := c.thirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{ExpireTime: deleteTime.UnixMilli(), ObjectGroup: c.config.CronTask.DeleteObjectType, Limit: deleteLimit})
if err != nil {
log.ZError(ctx, "cron deleteoutDatedData failed", err)
return
}
count += int(resp.Count)
if resp.Count < deleteLimit {
break
}
}
log.ZDebug(ctx, "cron deleteoutDatedData success", "deltime", deleteTime, "cont", time.Since(start), "count", count)
}
// var req *third.DeleteOutdatedDataReq
// count1, err := ExtractField(ctx, c.thirdClient.DeleteOutdatedData, req, (*third.DeleteOutdatedDataResp).GetCount)
//
// c.thirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{})
// msggateway.GetUsersOnlineStatusCaller.Invoke(ctx, &msggateway.GetUsersOnlineStatusReq{})
//
// var cli ThirdClient
//
// c111, err := cli.DeleteOutdatedData(ctx, 100)
//
// cli.ThirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{})
//
// cli.AuthSign(ctx, &third.AuthSignReq{})
//
// cli.SetAppBadge()
//
//}
//
//func extractField[A, B, C any](ctx context.Context, fn func(ctx context.Context, req *A, opts ...grpc.CallOption) (*B, error), req *A, get func(*B) C) (C, error) {
// resp, err := fn(ctx, req)
// if err != nil {
// var c C
// return c, err
// }
// return get(resp), nil
//}
//
//func ignore(_ any, err error) error {
// return err
//}
//
//type ThirdClient struct {
// third.ThirdClient
//}
//
//func (c *ThirdClient) DeleteOutdatedData(ctx context.Context, expireTime int64) (int32, error) {
// return extractField(ctx, c.ThirdClient.DeleteOutdatedData, &third.DeleteOutdatedDataReq{ExpireTime: expireTime}, (*third.DeleteOutdatedDataResp).GetCount)
//}
//
//func (c *ThirdClient) DeleteOutdatedData1(ctx context.Context, expireTime int64) error {
// return ignore(c.ThirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{ExpireTime: expireTime}))
//}
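Uncommented and made self-contained, the generic helper sketched above could look like the following; the stub request/response types and the fake RPC are illustrative stand-ins for the generated gRPC types, and the grpc.CallOption plumbing is omitted for brevity:

package main

import (
	"context"
	"fmt"
)

// extractField calls fn and projects a single field out of its response,
// mirroring the commented-out sketch above. Go generics make the projection
// type-safe without per-RPC boilerplate.
func extractField[A, B, C any](ctx context.Context, fn func(context.Context, *A) (*B, error), req *A, get func(*B) C) (C, error) {
	resp, err := fn(ctx, req)
	if err != nil {
		var zero C
		return zero, err
	}
	return get(resp), nil
}

// Illustrative stand-ins for the generated request/response types.
type deleteOutdatedDataReq struct{ ExpireTime int64 }
type deleteOutdatedDataResp struct{ Count int32 }

func deleteOutdatedData(_ context.Context, _ *deleteOutdatedDataReq) (*deleteOutdatedDataResp, error) {
	return &deleteOutdatedDataResp{Count: 42}, nil // fake RPC for the example
}

func main() {
	count, err := extractField(context.Background(), deleteOutdatedData,
		&deleteOutdatedDataReq{ExpireTime: 1700000000000},
		func(r *deleteOutdatedDataResp) int32 { return r.Count })
	fmt.Println(count, err) // 42 <nil>
}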
+34
@@ -0,0 +1,34 @@
package tools

import (
	"fmt"
	"os"
	"time"

	pbconversation "github.com/openimsdk/protocol/conversation"
	"github.com/openimsdk/tools/log"
	"github.com/openimsdk/tools/mcontext"
)

// clearUserMsg clears conversation messages according to each user's own
// destruct settings, paging through them in fixed-size batches.
func (c *cronServer) clearUserMsg() {
	now := time.Now()
	operationID := fmt.Sprintf("cron_user_msg_%d_%d", os.Getpid(), now.UnixMilli())
	ctx := mcontext.SetOperationID(c.ctx, operationID)
	log.ZDebug(ctx, "clear user msg cron start")

	const (
		deleteCount = 10000 // upper bound on batches per run
		deleteLimit = 100   // conversations handled per RPC call
	)
	var count int
	for i := 1; i <= deleteCount; i++ {
		resp, err := c.conversationClient.ClearUserConversationMsg(ctx, &pbconversation.ClearUserConversationMsgReq{Timestamp: now.UnixMilli(), Limit: deleteLimit})
		if err != nil {
			log.ZError(ctx, "ClearUserConversationMsg failed", err)
			return
		}
		count += int(resp.Count)
		if resp.Count < deleteLimit {
			break
		}
	}
	log.ZDebug(ctx, "clear user msg cron task completed", "cost", time.Since(now), "count", count)
}
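All three jobs repeat the same drain loop: call an RPC with a fixed limit, accumulate resp.Count, and stop early once a batch comes back short. A refactor sketch of that shared pattern, using hypothetical names (batchDelete, del) that do not exist in the repository:

package main

import (
	"context"
	"fmt"
)

// batchDelete repeatedly invokes del until it reports fewer deletions than
// limit (the backlog is drained) or maxBatches is reached, returning the
// total number of records removed. This is the loop shared by deleteMsg,
// clearS3, and clearUserMsg above, factored out for illustration.
func batchDelete(ctx context.Context, maxBatches int, limit int32, del func(ctx context.Context, limit int32) (int32, error)) (int, error) {
	var total int
	for i := 0; i < maxBatches; i++ {
		n, err := del(ctx, limit)
		if err != nil {
			return total, err
		}
		total += int(n)
		if n < limit {
			break
		}
	}
	return total, nil
}

func main() {
	backlog := int32(230) // pretend 230 records are pending deletion
	total, err := batchDelete(context.Background(), 10000, 100, func(_ context.Context, limit int32) (int32, error) {
		n := limit
		if backlog < limit {
			n = backlog
		}
		backlog -= n
		return n, nil
	})
	fmt.Println(total, err) // 230 <nil>
}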
+6 -2
@@ -55,6 +55,10 @@ func (a *AuthRpcCmd) Exec() error {
 func (a *AuthRpcCmd) runE() error {
 	return startrpc.Start(a.ctx, &a.authConfig.Discovery, &a.authConfig.RpcConfig.Prometheus, a.authConfig.RpcConfig.RPC.ListenIP,
-		a.authConfig.RpcConfig.RPC.RegisterIP, a.authConfig.RpcConfig.RPC.Ports,
-		a.Index(), a.authConfig.Share.RpcRegisterName.Auth, &a.authConfig.Share, a.authConfig, auth.Start)
+		a.authConfig.RpcConfig.RPC.RegisterIP, a.authConfig.RpcConfig.RPC.AutoSetPorts, a.authConfig.RpcConfig.RPC.Ports,
+		a.Index(), a.authConfig.Share.RpcRegisterName.Auth, &a.authConfig.Share, a.authConfig,
+		[]string{
+			a.authConfig.Share.RpcRegisterName.MessageGateway,
+		},
+		auth.Start)
 }
+4 -2
@@ -57,6 +57,8 @@ func (a *ConversationRpcCmd) Exec() error {
 func (a *ConversationRpcCmd) runE() error {
 	return startrpc.Start(a.ctx, &a.conversationConfig.Discovery, &a.conversationConfig.RpcConfig.Prometheus, a.conversationConfig.RpcConfig.RPC.ListenIP,
-		a.conversationConfig.RpcConfig.RPC.RegisterIP, a.conversationConfig.RpcConfig.RPC.Ports,
-		a.Index(), a.conversationConfig.Share.RpcRegisterName.Conversation, &a.conversationConfig.Share, a.conversationConfig, conversation.Start)
+		a.conversationConfig.RpcConfig.RPC.RegisterIP, a.conversationConfig.RpcConfig.RPC.AutoSetPorts, a.conversationConfig.RpcConfig.RPC.Ports,
+		a.Index(), a.conversationConfig.Share.RpcRegisterName.Conversation, &a.conversationConfig.Share, a.conversationConfig,
+		nil,
+		conversation.Start)
 }
+4 -2
@@ -58,6 +58,8 @@ func (a *FriendRpcCmd) Exec() error {
 func (a *FriendRpcCmd) runE() error {
 	return startrpc.Start(a.ctx, &a.relationConfig.Discovery, &a.relationConfig.RpcConfig.Prometheus, a.relationConfig.RpcConfig.RPC.ListenIP,
-		a.relationConfig.RpcConfig.RPC.RegisterIP, a.relationConfig.RpcConfig.RPC.Ports,
-		a.Index(), a.relationConfig.Share.RpcRegisterName.Friend, &a.relationConfig.Share, a.relationConfig, relation.Start)
+		a.relationConfig.RpcConfig.RPC.RegisterIP, a.relationConfig.RpcConfig.RPC.AutoSetPorts, a.relationConfig.RpcConfig.RPC.Ports,
+		a.Index(), a.relationConfig.Share.RpcRegisterName.Friend, &a.relationConfig.Share, a.relationConfig,
+		nil,
+		relation.Start)
 }
+4 -2
@@ -59,6 +59,8 @@ func (a *GroupRpcCmd) Exec() error {
 func (a *GroupRpcCmd) runE() error {
 	return startrpc.Start(a.ctx, &a.groupConfig.Discovery, &a.groupConfig.RpcConfig.Prometheus, a.groupConfig.RpcConfig.RPC.ListenIP,
-		a.groupConfig.RpcConfig.RPC.RegisterIP, a.groupConfig.RpcConfig.RPC.Ports,
-		a.Index(), a.groupConfig.Share.RpcRegisterName.Group, &a.groupConfig.Share, a.groupConfig, group.Start, versionctx.EnableVersionCtx())
+		a.groupConfig.RpcConfig.RPC.RegisterIP, a.groupConfig.RpcConfig.RPC.AutoSetPorts, a.groupConfig.RpcConfig.RPC.Ports,
+		a.Index(), a.groupConfig.Share.RpcRegisterName.Group, &a.groupConfig.Share, a.groupConfig,
+		nil,
+		group.Start, versionctx.EnableVersionCtx())
 }
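The four hunks above thread two new arguments through every call site of startrpc.Start: an AutoSetPorts flag after RegisterIP, and a slice of dependent RPC register names (or nil) just before the start callback. The callee-side change is not shown in this diff; based only on these call sites, the updated signature plausibly reads something like the stub below, where the parameter names and the rpcFn/option types are guesses:

package startrpc

import (
	"context"

	"google.golang.org/grpc"
)

// Start sketch implied by the call sites above. Only the positions of the two
// new arguments (autoSetPorts and watchServiceNames) are implied by the diff;
// everything else here is an assumption.
func Start[T any](
	ctx context.Context,
	discovery, prometheus any, // *config.Discovery, *config.Prometheus in the real code
	listenIP, registerIP string,
	autoSetPorts bool, // new: let the service pick/register ports automatically
	ports []int,
	index int,
	rpcRegisterName string,
	share any, // *config.Share in the real code
	config T,
	watchServiceNames []string, // new: dependent services this RPC must resolve
	rpcFn func(ctx context.Context, config T, server *grpc.Server) error,
	options ...grpc.ServerOption,
) error {
	// The real implementation lives in pkg/common/startrpc; omitted here.
	_ = autoSetPorts
	_ = watchServiceNames
	return nil
}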

Some files were not shown because too many files have changed in this diff.