Compare commits

...

107 Commits

Author SHA1 Message Date
KubeSphere CI Bot
c8e131fc13 [release-3.3] adjust Pod status filter (#5488)
adjust Pod status filter

Signed-off-by: frezes <zhangjunhao@kubesphere.io>
Co-authored-by: frezes <zhangjunhao@kubesphere.io>
2023-01-17 14:26:01 +08:00
KubeSphere CI Bot
839a31ac1d [release-3.3] Fix: Goroutine leaks when getting audit event sender times out (#5475)
* Fix: Goroutine leaks when getting audit event sender times out

* make it more readable

Co-authored-by: hzhhong <hung.z.h916@gmail.com>
2023-01-13 11:14:33 +08:00
KubeSphere CI Bot
a0ba5f6085 [release-3.3] fix Home field fault in appstore application (#5474)
fix appstore app home field

Co-authored-by: xiaoliu <978911210@qq.com>
2023-01-13 11:14:25 +08:00
KubeSphere CI Bot
658497aa0a [release-3.3] fix: ks-apiserver panic error: ServiceAccount's Secret index out of r… (#5472)
fix: ks-apiserver panic error: ServiceAccount's Secret index out of range

Co-authored-by: peng wu <2030047311@qq.com>
2023-01-13 11:14:17 +08:00
KubeSphere CI Bot
a47bf848df [release-3.3] Fix missing maintainers in helm apps (#5473)
fix missing maintainers in helm apps

Co-authored-by: qingwave <854222409@qq.com>
2023-01-13 11:07:17 +08:00
hongzhouzi
dbb3f04b9e Resolved Conflict [release-3.3] Fix failed to cache resources if group version not found #5408 (#5466)
Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>
2023-01-12 18:45:17 +08:00
hongzhouzi
705ea4af40 Resolved Conflict [release-3.3] Fix id generate error in IPv6-only environment. #5419 (#5465)
Resolved Conflict [release-3.3] Fix id generate error in IPv6-only environment. #5459

Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>
Co-authored-by: isyes <isyes@foxmail.com>
2023-01-12 18:26:17 +08:00
KubeSphere CI Bot
366d1e16e4 [release-3.3] fix: concurrent map read and map write caused by reloading in ks-apiserver (#5464)
fix: concurrent map read and map write caused by reloading in ks-apiserver.

Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>
Co-authored-by: hongzhouzi <hongzhouzi@kubesphere.io>
2023-01-12 17:55:17 +08:00
hongzhouzi
690d5be824 Resolved Conflict [release-3.3] fix: Resolved some data out of sync after live-reload. #5458 (#5462)
Resolved Conflict [fix: Resolved some data out of sync after live-reload.]

Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>
2023-01-12 17:44:17 +08:00
KubeSphere CI Bot
c0419ddab5 [release-3.3] add dynamic options for cache (#5325)
* add dynamic options for cache

* fixed bugs based on unit-test

* add doc for cache

* make cache implementations private

* Change simpleCache name to InMemoryCache

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* Remove fake cache and replace it with the in-memory cache with default parameters

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>
Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-11-03 15:55:00 +08:00
KubeSphere CI Bot
80b0301f79 [release-3.3] Fix: globalrole has cluster management right can not manage cluster (#5334)
Fix: globalrole with cluster management permission cannot manage clusters

Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-10-27 14:47:50 +08:00
KubeSphere CI Bot
7162d41310 [release-3.3] Check cluster permission for create/update workspacetemplate (#5310)
* add cluster authorization for create/update workspacetemplate

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* add handle forbidden err

* add forbidden error log

* allow to use clusters of public visibility

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>
Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-10-21 09:55:41 +08:00
KubeSphere CI Bot
6b10d346ca [release-3.3] fix #5267 by renaming yaml struct tag (#5275)
fix #5267 by renaming yaml struct tag

Signed-off-by: chavacava <salvadorcavadini+github@gmail.com>
Co-authored-by: chavacava <salvadorcavadini+github@gmail.com>
2022-10-08 14:34:33 +08:00
KubeSphere CI Bot
6a0d5ba93c [release-3.3] Fix: Can not resolve the resource scope correctly (#5274)
Fix: can not resolve the resource scope of clusters.cluster.kubesphere.io correctly

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>
Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-10-08 13:58:57 +08:00
KubeSphere CI Bot
d87a782257 [release-3.3] Fix cluster gateway logs and resource status display exception (#5250)
Cluster gateway logs and resource status display exception

Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>
Co-authored-by: hongzhouzi <hongzhouzi@kubesphere.io>
2022-09-28 00:11:23 +08:00
KubeSphere CI Bot
82e55578a8 [release-3.3] fix gateway upgrade validate error. (#5236)
gateway upgrade validate error.

Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>
Co-authored-by: hongzhouzi <hongzhouzi@kubesphere.io>
2022-09-21 17:13:17 +08:00
KubeSphere CI Bot
5b9c357160 [release-3.3] Fix: when placement is empty return error (#5218)
Fix: when placement is empty return error

Co-authored-by: Wenhao Zhou <wenhaozhou@yunfiy.com>
2022-09-15 19:38:47 +08:00
KubeSphere CI Bot
c385dd92e4 [release-3.3] Add authorization control for patching workspacetemplates (#5217)
* update patch workspacetemplate for supporting patch with JsonPatchType and change the authorization processing

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* make goimports

* Fix: if the type is not string it will lead to panic

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* Add jsonpatchutil for handling json patch data

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* Updated patch workspacetemplate to make the code run more efficiently

* fix: multiple clusterrolebindings cannot authorize

* Correct wrong spelling

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>
Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-09-15 19:32:47 +08:00
KubeSphere CI Bot
1e1b2bd594 [release-3.3] support recording disable and enable users in auditing (#5202)
support recording disable and enable users in auditing

Signed-off-by: wanjunlei <wanjunlei@kubesphere.io>
Co-authored-by: wanjunlei <wanjunlei@kubesphere.io>
2022-09-08 10:25:41 +08:00
KubeSphere CI Bot
951b86648c [release-3.3] fix bug helm repo paging query (#5201)
* fix bug helmrepo paging query

* fix bug helm repo paging query

Co-authored-by: mayongxing <mayongxing@cmsr.chinamobile.com>
2022-09-08 10:17:41 +08:00
KubeSphere CI Bot
04433c139d [release-3.3] Fix: index out of range when merging two repo indexes (#5169)
Fix: index out of range when merging two repo indexes

Co-authored-by: LiHui <andrewli@kubesphere.io>
2022-08-25 16:06:36 +08:00
KubeSphere CI Bot
3b8c28d21e [release-3.3] Support for filtering workspace roles using labelSelector (#5162)
Support for filtering workspace roles using labelSelector

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>
Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-08-23 10:30:21 +08:00
KubeSphere CI Bot
9489718270 [release-3.3] fill field status of helmrepo in response (#5158)
fill field status of helmrepo in response

Signed-off-by: x893675 <x893675@icloud.com>
Co-authored-by: x893675 <x893675@icloud.com>
2022-08-22 16:15:00 +08:00
KubeSphere CI Bot
54df6b8c8c [release-3.3] fix cluster ready condition always true (#5137)
fix cluster ready condition always true

Signed-off-by: x893675 <x893675@icloud.com>
Co-authored-by: x893675 <x893675@icloud.com>
2022-08-16 14:12:46 +08:00
KubeSphere CI Bot
d917905529 [release-3.3] Fix ingress P95 delay time promql statement (#5132)
Fix ingress P95 delay time promql statement

Co-authored-by: Xinzhao Xu <z2d@jifangcheng.com>
2022-08-14 16:49:35 +08:00
KubeSphere CI Bot
cd6f940f1d [release-3.3] Adjust container terminal priority: bash, sh (#5076)
Adjust container terminal priority: bash, sh

Co-authored-by: tal66 <77445020+tal66@users.noreply.github.com>
2022-07-21 11:16:29 +08:00
KubeSphere CI Bot
921a8f068b [release-3.3] skip generated code when fmt code (#5079)
skip generated code when fmt code

Co-authored-by: LiHui <andrewli@kubesphere.io>
2022-07-21 11:16:14 +08:00
KubeSphere CI Bot
641aa1dfcf [release-3.3] close remote terminal.(#5023) (#5028)
close remote terminal.(kubesphere#5023)

Co-authored-by: lixueduan <li.xueduan@99cloud.net>
2022-07-06 18:08:34 +08:00
Rick
4522c841af Add the corresponding label 'kind/bug' to the issue template (#4952) 2022-06-20 10:32:52 +08:00
Calvin Yu
8e906ed3de Create SECURITY.md 2022-06-15 10:12:21 +08:00
KubeSphere CI Bot
ac36ff5752 Merge pull request #4940 from xyz-li/sa_token
create default token for service account
2022-06-09 11:32:40 +08:00
LiHui
098b77fb4c add key to queue 2022-06-09 11:13:56 +08:00
LiHui
e97f27e580 create sa token 2022-06-09 10:28:55 +08:00
KubeSphere CI Bot
bc00b67a6e Merge pull request #4938 from qingwave/typo-fix
fix some typos
2022-06-08 10:25:00 +08:00
KubeSphere CI Bot
8b0f2674bd Merge pull request #4939 from iawia002/fix-sync
Promptly handle the cluster when it is deleted
2022-06-08 10:23:43 +08:00
KubeSphere CI Bot
108963f87b Merge pull request #4941 from SinTod/master
Unified call WriteEntity func
2022-06-08 10:20:59 +08:00
KubeSphere CI Bot
6525a3c3b3 Merge pull request #4937 from zhanghw0354/master
add unit test for GetServiceTracing
2022-06-08 10:02:00 +08:00
KubeSphere CI Bot
f0cc7f6430 Merge pull request #4928 from xyz-li/gops
Add agent to report additional information.
2022-06-07 10:51:38 +08:00
LiHui
47563af08c add gops agent to ks-apiserver&&controller-manager 2022-06-07 09:45:09 +08:00
SinTod
26b871ecf4 Unified call WriteEntity func 2022-06-06 15:30:11 +08:00
Xinzhao Xu
5e02f1b86b Promptly handle the cluster when it is deleted 2022-06-06 11:31:14 +08:00
qingwave
c78ab9039a fix some typos 2022-06-06 02:43:23 +00:00
zhanghaiwen
02e99365c7 add unit test for GetServiceTracing 2022-06-02 14:46:27 +08:00
KubeSphere CI Bot
0c2a419a5e Merge pull request #4936 from xyz-li/key
Fix kubeconfig generate bug
2022-06-02 11:58:54 +08:00
LiHui
77e0373777 fix gen key type 2022-06-02 11:19:45 +08:00
KubeSphere CI Bot
04d70b1db4 Merge pull request #4921 from xyz-li/master
complete the help doc
2022-06-01 16:06:53 +08:00
KubeSphere CI Bot
86beabdb32 Merge pull request #4927 from qingwave/gateway-log-context
gateway: avoid pod log connection leak
2022-06-01 16:02:31 +08:00
LiHui
1e8cea4971 add gops 2022-06-01 15:00:33 +08:00
qingwave
107e2ec64c fix: avoid gateway pod log connection leak 2022-06-01 02:14:19 +00:00
LiHui
17b97d7ada complete the help doc 2022-05-31 10:41:25 +08:00
KubeSphere CI Bot
2758e35a4e Merge pull request #4881 from suwliang3/master
feature: test functions in package resources/v1alpha3 by building restful's re…
2022-05-29 23:23:40 +08:00
KubeSphere CI Bot
305da3c0c5 Merge pull request #4918 from anhoder/master
fix: goroutine leak when opening terminal
2022-05-29 23:18:51 +08:00
KubeSphere CI Bot
e5ac3608f6 Merge pull request #4916 from ONE7live/dev_test
add some unit test for models
2022-05-29 23:17:50 +08:00
anhoder
d0933055cb fix: goroutine leak when opening terminal 2022-05-27 18:25:43 +08:00
KubeSphere CI Bot
fc7cdd7300 Merge pull request #4915 from wansir/master
chore: update vendor
2022-05-27 17:47:25 +08:00
hongming
52b7fb71b2 chore: update vendor 2022-05-27 16:42:26 +08:00
ONE7live
4247387144 add some unit test for models
Signed-off-by: ONE7live <wangqi_yewu@cmss.chinamobile.com>
2022-05-27 16:10:01 +08:00
KubeSphere CI Bot
da5e4cc247 Merge pull request #4904 from xyz-li/master
add workspace to review list
2022-05-25 14:34:31 +08:00
LiHui
73852a8a4b add workspace to review list 2022-05-25 11:58:21 +08:00
suwanliang
b2be653639 run make fmt and make goimports 2022-05-24 18:37:16 +08:00
KubeSphere CI Bot
0418277b57 Merge pull request #4896 from wansir/fix-4890
fix: cluster list granted to users is incorrect
2022-05-23 17:59:52 +08:00
hongming
382be8b16b fix: cluster list granted to users is incorrect 2022-05-23 17:06:19 +08:00
KubeSphere CI Bot
32ac94a7e5 Merge pull request #4889 from xyz-li/sync
cluster not found and repo not found
2022-05-23 15:48:13 +08:00
KubeSphere CI Bot
3e381c9ad5 Merge pull request #4879 from xiaoping378/patch-1
fix unformatted log
2022-05-23 11:56:51 +08:00
LiHui
35027a346b add openpitrix Client to apiserver 2022-05-20 17:37:52 +08:00
LiHui
32b85cd625 cluster clusters 2022-05-20 11:53:51 +08:00
KubeSphere CI Bot
559539275e Merge pull request #4888 from wansir/master
refactor: remove the useless CRD
2022-05-19 15:58:58 +08:00
hongming
211fb293e0 refactor: remove the useless CRD 2022-05-19 15:43:37 +08:00
suwanliang
530b358c94 test functions in package resources/v1alpha3 by building restful's request and response 2022-05-16 18:27:06 +08:00
KubeSphere CI Bot
49cc977cf0 Merge pull request #4877 from wansir/fix-4876
Reduce unnecessary status updates
2022-05-16 17:18:06 +08:00
KubeSphere CI Bot
2b575d04aa Merge pull request #4880 from iawia002/workspace-detail-api
Add get workspace API
2022-05-16 17:17:05 +08:00
Xinzhao Xu
4a0e4ba73c update openapi 2022-05-16 16:16:36 +08:00
Xinzhao Xu
26576cc665 Add get workspace API 2022-05-16 16:14:33 +08:00
hongming
c434971140 Sync cluster status periodically 2022-05-16 16:00:54 +08:00
hongming
825a38f948 Reduce unnecessary status updates 2022-05-16 10:43:27 +08:00
xiaoping
aa78e3215c fix unformatted log 2022-05-15 21:05:58 +08:00
KubeSphere CI Bot
1c96f99072 Merge pull request #4870 from wansir/fix-4857
Fix: restricted users cannot activate manually
2022-05-12 14:08:04 +08:00
KubeSphere CI Bot
788fc508e3 Merge pull request #4868 from wansir/fix-4780
Fix: deny the blocked user request
2022-05-12 13:53:04 +08:00
KubeSphere CI Bot
0f1c815cf7 Merge pull request #4865 from weihongzhoulord/fix-gateway-4841
fix: modify the default resource reservation of gateway system
2022-05-12 12:59:04 +08:00
KubeSphere CI Bot
f9abd09f99 Merge pull request #4861 from StevenBrown008/master
fix tcp match error
2022-05-12 12:57:40 +08:00
hongming
f304ecdd01 Fix: deny the blocked user request 2022-05-12 12:17:41 +08:00
hongming
a67451a51a Fix: restricted users cannot activate manually 2022-05-12 10:00:49 +08:00
fangyunyun
ce431c53a7 Merge remote-tracking branch 'upstream/master' 2022-05-11 17:59:53 +08:00
hongzhouzi
dd836fc652 fix: modify the default resource reservation of gateway system, gateway.go typo 2022-05-11 10:50:33 +08:00
KubeSphere CI Bot
ac423922cf Merge pull request #4866 from wenchajun/gpu
Fix gpu null pointer exception
2022-05-11 10:12:38 +08:00
chengdehao
75803113f6 fix nil pointer
Signed-off-by: chengdehao <dehaocheng@yunify.com>
2022-05-10 23:36:50 +08:00
KubeSphere CI Bot
1a6bc3c890 Merge pull request #4862 from wansir/fix-4781
Fix disabled status not work for OAuth
2022-05-10 11:18:36 +08:00
hongming
0a44c30a46 Fix disabled status not work for OAuth 2022-05-09 17:11:04 +08:00
fangyunyun
0b17228017 fix tcp match error 2022-05-09 15:43:34 +08:00
KubeSphere CI Bot
499e21193c Merge pull request #4605 from iawia002/clean
Cleanup cluster controller and remove unused code
2022-05-06 16:39:02 +08:00
KubeSphere CI Bot
6f3eec23ae Merge pull request #4847 from xyz-li/master
Fix: e2e test failed
2022-05-06 16:08:02 +08:00
LiHui
3a681a28c6 update kind image 2022-05-06 14:30:26 +08:00
LiHui
f994174f75 Fix: e2e test failed 2022-05-06 10:14:32 +08:00
KubeSphere CI Bot
233829a7d5 Merge pull request #4838 from wansir/fix-4039
Fix typo
2022-05-05 09:47:30 +08:00
hongming
bc7adc1be6 Fix typo 2022-04-29 18:49:44 +08:00
KubeSphere CI Bot
446f55206e Merge pull request #4835 from wansir/fix-4039
Fix crash caused by resource discovery failure
2022-04-29 16:40:00 +08:00
KubeSphere CI Bot
b2b1fb31d3 Merge pull request #4815 from 2hangchen/master
fix: fix the gateway variable name.
2022-04-29 15:19:00 +08:00
KubeSphere CI Bot
8d97652b13 Merge pull request #4833 from SinTod/master
fix controller-manager Dockerfile kustomize targetos
2022-04-29 15:18:58 +08:00
hongming
7603c74ebb Fix crash caused by resource discovery failure 2022-04-29 00:03:32 +08:00
SinTod
dc10a37624 fix controller-manager Dockerfile kustomize targetos 2022-04-28 14:52:30 +08:00
KubeSphere CI Bot
ef5fcbd9ce Merge pull request #4831 from iawia002/fix-clusterclient
Double check in clusterclient if the cluster exists but is not cached
2022-04-27 17:30:45 +08:00
Pilipalaca
1e5c4c7749 make deepcopy
Signed-off-by: Pilipalaca <85749695@qq.com>
2022-04-27 15:50:46 +08:00
Xinzhao Xu
757fca8ade Double check in clusterclient if the cluster exists but is not cached 2022-04-27 15:32:31 +08:00
Pilipalaca
e90efe1c34 fix: fix the gateway variable name.
Signed-off-by: Pilipalaca <85749695@qq.com>
2022-04-21 17:13:05 +08:00
KubeSphere CI Bot
7d9563dca1 Merge pull request #4803 from polym/docs/readme
docs: update kubekey version to v2.0.0
2022-04-20 10:45:38 +08:00
hongbo.mo
eec4217fdb docs: update kubekey version to v2.0.0 2022-04-18 16:32:34 +08:00
Xinzhao Xu
1e760b0069 Cleanup cluster controller and remove unused code 2022-03-09 10:52:33 +08:00
131 changed files with 3372 additions and 1987 deletions

View File

@@ -1,5 +1,6 @@
---
name: Bug report
labels: ["kind/bug"]
about: Create a report to help us improve
---

View File

@@ -2,7 +2,7 @@ kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
image: kindest/node:v1.19.7
image: kindest/node:v1.21.1
extraMounts:
- hostPath: /etc/localtime
containerPath: /etc/localtime

View File

@@ -6,8 +6,8 @@
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:trivialVersions=true"
GV="network:v1alpha1 servicemesh:v1alpha2 tenant:v1alpha1 tenant:v1alpha2 devops:v1alpha1 iam:v1alpha2 devops:v1alpha3 cluster:v1alpha1 storage:v1alpha1 auditing:v1alpha1 types:v1beta1 quota:v1alpha2 application:v1alpha1 notification:v2beta1"
MANIFESTS="application/* cluster/* iam/* network/v1alpha1 quota/* storage/* tenant/*"
GV="network:v1alpha1 servicemesh:v1alpha2 tenant:v1alpha1 tenant:v1alpha2 devops:v1alpha1 iam:v1alpha2 devops:v1alpha3 cluster:v1alpha1 storage:v1alpha1 auditing:v1alpha1 types:v1beta1 quota:v1alpha2 application:v1alpha1 notification:v2beta1 gateway:v1alpha1"
MANIFESTS="application/* cluster/* iam/* network/v1alpha1 quota/* storage/* tenant/* gateway/*"
# App Version
APP_VERSION = v3.2.0

View File

@@ -139,7 +139,7 @@ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3
```yaml
# Download KubeKey
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
# Make kk executable
chmod +x kk
# Create a cluster

50
SECURITY.md Normal file
View File

@@ -0,0 +1,50 @@
# Security Policy
## Supported Versions
The following versions of KubeSphere are currently supported with security updates.
| Version | Supported |
| ------- | ------------------ |
| 3.2.x | :white_check_mark: |
| 3.1.x | :white_check_mark: |
| 3.0.x | :white_check_mark: |
| 2.1.x | :white_check_mark: |
| < 2.1.x | :x: |
## Reporting a Vulnerability
### Security Vulnerability Disclosure and Response Process
To keep KubeSphere secure, a security vulnerability disclosure and response process has been adopted. A security team has been set up in the KubeSphere community, and issues and PRs are welcome from all contributors.
The primary goal of this process is to reduce the total exposure time of users to publicly known vulnerabilities. To quickly fix vulnerabilities of KubeSphere, the security team is responsible for the entire vulnerability management process, including internal communication and external disclosure.
If you find a vulnerability or encounter a security incident involving vulnerabilities of KubeSphere, please report it as soon as possible to the KubeSphere security team (security@kubesphere.io).
Please provide as much vulnerability information as possible, in the following format:
- Issue title (please add the 'Security' label)*:
- Overview*:
- Affected components and version number*:
- CVE number (if any):
- Vulnerability verification process*:
- Contact information*:
The asterisk (*) indicates the required field.
### Response Time
The KubeSphere security team will confirm the vulnerabilities and contact you within 2 working days after your submission.
We will publicly thank you after fixing the security vulnerability. To avoid negative impact, please keep the vulnerability confidential until we fix it. We would appreciate it if you could follow this code of conduct:
The vulnerability will not be disclosed until KubeSphere releases a patch for it.
The details of the vulnerability, for example, exploits code, will not be disclosed.

View File

@@ -13167,6 +13167,35 @@
}
}
},
"/kapis/tenant.kubesphere.io/v1alpha3/workspaces/{workspace}": {
"get": {
"produces": [
"application/json"
],
"tags": [
"Workspace"
],
"summary": "Get workspace.",
"operationId": "GetWorkspace",
"parameters": [
{
"type": "string",
"description": "workspace name",
"name": "workspace",
"in": "path",
"required": true
}
],
"responses": {
"200": {
"description": "ok",
"schema": {
"$ref": "#/definitions/v1alpha1.Workspace"
}
}
}
}
},
"/kapis/tenant.kubesphere.io/v1alpha3/workspacetemplates": {
"get": {
"produces": [
@@ -14280,14 +14309,14 @@
},
"metering.OpenPitrixStatistic": {
"required": [
"memory_usage_wo_cache",
"net_bytes_transmitted",
"net_bytes_received",
"pvc_bytes_total",
"deployments",
"statefulsets",
"daemonsets",
"cpu_usage",
"memory_usage_wo_cache",
"net_bytes_transmitted",
"net_bytes_received"
"cpu_usage"
],
"properties": {
"cpu_usage": {
@@ -20810,6 +20839,27 @@
}
}
},
"v1alpha1.Workspace": {
"properties": {
"apiVersion": {
"description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
"type": "string"
},
"kind": {
"description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"type": "string"
},
"metadata": {
"$ref": "#/definitions/v1.ObjectMeta"
},
"spec": {
"$ref": "#/definitions/v1alpha1.WorkspaceSpec"
},
"status": {
"$ref": "#/definitions/v1alpha1.WorkspaceStatus"
}
}
},
"v1alpha1.WorkspaceSpec": {
"properties": {
"manager": {
@@ -20820,6 +20870,7 @@
}
}
},
"v1alpha1.WorkspaceStatus": {},
"v1alpha2.APIResponse": {
"properties": {
"histogram": {
@@ -21337,8 +21388,8 @@
},
"v1alpha2.Node": {
"required": [
"id",
"labelMinor",
"id",
"label",
"rank",
"controls"
@@ -21448,10 +21499,10 @@
},
"v1alpha2.NodeSummary": {
"required": [
"rank",
"id",
"label",
"labelMinor"
"labelMinor",
"rank"
],
"properties": {
"adjacency": {

View File

@@ -26,7 +26,7 @@ RUN mv /tmp/${TARGETOS}-${TARGETARCH}/helm ${OUTDIR}/usr/local/bin/
# install kustomize
ADD https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_${TARGETOS}_${TARGETARCH}.tar.gz /tmp
RUN tar xvzf /tmp/kustomize_${KUSTOMIZE_VERSION}_linux_${TARGETARCH}.tar.gz -C /tmp
RUN tar xvzf /tmp/kustomize_${KUSTOMIZE_VERSION}_${TARGETOS}_${TARGETARCH}.tar.gz -C /tmp
RUN mv /tmp/kustomize ${OUTDIR}/usr/local/bin/

View File

@@ -488,12 +488,12 @@ func addAllControllers(mgr manager.Manager, client k8s.Client, informerFactory i
if cmOptions.MultiClusterOptions.Enable {
clusterController := cluster.NewClusterController(
client.Kubernetes(),
client.KubeSphere(),
client.Config(),
kubesphereInformer.Cluster().V1alpha1().Clusters(),
client.KubeSphere().ClusterV1alpha1().Clusters(),
kubesphereInformer.Iam().V1alpha2().Users().Lister(),
cmOptions.MultiClusterOptions.ClusterControllerResyncPeriod,
cmOptions.MultiClusterOptions.HostClusterName,
kubernetesInformer.Core().V1().ConfigMaps(),
)
addController(mgr, "cluster", clusterController)
}

View File

@@ -82,6 +82,9 @@ type KubeSphereControllerManagerOptions struct {
// * has the lowest priority.
// e.g. *,-foo, means "disable 'foo'"
ControllerGates []string
// Enable gops or not.
GOPSEnabled bool
}
func NewKubeSphereControllerManagerOptions() *KubeSphereControllerManagerOptions {
@@ -144,6 +147,9 @@ func (s *KubeSphereControllerManagerOptions) Flags(allControllerNameSelectors []
"named 'foo', '-foo' disables the controller named 'foo'.\nAll controllers: %s",
strings.Join(allControllerNameSelectors, ", ")))
gfs.BoolVar(&s.GOPSEnabled, "gops", s.GOPSEnabled, "Whether to enable gops or not. When this option is enabled, "+
"controller-manager will listen on a random port on 127.0.0.1; you can then use the gops tool to list and diagnose the running controller-manager.")
kfs := fss.FlagSet("klog")
local := flag.NewFlagSet("klog", flag.ExitOnError)
klog.InitFlags(local)
@@ -236,4 +242,5 @@ func (s *KubeSphereControllerManagerOptions) MergeConfig(cfg *controllerconfig.C
s.MultiClusterOptions = cfg.MultiClusterOptions
s.ServiceMeshOptions = cfg.ServiceMeshOptions
s.GatewayOptions = cfg.GatewayOptions
s.MonitoringOptions = cfg.MonitoringOptions
}

View File

@@ -21,6 +21,7 @@ import (
"fmt"
"os"
"github.com/google/gops/agent"
"github.com/spf13/cobra"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
@@ -73,12 +74,21 @@ func NewControllerManagerCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "controller-manager",
Long: `KubeSphere controller manager is a daemon that`,
Long: `KubeSphere controller manager is a daemon that embeds the control loops shipped with KubeSphere.`,
Run: func(cmd *cobra.Command, args []string) {
if errs := s.Validate(allControllers); len(errs) != 0 {
klog.Error(utilerrors.NewAggregate(errs))
os.Exit(1)
}
if s.GOPSEnabled {
// Add agent to report additional information such as the current stack trace, Go version, memory stats, etc.
// Bind to a random port on address 127.0.0.1
if err := agent.Listen(agent.Options{}); err != nil {
klog.Fatal(err)
}
}
if err = Run(s, controllerconfig.WatchConfigChange(), signals.SetupSignalHandler()); err != nil {
klog.Error(err)
os.Exit(1)

View File

@@ -20,6 +20,12 @@ import (
"crypto/tls"
"flag"
"fmt"
"net/http"
"strings"
"sync"
openpitrixv1 "kubesphere.io/kubesphere/pkg/kapis/openpitrix/v1"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
"kubesphere.io/kubesphere/pkg/apiserver/authentication/token"
@@ -38,9 +44,6 @@ import (
auditingclient "kubesphere.io/kubesphere/pkg/simple/client/auditing/elasticsearch"
"kubesphere.io/kubesphere/pkg/simple/client/cache"
"net/http"
"strings"
"kubesphere.io/kubesphere/pkg/simple/client/devops/jenkins"
eventsclient "kubesphere.io/kubesphere/pkg/simple/client/events/elasticsearch"
"kubesphere.io/kubesphere/pkg/simple/client/k8s"
@@ -56,15 +59,18 @@ type ServerRunOptions struct {
ConfigFile string
GenericServerRunOptions *genericoptions.ServerRunOptions
*apiserverconfig.Config
schemeOnce sync.Once
DebugMode bool
//
DebugMode bool
// Enable gops or not.
GOPSEnabled bool
}
func NewServerRunOptions() *ServerRunOptions {
s := &ServerRunOptions{
GenericServerRunOptions: genericoptions.NewServerRunOptions(),
Config: apiserverconfig.New(),
schemeOnce: sync.Once{},
}
return s
@@ -73,13 +79,14 @@ func NewServerRunOptions() *ServerRunOptions {
func (s *ServerRunOptions) Flags() (fss cliflag.NamedFlagSets) {
fs := fss.FlagSet("generic")
fs.BoolVar(&s.DebugMode, "debug", false, "Don't enable this if you don't know what it means.")
fs.BoolVar(&s.GOPSEnabled, "gops", false, "Whether to enable gops or not. When this option is enabled, "+
"ks-apiserver will listen on a random port on 127.0.0.1; you can then use the gops tool to list and diagnose the running ks-apiserver.")
s.GenericServerRunOptions.AddFlags(fs, s.GenericServerRunOptions)
s.KubernetesOptions.AddFlags(fss.FlagSet("kubernetes"), s.KubernetesOptions)
s.AuthenticationOptions.AddFlags(fss.FlagSet("authentication"), s.AuthenticationOptions)
s.AuthorizationOptions.AddFlags(fss.FlagSet("authorization"), s.AuthorizationOptions)
s.DevopsOptions.AddFlags(fss.FlagSet("devops"), s.DevopsOptions)
s.SonarQubeOptions.AddFlags(fss.FlagSet("sonarqube"), s.SonarQubeOptions)
s.RedisOptions.AddFlags(fss.FlagSet("redis"), s.RedisOptions)
s.S3Options.AddFlags(fss.FlagSet("s3"), s.S3Options)
s.OpenPitrixOptions.AddFlags(fss.FlagSet("openpitrix"), s.OpenPitrixOptions)
s.NetworkOptions.AddFlags(fss.FlagSet("network"), s.NetworkOptions)
@@ -168,21 +175,23 @@ func (s *ServerRunOptions) NewAPIServer(stopCh <-chan struct{}) (*apiserver.APIS
apiServer.SonarClient = sonarqube.NewSonar(sonarClient.SonarQube())
}
var cacheClient cache.Interface
if s.RedisOptions != nil && len(s.RedisOptions.Host) != 0 {
if s.RedisOptions.Host == fakeInterface && s.DebugMode {
apiServer.CacheClient = cache.NewSimpleCache()
} else {
cacheClient, err = cache.NewRedisClient(s.RedisOptions, stopCh)
if err != nil {
return nil, fmt.Errorf("failed to connect to redis service, please check redis status, error: %v", err)
}
apiServer.CacheClient = cacheClient
// If debug mode is on or CacheOptions is nil, a fake cache will be created.
if s.CacheOptions.Type != "" {
if s.DebugMode {
s.CacheOptions.Type = cache.DefaultCacheType
}
cacheClient, err := cache.New(s.CacheOptions, stopCh)
if err != nil {
return nil, fmt.Errorf("failed to create cache, error: %v", err)
}
apiServer.CacheClient = cacheClient
} else {
klog.Warning("ks-apiserver starts without redis provided, it will use in memory cache. " +
"This may cause inconsistencies when running ks-apiserver with multiple replicas.")
apiServer.CacheClient = cache.NewSimpleCache()
s.CacheOptions = &cache.Options{Type: cache.DefaultCacheType}
// fake cache has no error to return
cacheClient, _ := cache.New(s.CacheOptions, stopCh)
apiServer.CacheClient = cacheClient
klog.Warning("ks-apiserver starts without a cache provided; it will use the in-memory cache. " +
"This may cause inconsistencies when running ks-apiserver with multiple replicas, and carries a memory leak risk.")
}
if s.EventsOptions.Host != "" {
@@ -209,6 +218,13 @@ func (s *ServerRunOptions) NewAPIServer(stopCh <-chan struct{}) (*apiserver.APIS
apiServer.AlertingClient = alertingClient
}
if s.Config.MultiClusterOptions.Enable {
cc := clusterclient.NewClusterClient(informerFactory.KubeSphereSharedInformerFactory().Cluster().V1alpha1().Clusters())
apiServer.ClusterClient = cc
}
apiServer.OpenpitrixClient = openpitrixv1.NewOpenpitrixClient(informerFactory, apiServer.KubernetesClient.KubeSphere(), s.OpenPitrixOptions, apiServer.ClusterClient)
server := &http.Server{
Addr: fmt.Sprintf(":%d", s.GenericServerRunOptions.InsecurePort),
}
@@ -226,9 +242,11 @@ func (s *ServerRunOptions) NewAPIServer(stopCh <-chan struct{}) (*apiserver.APIS
}
sch := scheme.Scheme
if err := apis.AddToScheme(sch); err != nil {
klog.Fatalf("unable add APIs to scheme: %v", err)
}
s.schemeOnce.Do(func() {
if err := apis.AddToScheme(sch); err != nil {
klog.Fatalf("unable add APIs to scheme: %v", err)
}
})
apiServer.RuntimeCache, err = runtimecache.New(apiServer.KubernetesClient.Config(), runtimecache.Options{Scheme: sch})
if err != nil {
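
The rewritten block above replaces the Redis-only wiring with a single factory call, `cache.New(s.CacheOptions, stopCh)`, keyed by `CacheOptions.Type` and falling back to `DefaultCacheType` (in-memory) in debug mode or when no cache is configured. A minimal sketch of such a type-keyed factory; the exported names mirror the diff, but the registry internals are assumptions rather than KubeSphere's actual implementation:

```go
// Package cache: an illustrative type-keyed factory, not the real package.
package cache

import "fmt"

const DefaultCacheType = "inmemory"

// Interface is a deliberately tiny stand-in for the cache contract.
type Interface interface {
	Get(key string) (string, error)
	Set(key, value string) error
}

// Options selects a backend by name, e.g. "redis" or "inmemory".
type Options struct {
	Type string
}

type factory func(opts *Options, stopCh <-chan struct{}) (Interface, error)

// factories maps type names to constructors; each backend registers
// itself here (hypothetical registration mechanism).
var factories = map[string]factory{}

// Register is called by backend packages, typically from init().
func Register(typeName string, f factory) { factories[typeName] = f }

// New builds whichever cache the options name, so callers such as
// NewAPIServer no longer hard-code a specific backend.
func New(opts *Options, stopCh <-chan struct{}) (Interface, error) {
	f, ok := factories[opts.Type]
	if !ok {
		return nil, fmt.Errorf("unknown cache type %q", opts.Type)
	}
	return f(opts, stopCh)
}
```

The `schemeOnce` guard in the same hunk pairs with the live-reload fix from the commit list: `apis.AddToScheme` mutates a shared registry, so wrapping it in `sync.Once` keeps a configuration reload from registering the APIs a second time and racing concurrent readers.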

View File

@@ -21,6 +21,7 @@ import (
"fmt"
"net/http"
"github.com/google/gops/agent"
"github.com/spf13/cobra"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
cliflag "k8s.io/component-base/cli/flag"
@@ -57,6 +58,15 @@ cluster's shared state through which all other components interact.`,
if errs := s.Validate(); len(errs) != 0 {
return utilerrors.NewAggregate(errs)
}
if s.GOPSEnabled {
// Add agent to report additional information such as the current stack trace, Go version, memory stats, etc.
// Bind to a random port on address 127.0.0.1.
if err := agent.Listen(agent.Options{}); err != nil {
klog.Fatal(err)
}
}
return Run(s, apiserverconfig.WatchConfigChange(), signals.SetupSignalHandler())
},
SilenceUsage: true,

View File

@@ -66,6 +66,33 @@ spec:
replicas:
format: int32
type: integer
resources:
description: ResourceRequirements describes the compute resource
requirements.
properties:
limits:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Limits describes the maximum amount of compute
resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
requests:
additionalProperties:
anyOf:
- type: integer
- type: string
pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
x-kubernetes-int-or-string: true
description: 'Requests describes the minimum amount of compute
resources required. If Requests is omitted for a container,
it defaults to Limits if that is explicitly specified, otherwise
to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
type: object
type: object
type: object
service:
properties:

View File

@@ -192,13 +192,7 @@ spec:
# ref: https://github.com/kubernetes/ingress-nginx/issues/4735#issuecomment-551204903
# Ideally, there should be no limits.
# https://engineering.indeedblog.com/blog/2019/12/cpu-throttling-regression-fix/
resources:
# limits:
# cpu: 100m
# memory: 90Mi
requests:
cpu: 100m
memory: 90Mi
resources: {{ toYaml .Values.deployment.resources | nindent 6 }}
# Mutually exclusive with keda autoscaling
autoscaling:

View File

@@ -26,4 +26,12 @@ service:
deployment:
annotations: {}
replicas: 1
resources:
# limits:
# cpu: 100m
# memory: 90Mi
requests:
cpu: 100m
memory: 90Mi
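
With this change the chart's resource defaults live in `values.yaml` and flow through `toYaml` into the Deployment template above. For readers constructing the same object programmatically, a sketch using the upstream Kubernetes API (illustrative only; not code from this diff):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The defaults the chart ships: requests only, no limits, following
	// the ingress-nginx CPU-throttling guidance cited in the template.
	reqs := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("100m"),
			corev1.ResourceMemory: resource.MustParse("90Mi"),
		},
	}
	fmt.Println(reqs.Requests.Cpu(), reqs.Requests.Memory()) // 100m 90Mi
}
```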

8
go.mod
View File

@@ -50,6 +50,7 @@ require (
github.com/golang/example v0.0.0-20170904185048-46695d81d1fa
github.com/google/go-cmp v0.5.6
github.com/google/go-containerregistry v0.6.0
github.com/google/gops v0.3.23
github.com/google/uuid v1.1.2
github.com/gorilla/handlers v1.4.0 // indirect
github.com/gorilla/websocket v1.4.2
@@ -81,6 +82,8 @@ require (
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/common v0.26.0
github.com/prometheus/prometheus v1.8.2-0.20200907175821-8219b442c864
github.com/shirou/gopsutil v0.0.0-20180427012116-c95755e4bcd7 // indirect
github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4 // indirect
github.com/sony/sonyflake v0.0.0-20181109022403-6d5bd6181009
github.com/speps/go-hashids v2.0.0+incompatible
github.com/spf13/cobra v1.2.1
@@ -258,6 +261,7 @@ replace (
github.com/coreos/pkg => github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f
github.com/cortexproject/cortex => github.com/cortexproject/cortex v1.3.1-0.20200901115931-255ff3306960
github.com/cpuguy83/go-md2man => github.com/cpuguy83/go-md2man v1.0.10
github.com/cpuguy83/go-md2man/v2 => github.com/cpuguy83/go-md2man/v2 v2.0.0
github.com/creack/pty => github.com/creack/pty v1.1.7
github.com/cyphar/filepath-securejoin => github.com/cyphar/filepath-securejoin v0.2.2
github.com/cznic/b => github.com/cznic/b v0.0.0-20180115125044-35e9bbe41f07
@@ -375,7 +379,6 @@ replace (
github.com/gobwas/pool => github.com/gobwas/pool v0.2.0
github.com/gobwas/ws => github.com/gobwas/ws v1.0.2
github.com/gocql/gocql => github.com/gocql/gocql v0.0.0-20200526081602-cd04bd7f22a7
github.com/gocraft/dbr => github.com/gocraft/dbr v0.0.0-20180507214907-a0fd650918f6
github.com/godbus/dbus => github.com/godbus/dbus v0.0.0-20190402143921-271e53dc4968
github.com/godror/godror => github.com/godror/godror v0.13.3
github.com/gofrs/flock => github.com/gofrs/flock v0.7.1
@@ -501,10 +504,10 @@ replace (
github.com/kr/pty => github.com/kr/pty v1.1.5
github.com/kr/text => github.com/kr/text v0.1.0
github.com/kshvakov/clickhouse => github.com/kshvakov/clickhouse v1.3.5
github.com/kubernetes-csi/external-snapshotter/client/v3 => github.com/kubernetes-csi/external-snapshotter/client/v3 v3.0.0
github.com/kubernetes-csi/external-snapshotter/client/v4 => github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
github.com/kubesphere/pvc-autoresizer => github.com/kubesphere/pvc-autoresizer v0.1.1
github.com/kubesphere/sonargo => github.com/kubesphere/sonargo v0.0.2
github.com/kubesphere/storageclass-accessor => github.com/kubesphere/storageclass-accessor v0.2.0
github.com/kylelemons/go-gypsy => github.com/kylelemons/go-gypsy v0.0.0-20160905020020-08cad365cd28
github.com/kylelemons/godebug => github.com/kylelemons/godebug v0.0.0-20160406211939-eadb3ce320cb
github.com/lann/builder => github.com/lann/builder v0.0.0-20180802200727-47ae307949d0
@@ -653,6 +656,7 @@ replace (
github.com/sergi/go-diff => github.com/sergi/go-diff v1.0.0
github.com/shopspring/decimal => github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24
github.com/shurcooL/httpfs => github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749
github.com/shurcooL/sanitized_anchor_name => github.com/shurcooL/sanitized_anchor_name v1.0.0
github.com/shurcooL/vfsgen => github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd
github.com/siebenmann/go-kstat => github.com/siebenmann/go-kstat v0.0.0-20160321171754-d34789b79745
github.com/sirupsen/logrus => github.com/sirupsen/logrus v1.4.2

12
go.sum
View File

@@ -59,6 +59,7 @@ github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/O
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/StackExchange/wmi v1.2.1/go.mod h1:rcmrprowKIVzvc+NUiLncP2uuArMWLCbu9SBzvHz7e8=
github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM=
@@ -287,6 +288,8 @@ github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc=
github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/zapr v0.4.0 h1:uc1uML3hRYL9/ZZPdgHS/n8Nzo+eaYL/Efxkkamf7OM=
github.com/go-logr/zapr v0.4.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.2.6-0.20210915003542-8b1f7f90f6b1/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-openapi/analysis v0.19.10 h1:5BHISBAXOc/aJK25irLZnx2D3s6WyYaY9D4gmuz9fdE=
github.com/go-openapi/analysis v0.19.10/go.mod h1:qmhS3VNFxBlquFJ0RGoDtylO9y4pgTAUNE9AEEMdlJQ=
github.com/go-openapi/errors v0.19.4 h1:fSGwO1tSYHFu70NKaWJt5Qh0qoBRtCm/mXS1yhf+0W0=
@@ -388,6 +391,8 @@ github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASu
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gops v0.3.23 h1:OjsHRINl5FiIyTc8jivIg4UN0GY6Nh32SL8KRbl8GQo=
github.com/google/gops v0.3.23/go.mod h1:7diIdLsqpCihPSX3fQagksT/Ku/y4RL9LHTlKyEUDl8=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20200417002340-c6e0a841f49a/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
@@ -501,6 +506,7 @@ github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dv
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kevinburke/ssh_config v0.0.0-20180830205328-81db2a75821e h1:RgQk53JHp/Cjunrr1WlsXSZpqXn+uREuHvUVcK82CV8=
github.com/kevinburke/ssh_config v0.0.0-20180830205328-81db2a75821e/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/keybase/go-ps v0.0.0-20190827175125-91aafc93ba19/go.mod h1:hY+WOq6m2FpbvyrI93sMaypsttvaIL5nhVR92dTMUcQ=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kisielk/sqlstruct v0.0.0-20150923205031-648daed35d49/go.mod h1:yyMNCyc/Ib3bDTKd379tNMpB/7/H5TjM2Y9QJ5THLbE=
@@ -741,6 +747,9 @@ github.com/segmentio/kafka-go v0.2.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfP
github.com/sercand/kuberesolver v2.4.0+incompatible/go.mod h1:lWF3GL0xptCB/vCiJPl/ZshwPsX/n4Y7u0CW9E7aQIQ=
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shirou/gopsutil v0.0.0-20180427012116-c95755e4bcd7/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/shirou/gopsutil/v3 v3.21.9/go.mod h1:YWp/H8Qs5fVmf17v7JNZzA0mPJ+mS2e9JdiUF9LlKzQ=
github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4/go.mod h1:qsXQc7+bwAM3Q1u/4XEfrquwF8Lw7D7y5cD8CuHnfIc=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
@@ -784,6 +793,8 @@ github.com/thanos-io/thanos v0.13.1-0.20200910143741-e0b7f7b32e9c/go.mod h1:1Ize
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tinylib/msgp v1.1.0/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tklauser/go-sysconf v0.3.9/go.mod h1:11DU/5sG7UexIrp/O6g35hrWzu0JxlwQ3LSFUzyeuhs=
github.com/tklauser/numcpus v0.3.0/go.mod h1:yFGUr7TUHQRAhyqBcEg0Ge34zDBAsIvJJcyE6boqnA8=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5 h1:LnC5Kc/wtumK+WB441p7ynQJzVuNRJiqddSIE3IlSEQ=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
@@ -991,6 +1002,7 @@ k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/
kubesphere.io/monitoring-dashboard v0.2.2 h1:aniATtXLgRAAvKOjd2UxWWHMh4/T7a0HoQ9bd+/bGcA=
kubesphere.io/monitoring-dashboard v0.2.2/go.mod h1:ksDjmOuoN0C0GuYp0s5X3186cPgk2asLUaO1WlEKISY=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/goversion v1.2.0/go.mod h1:Eih9y/uIBS3ulggl7KNJ09xGSLcuNaLgmvvqa07sgfo=
rsc.io/letsencrypt v0.0.1 h1:DV0d09Ne9E7UUa9ZqWktZ9L2VmybgTgfq7xlfFR/bbU=
rsc.io/letsencrypt v0.0.1/go.mod h1:buyQKZ6IXrRnB7TdkHP0RyEybLx18HHyOSoTyoOLqNY=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=

View File

@@ -39,6 +39,7 @@ find_files() {
-o -wholename '*/third_party/*' \
-o -wholename '*/vendor/*' \
-o -wholename './staging/src/kubesphere.io/client-go/*vendor/*' \
-o -wholename './staging/src/kubesphere.io/api/*/zz_generated.deepcopy.go' \
\) -prune \
\) -name '*.go'
}

1
hack/verify-gofmt.sh Normal file → Executable file
View File

@@ -44,6 +44,7 @@ find_files() {
-o -wholename '*/third_party/*' \
-o -wholename '*/vendor/*' \
-o -wholename './staging/src/kubesphere.io/client-go/*vendor/*' \
-o -wholename './staging/src/kubesphere.io/api/*/zz_generated.deepcopy.go' \
-o -wholename '*/bindata.go' \
\) -prune \
\) -name '*.go'

View File

@@ -28,20 +28,23 @@ import (
"github.com/emicklei/go-restful"
extv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
urlruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/sets"
unionauth "k8s.io/apiserver/pkg/authentication/request/union"
"k8s.io/apiserver/pkg/endpoints/handlers/responsewriters"
"k8s.io/client-go/discovery"
"k8s.io/client-go/util/retry"
"k8s.io/klog"
runtimecache "sigs.k8s.io/controller-runtime/pkg/cache"
runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
clusterv1alpha1 "kubesphere.io/api/cluster/v1alpha1"
iamv1alpha2 "kubesphere.io/api/iam/v1alpha2"
notificationv2beta1 "kubesphere.io/api/notification/v2beta1"
tenantv1alpha1 "kubesphere.io/api/tenant/v1alpha1"
typesv1beta1 "kubesphere.io/api/types/v1beta1"
runtimecache "sigs.k8s.io/controller-runtime/pkg/cache"
runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
audit "kubesphere.io/kubesphere/pkg/apiserver/auditing"
"kubesphere.io/kubesphere/pkg/apiserver/authentication/authenticators/basic"
@@ -92,6 +95,7 @@ import (
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/group"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/loginrecord"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/user"
"kubesphere.io/kubesphere/pkg/simple/client/alerting"
@@ -104,6 +108,7 @@ import (
"kubesphere.io/kubesphere/pkg/simple/client/monitoring"
"kubesphere.io/kubesphere/pkg/simple/client/s3"
"kubesphere.io/kubesphere/pkg/simple/client/sonarqube"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
"kubesphere.io/kubesphere/pkg/utils/iputil"
"kubesphere.io/kubesphere/pkg/utils/metrics"
)
@@ -158,6 +163,10 @@ type APIServer struct {
// controller-runtime client
RuntimeClient runtimeclient.Client
ClusterClient clusterclient.ClusterClients
OpenpitrixClient openpitrix.Interface
}
func (s *APIServer) PrepareRun(stopCh <-chan struct{}) error {
@@ -217,17 +226,17 @@ func (s *APIServer) installKubeSphereAPIs(stopCh <-chan struct{}) {
urlruntime.Must(configv1alpha2.AddToContainer(s.container, s.Config))
urlruntime.Must(resourcev1alpha3.AddToContainer(s.container, s.InformerFactory, s.RuntimeCache))
urlruntime.Must(monitoringv1alpha3.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.MetricsClient, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions, s.RuntimeClient, stopCh))
urlruntime.Must(meteringv1alpha1.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.RuntimeCache, s.Config.MeteringOptions, nil, s.RuntimeClient, stopCh))
urlruntime.Must(openpitrixv1.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions, stopCh))
urlruntime.Must(monitoringv1alpha3.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.MetricsClient, s.InformerFactory, s.OpenpitrixClient, s.RuntimeClient))
urlruntime.Must(meteringv1alpha1.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.InformerFactory, s.RuntimeCache, s.Config.MeteringOptions, s.OpenpitrixClient, s.RuntimeClient))
urlruntime.Must(openpitrixv1.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions, s.OpenpitrixClient))
urlruntime.Must(openpitrixv2alpha1.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions))
urlruntime.Must(operationsv1alpha2.AddToContainer(s.container, s.KubernetesClient.Kubernetes()))
urlruntime.Must(resourcesv1alpha2.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.InformerFactory,
s.KubernetesClient.Master()))
urlruntime.Must(tenantv1alpha2.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.Kubernetes(),
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, stopCh))
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, imOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, s.OpenpitrixClient))
urlruntime.Must(tenantv1alpha3.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.Kubernetes(),
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, stopCh))
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, imOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, s.OpenpitrixClient))
urlruntime.Must(terminalv1alpha2.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), rbacAuthorizer, s.KubernetesClient.Config(), s.Config.TerminalOptions))
urlruntime.Must(clusterkapisv1alpha1.AddToContainer(s.container,
s.KubernetesClient.KubeSphere(),
@@ -254,7 +263,7 @@ func (s *APIServer) installKubeSphereAPIs(stopCh <-chan struct{}) {
urlruntime.Must(alertingv1.AddToContainer(s.container, s.Config.AlertingOptions.Endpoint))
urlruntime.Must(alertingv2alpha1.AddToContainer(s.container, s.InformerFactory,
s.KubernetesClient.Prometheus(), s.AlertingClient, s.Config.AlertingOptions))
urlruntime.Must(version.AddToContainer(s.container, s.KubernetesClient.Discovery()))
urlruntime.Must(version.AddToContainer(s.container, s.KubernetesClient.Kubernetes().Discovery()))
urlruntime.Must(kubeedgev1alpha1.AddToContainer(s.container, s.Config.KubeEdgeOptions.Endpoint))
urlruntime.Must(edgeruntimev1alpha1.AddToContainer(s.container, s.Config.EdgeRuntimeOptions.Endpoint))
urlruntime.Must(notificationkapisv2beta1.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.Kubernetes(),
@@ -340,7 +349,7 @@ func (s *APIServer) buildHandlerChain(stopCh <-chan struct{}) {
handler = filters.WithAuthorization(handler, authorizers)
if s.Config.MultiClusterOptions.Enable {
clusterDispatcher := dispatch.NewClusterDispatch(s.InformerFactory.KubeSphereSharedInformerFactory().Cluster().V1alpha1().Clusters())
clusterDispatcher := dispatch.NewClusterDispatch(s.ClusterClient)
handler = filters.WithMultipleClusterDispatcher(handler, clusterDispatcher)
}
@@ -363,215 +372,237 @@ func (s *APIServer) buildHandlerChain(stopCh <-chan struct{}) {
s.Server.Handler = handler
}
func isResourceExists(apiResources []v1.APIResource, resource schema.GroupVersionResource) bool {
for _, apiResource := range apiResources {
if apiResource.Name == resource.Resource {
return true
}
}
return false
}
type informerForResourceFunc func(resource schema.GroupVersionResource) (interface{}, error)
func waitForCacheSync(discoveryClient discovery.DiscoveryInterface, sharedInformerFactory informers.GenericInformerFactory, informerForResourceFunc informerForResourceFunc, GVRs map[schema.GroupVersion][]string, stopCh <-chan struct{}) error {
for groupVersion, resourceNames := range GVRs {
var apiResourceList *v1.APIResourceList
var err error
err = retry.OnError(retry.DefaultRetry, func(err error) bool {
return !errors.IsNotFound(err)
}, func() error {
apiResourceList, err = discoveryClient.ServerResourcesForGroupVersion(groupVersion.String())
return err
})
if err != nil {
if errors.IsNotFound(err) {
klog.Warningf("group version %s does not exist in the cluster", groupVersion)
continue
}
return fmt.Errorf("failed to fetch group version resources %s: %s", groupVersion, err)
}
for _, resourceName := range resourceNames {
groupVersionResource := groupVersion.WithResource(resourceName)
if !isResourceExists(apiResourceList.APIResources, groupVersionResource) {
klog.Warningf("resource %s does not exist in the cluster", groupVersionResource)
} else {
// reflect.ValueOf(sharedInformerFactory).MethodByName("ForResource").Call([]reflect.Value{reflect.ValueOf(groupVersionResource)})
if _, err = informerForResourceFunc(groupVersionResource); err != nil {
return fmt.Errorf("failed to create informer for %s: %s", groupVersionResource, err)
}
}
}
}
sharedInformerFactory.Start(stopCh)
sharedInformerFactory.WaitForCacheSync(stopCh)
return nil
}
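
Note the inverted predicate in the retry above: NotFound is terminal (the group version is skipped with a warning), while any other discovery error is retried with backoff. A standalone sketch of that `retry.OnError` idiom under the same predicate, with a synthetic transient error standing in for the discovery call:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/util/retry"
)

func main() {
	attempts := 0
	// Retry transient failures; give up immediately on NotFound, since
	// a missing group version will not appear by retrying.
	err := retry.OnError(retry.DefaultRetry, func(err error) bool {
		return !errors.IsNotFound(err)
	}, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("transient discovery failure %d", attempts)
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```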
func (s *APIServer) waitForResourceSync(ctx context.Context) error {
klog.V(0).Info("Start cache objects")
stopCh := ctx.Done()
// resources we have to create informer first
k8sGVRs := map[schema.GroupVersion][]string{
{Group: "", Version: "v1"}: {
"namespaces",
"nodes",
"resourcequotas",
"pods",
"services",
"persistentvolumeclaims",
"persistentvolumes",
"secrets",
"configmaps",
"serviceaccounts",
},
{Group: "rbac.authorization.k8s.io", Version: "v1"}: {
"roles",
"rolebindings",
"clusterroles",
"clusterrolebindings",
},
{Group: "apps", Version: "v1"}: {
"deployments",
"daemonsets",
"replicasets",
"statefulsets",
"controllerrevisions",
},
{Group: "storage.k8s.io", Version: "v1"}: {
"storageclasses",
},
{Group: "batch", Version: "v1"}: {
"jobs",
},
{Group: "batch", Version: "v1beta1"}: {
"cronjobs",
},
{Group: "networking.k8s.io", Version: "v1"}: {
"ingresses",
"networkpolicies",
},
{Group: "autoscaling", Version: "v2beta2"}: {
"horizontalpodautoscalers",
},
}
discoveryClient := s.KubernetesClient.Kubernetes().Discovery()
_, apiResourcesList, err := discoveryClient.ServerGroupsAndResources()
if err != nil {
if err := waitForCacheSync(s.KubernetesClient.Kubernetes().Discovery(),
s.InformerFactory.KubernetesSharedInformerFactory(),
func(resource schema.GroupVersionResource) (interface{}, error) {
return s.InformerFactory.KubernetesSharedInformerFactory().ForResource(resource)
},
k8sGVRs, stopCh); err != nil {
return err
}
ksGVRs := map[schema.GroupVersion][]string{
{Group: "tenant.kubesphere.io", Version: "v1alpha1"}: {
"workspaces",
},
{Group: "tenant.kubesphere.io", Version: "v1alpha2"}: {
"workspacetemplates",
},
{Group: "iam.kubesphere.io", Version: "v1alpha2"}: {
"users",
"globalroles",
"globalrolebindings",
"groups",
"groupbindings",
"workspaceroles",
"workspacerolebindings",
"loginrecords",
},
{Group: "cluster.kubesphere.io", Version: "v1alpha1"}: {
"clusters",
},
{Group: "network.kubesphere.io", Version: "v1alpha1"}: {
"ippools",
},
{Group: "notification.kubesphere.io", Version: "v2beta1"}: {
notificationv2beta1.ResourcesPluralConfig,
notificationv2beta1.ResourcesPluralReceiver,
},
}
// skip caching devops resources if devops not enabled
if s.DevopsClient != nil {
ksGVRs[schema.GroupVersion{Group: "devops.kubesphere.io", Version: "v1alpha1"}] = []string{
"s2ibinaries",
"s2ibuildertemplates",
"s2iruns",
"s2ibuilders",
}
ksGVRs[schema.GroupVersion{Group: "devops.kubesphere.io", Version: "v1alpha3"}] = []string{
"devopsprojects",
"pipelines",
}
}
// skip caching servicemesh resources if servicemesh not enabled
if s.KubernetesClient.Istio() != nil {
ksGVRs[schema.GroupVersion{Group: "servicemesh.kubesphere.io", Version: "v1alpha2"}] = []string{
"strategies",
"servicepolicies",
}
}
// federated resources are only cached in a multi-cluster setup
if s.Config.MultiClusterOptions.Enable {
ksGVRs[typesv1beta1.SchemeGroupVersion] = []string{
typesv1beta1.ResourcePluralFederatedClusterRole,
typesv1beta1.ResourcePluralFederatedClusterRoleBindingBinding,
typesv1beta1.ResourcePluralFederatedNamespace,
typesv1beta1.ResourcePluralFederatedService,
typesv1beta1.ResourcePluralFederatedDeployment,
typesv1beta1.ResourcePluralFederatedSecret,
typesv1beta1.ResourcePluralFederatedConfigmap,
typesv1beta1.ResourcePluralFederatedStatefulSet,
typesv1beta1.ResourcePluralFederatedIngress,
typesv1beta1.ResourcePluralFederatedPersistentVolumeClaim,
typesv1beta1.ResourcePluralFederatedApplication,
}
}
if err := waitForCacheSync(s.KubernetesClient.Kubernetes().Discovery(),
s.InformerFactory.KubeSphereSharedInformerFactory(),
func(resource schema.GroupVersionResource) (interface{}, error) {
return s.InformerFactory.KubeSphereSharedInformerFactory().ForResource(resource)
},
ksGVRs, stopCh); err != nil {
return err
}
snapshotGVRs := map[schema.GroupVersion][]string{
{Group: "snapshot.storage.k8s.io", Version: "v1"}: {
"volumesnapshots",
"volumesnapshotcontents",
"volumesnapshotclasses",
},
}
if err := waitForCacheSync(s.KubernetesClient.Kubernetes().Discovery(),
s.InformerFactory.SnapshotSharedInformerFactory(), func(resource schema.GroupVersionResource) (interface{}, error) {
return s.InformerFactory.SnapshotSharedInformerFactory().ForResource(resource)
},
snapshotGVRs, stopCh); err != nil {
return err
}
apiextensionsGVRs := map[schema.GroupVersion][]string{
{Group: "apiextensions.k8s.io", Version: "v1"}: {
"customresourcedefinitions",
},
}
if err := waitForCacheSync(s.KubernetesClient.Kubernetes().Discovery(),
s.InformerFactory.ApiExtensionSharedInformerFactory(), func(resource schema.GroupVersionResource) (interface{}, error) {
return s.InformerFactory.ApiExtensionSharedInformerFactory().ForResource(resource)
},
apiextensionsGVRs, stopCh); err != nil {
return err
}
if promFactory := s.InformerFactory.PrometheusSharedInformerFactory(); promFactory != nil {
prometheusGVRs := map[schema.GroupVersion][]string{
{Group: "monitoring.coreos.com", Version: "v1"}: {
"prometheuses",
"prometheusrules",
"thanosrulers",
},
}
if err := waitForCacheSync(s.KubernetesClient.Kubernetes().Discovery(),
promFactory, func(resource schema.GroupVersionResource) (interface{}, error) {
return promFactory.ForResource(resource)
},
prometheusGVRs, stopCh); err != nil {
return err
}
}
// controller runtime cache for resources
go s.RuntimeCache.Start(ctx)
s.RuntimeCache.WaitForCacheSync(ctx)
klog.V(0).Info("Finished caching objects")
return nil
}
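Every informer factory above now funnels through a single waitForCacheSync helper instead of running its own discovery checks and informer loops. The following is a minimal sketch of such a helper with an assumed factory interface; the real helper in the PR may differ in details (for example, it may retry discovery calls), so treat this as an illustration rather than the shipped implementation.

package informers

import (
	"reflect"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/klog"
)

// genericFactory abstracts the shared informer factories used above;
// client-go style factories satisfy this interface.
type genericFactory interface {
	Start(stopCh <-chan struct{})
	WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool
}

func waitForCacheSync(discoveryClient discovery.DiscoveryInterface, factory genericFactory,
	informerForResourceFunc func(resource schema.GroupVersionResource) (interface{}, error),
	gvrs map[schema.GroupVersion][]string, stopCh <-chan struct{}) error {
	for groupVersion, resourceNames := range gvrs {
		apiResourceList, err := discoveryClient.ServerResourcesForGroupVersion(groupVersion.String())
		if err != nil {
			if apierrors.IsNotFound(err) {
				// the whole group/version is missing from this cluster; skip it
				// instead of aborting ks-apiserver start-up
				klog.Warningf("group version %s does not exist in the cluster", groupVersion)
				continue
			}
			return err
		}
		for _, resourceName := range resourceNames {
			gvr := groupVersion.WithResource(resourceName)
			exists := false
			for _, apiResource := range apiResourceList.APIResources {
				if apiResource.Name == resourceName {
					exists = true
					break
				}
			}
			if !exists {
				klog.Warningf("resource %s does not exist in the cluster", gvr)
				continue
			}
			// creating the informer registers it with the factory, so Start below runs it
			if _, err := informerForResourceFunc(gvr); err != nil {
				return err
			}
		}
	}
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	return nil
}

Because a missing group/version or resource only logs a warning here, optional CRDs such as the snapshot, prometheus-operator, and notification kinds no longer abort caching, which is the failure mode this refactoring targets.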

View File

@@ -141,6 +141,7 @@ func (b *Backend) sendEvents(events *v1alpha1.EventList) {
defer cancel()
stopCh := make(chan struct{})
skipReturnSender := false
send := func() {
ctx, cancel := context.WithTimeout(context.Background(), b.getSenderTimeout)
@@ -149,6 +150,7 @@ func (b *Backend) sendEvents(events *v1alpha1.EventList) {
select {
case <-ctx.Done():
klog.Error("Get auditing event sender timeout")
skipReturnSender = true
return
case b.senderCh <- struct{}{}:
}
@@ -182,7 +184,9 @@ func (b *Backend) sendEvents(events *v1alpha1.EventList) {
go send()
defer func() {
if !skipReturnSender {
<-b.senderCh
}
}()
select {
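For context, the leak this hunk fixes: b.senderCh is a buffered channel used as a counting semaphore, and the deferred <-b.senderCh previously ran unconditionally, so a send() that timed out before acquiring a slot would later try to give back a token it never took and block forever. Below is a self-contained sketch of the corrected pattern with simplified types; it is illustrative only (the real code lives on the auditing Backend and uses a plain bool where this sketch uses an atomic for race-free reading).

package main

import (
	"context"
	"fmt"
	"sync/atomic"
	"time"
)

// sendEvents mirrors the fixed pattern: senderCh is a counting semaphore,
// and the token is only returned when it was actually acquired.
func sendEvents(senderCh chan struct{}, getSenderTimeout time.Duration) {
	var skipReturnSender atomic.Bool // stands in for the PR's plain bool flag
	stopCh := make(chan struct{})

	send := func() {
		ctx, cancel := context.WithTimeout(context.Background(), getSenderTimeout)
		defer cancel()
		select {
		case <-ctx.Done():
			fmt.Println("get sender timeout")
			skipReturnSender.Store(true) // no slot acquired, nothing to return
			return
		case senderCh <- struct{}{}: // acquired a sender slot
		}
		// ... deliver the events here ...
		close(stopCh)
	}

	go send()
	defer func() {
		if !skipReturnSender.Load() {
			<-senderCh // return the slot only if send() really took one
		}
	}()
	select {
	case <-stopCh: // finished
	case <-time.After(getSenderTimeout + time.Second): // give up waiting
	}
}

func main() {
	senderCh := make(chan struct{}, 1) // one concurrent sender
	senderCh <- struct{}{}             // occupy the only slot to force a timeout
	sendEvents(senderCh, 100*time.Millisecond)
	<-senderCh // release the slot; without the fix the first call would corrupt the count
	sendEvents(senderCh, 100*time.Millisecond)
	fmt.Println("done")
}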

View File

@@ -33,8 +33,8 @@ import (
"k8s.io/apimachinery/pkg/types"
"k8s.io/apiserver/pkg/apis/audit"
"k8s.io/klog"
devopsv1alpha3 "kubesphere.io/api/devops/v1alpha3"
"kubesphere.io/api/iam/v1alpha2"
auditv1alpha1 "kubesphere.io/kubesphere/pkg/apiserver/auditing/v1alpha1"
"kubesphere.io/kubesphere/pkg/apiserver/query"
@@ -192,7 +192,7 @@ func (a *auditing) LogRequestObject(req *http.Request, info *request.RequestInfo
}
}
if a.needAnalyzeRequestBody(e, req) {
body, err := ioutil.ReadAll(req.Body)
if err != nil {
klog.Error(err)
@@ -212,11 +212,45 @@ func (a *auditing) LogRequestObject(req *http.Request, info *request.RequestInfo
e.ObjectRef.Name = obj.Name
}
}
// for recording disable and enable user
if e.ObjectRef.Resource == "users" && e.Verb == "update" {
u := &v1alpha2.User{}
if err := json.Unmarshal(body, u); err == nil {
if u.Status.State == v1alpha2.UserActive {
e.Verb = "enable"
} else if u.Status.State == v1alpha2.UserDisabled {
e.Verb = "disable"
}
}
}
}
return e
}
func (a *auditing) needAnalyzeRequestBody(e *auditv1alpha1.Event, req *http.Request) bool {
if req.ContentLength <= 0 {
return false
}
if e.Level.GreaterOrEqual(audit.LevelRequest) {
return true
}
if e.Verb == "create" {
return true
}
// for recording disable and enable user
if e.ObjectRef.Resource == "users" && e.Verb == "update" {
return true
}
return false
}
func (a *auditing) LogResponseObject(e *auditv1alpha1.Event, resp *ResponseCapture) {
e.StageTimestamp = metav1.NowMicro()
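The new needAnalyzeRequestBody helper pulls the body-reading policy into one place. Here is a quick re-implementation with simplified stand-in types (the real function takes the auditing event and *http.Request shown above) that makes the decision table explicit:

package main

import "fmt"

type event struct {
	level    int // stands in for audit.Level; higher means more verbose
	verb     string
	resource string
}

const levelMetadata, levelRequest = 1, 2

// needAnalyzeRequestBody re-implements the decision above with simplified types.
func needAnalyzeRequestBody(e event, contentLength int64) bool {
	if contentLength <= 0 {
		return false
	}
	if e.level >= levelRequest {
		return true
	}
	if e.verb == "create" {
		return true
	}
	// for recording disable and enable user
	if e.resource == "users" && e.verb == "update" {
		return true
	}
	return false
}

func main() {
	fmt.Println(needAnalyzeRequestBody(event{levelMetadata, "get", "pods"}, 0))      // false: empty body
	fmt.Println(needAnalyzeRequestBody(event{levelMetadata, "create", "pods"}, 64))  // true: creates carry the object
	fmt.Println(needAnalyzeRequestBody(event{levelMetadata, "update", "users"}, 64)) // true: detect enable/disable
	fmt.Println(needAnalyzeRequestBody(event{levelMetadata, "update", "pods"}, 64))  // false: ordinary update
	fmt.Println(needAnalyzeRequestBody(event{levelRequest, "update", "pods"}, 64))   // true: request-level auditing
}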

View File

@@ -60,15 +60,19 @@ func (t *tokenAuthenticator) AuthenticateToken(ctx context.Context, token string
}, true, nil
}
userInfo, err := t.userLister.Get(verified.User.GetName())
if err != nil {
return nil, false, err
}
// AuthLimitExceeded state should be ignored
if userInfo.Status.State == iamv1alpha2.UserDisabled {
return nil, false, auth.AccountIsNotActiveError
}
return &authenticator.Response{
User: &user.DefaultInfo{
Name: userInfo.GetName(),
Groups: append(userInfo.Spec.Groups, user.AllAuthenticated),
},
}, true, nil
}
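The added state check means a token that verifies cryptographically can still be rejected when the account has been disabled, while the AuthLimitExceeded state is deliberately let through (see the comment above). Restated with simplified types, purely as an illustration (the real check uses iamv1alpha2 constants and auth.AccountIsNotActiveError):

package main

import (
	"errors"
	"fmt"
)

type userState string

const (
	userActive        userState = "Active"
	userDisabled      userState = "Disabled"
	authLimitExceeded userState = "AuthLimitExceeded"
)

var errAccountNotActive = errors.New("account is not active")

// checkState mirrors the added guard: only an explicitly disabled account is
// rejected; AuthLimitExceeded (and any other state) still authenticates.
func checkState(state userState) error {
	if state == userDisabled {
		return errAccountNotActive
	}
	return nil
}

func main() {
	for _, s := range []userState{userActive, userDisabled, authLimitExceeded} {
		fmt.Println(s, "->", checkState(s))
	}
}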

View File

@@ -45,7 +45,7 @@ func init() {
type ldapProvider struct {
// Host and optional port of the LDAP server in the form "host:port".
// If the port is not supplied, 389 for insecure or StartTLS connections, 636
Host string `json:"host,omitempty" yaml:"host"`
// Timeout duration when reading data from remote server. Default to 15s.
ReadTimeout int `json:"readTimeout" yaml:"readTimeout"`
// If specified, connections will use the ldaps:// protocol
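The tag change above is a genuine bug fix, not cosmetics: with `yaml:"managerDN"` on Host, unmarshalling a provider config filled Host from a managerDN key and silently ignored the host key, because yaml.v2 matches fields by their tag whenever one is present. A minimal demonstration, assuming a plain yaml.v2 unmarshal of a hypothetical config snippet:

package main

import (
	"fmt"

	"gopkg.in/yaml.v2"
)

type ldapProvider struct {
	Host        string `yaml:"host"` // was `yaml:"managerDN"`, so `host:` keys were ignored
	ReadTimeout int    `yaml:"readTimeout"`
}

func main() {
	conf := []byte("host: ldap.example.org:389\nreadTimeout: 15\n")
	var p ldapProvider
	if err := yaml.Unmarshal(conf, &p); err != nil {
		panic(err)
	}
	// with the old tag, p.Host would be "" here
	fmt.Printf("%+v\n", p) // prints {Host:ldap.example.org:389 ReadTimeout:15}
}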

View File

@@ -160,7 +160,7 @@ type Config struct {
ServiceMeshOptions *servicemesh.Options `json:"servicemesh,omitempty" yaml:"servicemesh,omitempty" mapstructure:"servicemesh"`
NetworkOptions *network.Options `json:"network,omitempty" yaml:"network,omitempty" mapstructure:"network"`
LdapOptions *ldap.Options `json:"-,omitempty" yaml:"ldap,omitempty" mapstructure:"ldap"`
CacheOptions *cache.Options `json:"cache,omitempty" yaml:"cache,omitempty" mapstructure:"cache"`
S3Options *s3.Options `json:"s3,omitempty" yaml:"s3,omitempty" mapstructure:"s3"`
OpenPitrixOptions *openpitrix.Options `json:"openpitrix,omitempty" yaml:"openpitrix,omitempty" mapstructure:"openpitrix"`
MonitoringOptions *prometheus.Options `json:"monitoring,omitempty" yaml:"monitoring,omitempty" mapstructure:"monitoring"`
@@ -189,7 +189,7 @@ func New() *Config {
ServiceMeshOptions: servicemesh.NewServiceMeshOptions(),
NetworkOptions: network.NewNetworkOptions(),
LdapOptions: ldap.NewOptions(),
CacheOptions: cache.NewCacheOptions(),
S3Options: s3.NewS3Options(),
OpenPitrixOptions: openpitrix.NewOptions(),
MonitoringOptions: prometheus.NewPrometheusOptions(),
@@ -292,8 +292,8 @@ func (conf *Config) ToMap() map[string]bool {
// Remove invalid options before serializing to json or yaml
func (conf *Config) stripEmptyOptions() {
if conf.CacheOptions != nil && conf.CacheOptions.Type == "" {
conf.CacheOptions = nil
}
if conf.DevopsOptions != nil && conf.DevopsOptions.Host == "" {
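Renaming RedisOptions to CacheOptions follows the new dynamic cache layer: a type field selects the backend and a free-form options map configures it, which is why stripEmptyOptions above now only tests Type. A hypothetical sketch of a type-keyed backend registry (the actual registration API in the PR may differ):

package cache

import "fmt"

// Interface is a deliberately tiny stand-in for the cache abstraction.
type Interface interface {
	Get(key string) (string, error)
	Set(key, value string) error
}

// Options mirrors the new config shape: a backend name plus free-form settings.
type Options struct {
	Type    string                 `json:"type" yaml:"type"`
	Options map[string]interface{} `json:"options" yaml:"options"`
}

// factories maps a backend name to its constructor; backends register themselves.
var factories = map[string]func(map[string]interface{}) (Interface, error){}

func Register(name string, factory func(map[string]interface{}) (Interface, error)) {
	factories[name] = factory
}

// New builds the cache selected by opts.Type, e.g. "redis" or "inmemory".
func New(opts *Options) (Interface, error) {
	factory, ok := factories[opts.Type]
	if !ok {
		return nil, fmt.Errorf("unknown cache type %q", opts.Type)
	}
	return factory(opts.Options)
}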

View File

@@ -88,11 +88,9 @@ func newTestConfig() (*Config, error) {
MaxCap: 100,
PoolName: "ldap",
},
CacheOptions: &cache.Options{
Type: "redis",
Options: map[string]interface{}{},
},
S3Options: &s3.Options{
Endpoint: "http://minio.openpitrix-system.svc",
@@ -236,9 +234,6 @@ func TestGet(t *testing.T) {
saveTestConfig(t, conf)
defer cleanTestConfig(t)
conf2, err := TryLoadFromDisk()
if err != nil {
t.Fatal(err)
@@ -251,7 +246,7 @@ func TestGet(t *testing.T) {
func TestStripEmptyOptions(t *testing.T) {
var config Config
config.CacheOptions = &cache.Options{Type: ""}
config.DevopsOptions = &jenkins.Options{Host: ""}
config.MonitoringOptions = &prometheus.Options{Endpoint: ""}
config.SonarQubeOptions = &sonarqube.Options{Host: ""}
@@ -284,7 +279,7 @@ func TestStripEmptyOptions(t *testing.T) {
config.stripEmptyOptions()
if config.CacheOptions != nil ||
config.DevopsOptions != nil ||
config.MonitoringOptions != nil ||
config.SonarQubeOptions != nil ||

View File

@@ -30,7 +30,6 @@ import (
clusterv1alpha1 "kubesphere.io/api/cluster/v1alpha1"
"kubesphere.io/kubesphere/pkg/apiserver/request"
clusterinformer "kubesphere.io/kubesphere/pkg/client/informers/externalversions/cluster/v1alpha1"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
)
@@ -47,8 +46,8 @@ type clusterDispatch struct {
clusterclient.ClusterClients
}
func NewClusterDispatch(cc clusterclient.ClusterClients) Dispatcher {
return &clusterDispatch{cc}
}
// Dispatch dispatches requests to the designated cluster

View File

@@ -246,8 +246,6 @@ func (r *RequestInfoFactory) NewRequestInfo(req *http.Request) (*RequestInfo, er
// parsing successful, so we now know the proper value for .Parts
requestInfo.Parts = currentParts
// parts look like: resource/resourceName/subresource/other/stuff/we/don't/interpret
switch {
case len(requestInfo.Parts) >= 3 && !specialVerbsNoSubresources.Has(requestInfo.Verb):
@@ -260,6 +258,8 @@ func (r *RequestInfoFactory) NewRequestInfo(req *http.Request) (*RequestInfo, er
requestInfo.Resource = requestInfo.Parts[0]
}
requestInfo.ResourceScope = r.resolveResourceScope(requestInfo)
// if there's no name on the request and we thought it was a get before, then the actual verb is a list or a watch
if len(requestInfo.Name) == 0 && requestInfo.Verb == "get" {
opts := metainternalversion.ListOptions{}

View File

@@ -1,142 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
"context"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
labels "k8s.io/apimachinery/pkg/labels"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
testing "k8s.io/client-go/testing"
v1beta1 "kubesphere.io/api/types/v1beta1"
)
// FakeFederatedResourceQuotas implements FederatedResourceQuotaInterface
type FakeFederatedResourceQuotas struct {
Fake *FakeTypesV1beta1
ns string
}
var federatedresourcequotasResource = schema.GroupVersionResource{Group: "types.kubefed.io", Version: "v1beta1", Resource: "federatedresourcequotas"}
var federatedresourcequotasKind = schema.GroupVersionKind{Group: "types.kubefed.io", Version: "v1beta1", Kind: "FederatedResourceQuota"}
// Get takes name of the federatedResourceQuota, and returns the corresponding federatedResourceQuota object, and an error if there is any.
func (c *FakeFederatedResourceQuotas) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewGetAction(federatedresourcequotasResource, c.ns, name), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// List takes label and field selectors, and returns the list of FederatedResourceQuotas that match those selectors.
func (c *FakeFederatedResourceQuotas) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.FederatedResourceQuotaList, err error) {
obj, err := c.Fake.
Invokes(testing.NewListAction(federatedresourcequotasResource, federatedresourcequotasKind, c.ns, opts), &v1beta1.FederatedResourceQuotaList{})
if obj == nil {
return nil, err
}
label, _, _ := testing.ExtractFromListOptions(opts)
if label == nil {
label = labels.Everything()
}
list := &v1beta1.FederatedResourceQuotaList{ListMeta: obj.(*v1beta1.FederatedResourceQuotaList).ListMeta}
for _, item := range obj.(*v1beta1.FederatedResourceQuotaList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
}
}
return list, err
}
// Watch returns a watch.Interface that watches the requested federatedResourceQuotas.
func (c *FakeFederatedResourceQuotas) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
return c.Fake.
InvokesWatch(testing.NewWatchAction(federatedresourcequotasResource, c.ns, opts))
}
// Create takes the representation of a federatedResourceQuota and creates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *FakeFederatedResourceQuotas) Create(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.CreateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewCreateAction(federatedresourcequotasResource, c.ns, federatedResourceQuota), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// Update takes the representation of a federatedResourceQuota and updates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *FakeFederatedResourceQuotas) Update(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewUpdateAction(federatedresourcequotasResource, c.ns, federatedResourceQuota), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *FakeFederatedResourceQuotas) UpdateStatus(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (*v1beta1.FederatedResourceQuota, error) {
obj, err := c.Fake.
Invokes(testing.NewUpdateSubresourceAction(federatedresourcequotasResource, "status", c.ns, federatedResourceQuota), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// Delete takes name of the federatedResourceQuota and deletes it. Returns an error if one occurs.
func (c *FakeFederatedResourceQuotas) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewDeleteAction(federatedresourcequotasResource, c.ns, name), &v1beta1.FederatedResourceQuota{})
return err
}
// DeleteCollection deletes a collection of objects.
func (c *FakeFederatedResourceQuotas) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
action := testing.NewDeleteCollectionAction(federatedresourcequotasResource, c.ns, listOpts)
_, err := c.Fake.Invokes(action, &v1beta1.FederatedResourceQuotaList{})
return err
}
// Patch applies the patch and returns the patched federatedResourceQuota.
func (c *FakeFederatedResourceQuotas) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewPatchSubresourceAction(federatedresourcequotasResource, c.ns, name, pt, data, subresources...), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}

View File

@@ -76,10 +76,6 @@ func (c *FakeTypesV1beta1) FederatedPersistentVolumeClaims(namespace string) v1b
return &FakeFederatedPersistentVolumeClaims{c, namespace}
}
func (c *FakeTypesV1beta1) FederatedResourceQuotas(namespace string) v1beta1.FederatedResourceQuotaInterface {
return &FakeFederatedResourceQuotas{c, namespace}
}
func (c *FakeTypesV1beta1) FederatedSecrets(namespace string) v1beta1.FederatedSecretInterface {
return &FakeFederatedSecrets{c, namespace}
}

View File

@@ -1,195 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1beta1
import (
"context"
"time"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
rest "k8s.io/client-go/rest"
v1beta1 "kubesphere.io/api/types/v1beta1"
scheme "kubesphere.io/kubesphere/pkg/client/clientset/versioned/scheme"
)
// FederatedResourceQuotasGetter has a method to return a FederatedResourceQuotaInterface.
// A group's client should implement this interface.
type FederatedResourceQuotasGetter interface {
FederatedResourceQuotas(namespace string) FederatedResourceQuotaInterface
}
// FederatedResourceQuotaInterface has methods to work with FederatedResourceQuota resources.
type FederatedResourceQuotaInterface interface {
Create(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.CreateOptions) (*v1beta1.FederatedResourceQuota, error)
Update(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (*v1beta1.FederatedResourceQuota, error)
UpdateStatus(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (*v1beta1.FederatedResourceQuota, error)
Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error
Get(ctx context.Context, name string, opts v1.GetOptions) (*v1beta1.FederatedResourceQuota, error)
List(ctx context.Context, opts v1.ListOptions) (*v1beta1.FederatedResourceQuotaList, error)
Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error)
Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.FederatedResourceQuota, err error)
FederatedResourceQuotaExpansion
}
// federatedResourceQuotas implements FederatedResourceQuotaInterface
type federatedResourceQuotas struct {
client rest.Interface
ns string
}
// newFederatedResourceQuotas returns a FederatedResourceQuotas
func newFederatedResourceQuotas(c *TypesV1beta1Client, namespace string) *federatedResourceQuotas {
return &federatedResourceQuotas{
client: c.RESTClient(),
ns: namespace,
}
}
// Get takes name of the federatedResourceQuota, and returns the corresponding federatedResourceQuota object, and an error if there is any.
func (c *federatedResourceQuotas) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Get().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(name).
VersionedParams(&options, scheme.ParameterCodec).
Do(ctx).
Into(result)
return
}
// List takes label and field selectors, and returns the list of FederatedResourceQuotas that match those selectors.
func (c *federatedResourceQuotas) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.FederatedResourceQuotaList, err error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
result = &v1beta1.FederatedResourceQuotaList{}
err = c.client.Get().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Do(ctx).
Into(result)
return
}
// Watch returns a watch.Interface that watches the requested federatedResourceQuotas.
func (c *federatedResourceQuotas) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
opts.Watch = true
return c.client.Get().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Watch(ctx)
}
// Create takes the representation of a federatedResourceQuota and creates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *federatedResourceQuotas) Create(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.CreateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Post().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&opts, scheme.ParameterCodec).
Body(federatedResourceQuota).
Do(ctx).
Into(result)
return
}
// Update takes the representation of a federatedResourceQuota and updates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *federatedResourceQuotas) Update(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Put().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(federatedResourceQuota.Name).
VersionedParams(&opts, scheme.ParameterCodec).
Body(federatedResourceQuota).
Do(ctx).
Into(result)
return
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *federatedResourceQuotas) UpdateStatus(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Put().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(federatedResourceQuota.Name).
SubResource("status").
VersionedParams(&opts, scheme.ParameterCodec).
Body(federatedResourceQuota).
Do(ctx).
Into(result)
return
}
// Delete takes name of the federatedResourceQuota and deletes it. Returns an error if one occurs.
func (c *federatedResourceQuotas) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
return c.client.Delete().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(name).
Body(&opts).
Do(ctx).
Error()
}
// DeleteCollection deletes a collection of objects.
func (c *federatedResourceQuotas) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
var timeout time.Duration
if listOpts.TimeoutSeconds != nil {
timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second
}
return c.client.Delete().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&listOpts, scheme.ParameterCodec).
Timeout(timeout).
Body(&opts).
Do(ctx).
Error()
}
// Patch applies the patch and returns the patched federatedResourceQuota.
func (c *federatedResourceQuotas) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Patch(pt).
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(name).
SubResource(subresources...).
VersionedParams(&opts, scheme.ParameterCodec).
Body(data).
Do(ctx).
Into(result)
return
}

View File

@@ -42,8 +42,6 @@ type FederatedNamespaceExpansion interface{}
type FederatedPersistentVolumeClaimExpansion interface{}
type FederatedResourceQuotaExpansion interface{}
type FederatedSecretExpansion interface{}
type FederatedServiceExpansion interface{}

View File

@@ -38,7 +38,6 @@ type TypesV1beta1Interface interface {
FederatedLimitRangesGetter
FederatedNamespacesGetter
FederatedPersistentVolumeClaimsGetter
FederatedResourceQuotasGetter
FederatedSecretsGetter
FederatedServicesGetter
FederatedStatefulSetsGetter
@@ -97,10 +96,6 @@ func (c *TypesV1beta1Client) FederatedPersistentVolumeClaims(namespace string) F
return newFederatedPersistentVolumeClaims(c, namespace)
}
func (c *TypesV1beta1Client) FederatedResourceQuotas(namespace string) FederatedResourceQuotaInterface {
return newFederatedResourceQuotas(c, namespace)
}
func (c *TypesV1beta1Client) FederatedSecrets(namespace string) FederatedSecretInterface {
return newFederatedSecrets(c, namespace)
}

View File

@@ -188,8 +188,6 @@ func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedNamespaces().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedpersistentvolumeclaims"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedPersistentVolumeClaims().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedresourcequotas"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedResourceQuotas().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedsecrets"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedSecrets().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedservices"):

View File

@@ -1,90 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package v1beta1
import (
"context"
time "time"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
watch "k8s.io/apimachinery/pkg/watch"
cache "k8s.io/client-go/tools/cache"
typesv1beta1 "kubesphere.io/api/types/v1beta1"
versioned "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
internalinterfaces "kubesphere.io/kubesphere/pkg/client/informers/externalversions/internalinterfaces"
v1beta1 "kubesphere.io/kubesphere/pkg/client/listers/types/v1beta1"
)
// FederatedResourceQuotaInformer provides access to a shared informer and lister for
// FederatedResourceQuotas.
type FederatedResourceQuotaInformer interface {
Informer() cache.SharedIndexInformer
Lister() v1beta1.FederatedResourceQuotaLister
}
type federatedResourceQuotaInformer struct {
factory internalinterfaces.SharedInformerFactory
tweakListOptions internalinterfaces.TweakListOptionsFunc
namespace string
}
// NewFederatedResourceQuotaInformer constructs a new informer for FederatedResourceQuota type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewFederatedResourceQuotaInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer {
return NewFilteredFederatedResourceQuotaInformer(client, namespace, resyncPeriod, indexers, nil)
}
// NewFilteredFederatedResourceQuotaInformer constructs a new informer for FederatedResourceQuota type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewFilteredFederatedResourceQuotaInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
return cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.TypesV1beta1().FederatedResourceQuotas(namespace).List(context.TODO(), options)
},
WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.TypesV1beta1().FederatedResourceQuotas(namespace).Watch(context.TODO(), options)
},
},
&typesv1beta1.FederatedResourceQuota{},
resyncPeriod,
indexers,
)
}
func (f *federatedResourceQuotaInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
return NewFilteredFederatedResourceQuotaInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions)
}
func (f *federatedResourceQuotaInformer) Informer() cache.SharedIndexInformer {
return f.factory.InformerFor(&typesv1beta1.FederatedResourceQuota{}, f.defaultInformer)
}
func (f *federatedResourceQuotaInformer) Lister() v1beta1.FederatedResourceQuotaLister {
return v1beta1.NewFederatedResourceQuotaLister(f.Informer().GetIndexer())
}

View File

@@ -48,8 +48,6 @@ type Interface interface {
FederatedNamespaces() FederatedNamespaceInformer
// FederatedPersistentVolumeClaims returns a FederatedPersistentVolumeClaimInformer.
FederatedPersistentVolumeClaims() FederatedPersistentVolumeClaimInformer
// FederatedResourceQuotas returns a FederatedResourceQuotaInformer.
FederatedResourceQuotas() FederatedResourceQuotaInformer
// FederatedSecrets returns a FederatedSecretInformer.
FederatedSecrets() FederatedSecretInformer
// FederatedServices returns a FederatedServiceInformer.
@@ -129,11 +127,6 @@ func (v *version) FederatedPersistentVolumeClaims() FederatedPersistentVolumeCla
return &federatedPersistentVolumeClaimInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
}
// FederatedResourceQuotas returns a FederatedResourceQuotaInformer.
func (v *version) FederatedResourceQuotas() FederatedResourceQuotaInformer {
return &federatedResourceQuotaInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
}
// FederatedSecrets returns a FederatedSecretInformer.
func (v *version) FederatedSecrets() FederatedSecretInformer {
return &federatedSecretInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}

View File

@@ -106,14 +106,6 @@ type FederatedPersistentVolumeClaimListerExpansion interface{}
// FederatedPersistentVolumeClaimNamespaceLister.
type FederatedPersistentVolumeClaimNamespaceListerExpansion interface{}
// FederatedResourceQuotaListerExpansion allows custom methods to be added to
// FederatedResourceQuotaLister.
type FederatedResourceQuotaListerExpansion interface{}
// FederatedResourceQuotaNamespaceListerExpansion allows custom methods to be added to
// FederatedResourceQuotaNamespaceLister.
type FederatedResourceQuotaNamespaceListerExpansion interface{}
// FederatedSecretListerExpansion allows custom methods to be added to
// FederatedSecretLister.
type FederatedSecretListerExpansion interface{}

View File

@@ -1,99 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1beta1
import (
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/tools/cache"
v1beta1 "kubesphere.io/api/types/v1beta1"
)
// FederatedResourceQuotaLister helps list FederatedResourceQuotas.
// All objects returned here must be treated as read-only.
type FederatedResourceQuotaLister interface {
// List lists all FederatedResourceQuotas in the indexer.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error)
// FederatedResourceQuotas returns an object that can list and get FederatedResourceQuotas.
FederatedResourceQuotas(namespace string) FederatedResourceQuotaNamespaceLister
FederatedResourceQuotaListerExpansion
}
// federatedResourceQuotaLister implements the FederatedResourceQuotaLister interface.
type federatedResourceQuotaLister struct {
indexer cache.Indexer
}
// NewFederatedResourceQuotaLister returns a new FederatedResourceQuotaLister.
func NewFederatedResourceQuotaLister(indexer cache.Indexer) FederatedResourceQuotaLister {
return &federatedResourceQuotaLister{indexer: indexer}
}
// List lists all FederatedResourceQuotas in the indexer.
func (s *federatedResourceQuotaLister) List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error) {
err = cache.ListAll(s.indexer, selector, func(m interface{}) {
ret = append(ret, m.(*v1beta1.FederatedResourceQuota))
})
return ret, err
}
// FederatedResourceQuotas returns an object that can list and get FederatedResourceQuotas.
func (s *federatedResourceQuotaLister) FederatedResourceQuotas(namespace string) FederatedResourceQuotaNamespaceLister {
return federatedResourceQuotaNamespaceLister{indexer: s.indexer, namespace: namespace}
}
// FederatedResourceQuotaNamespaceLister helps list and get FederatedResourceQuotas.
// All objects returned here must be treated as read-only.
type FederatedResourceQuotaNamespaceLister interface {
// List lists all FederatedResourceQuotas in the indexer for a given namespace.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error)
// Get retrieves the FederatedResourceQuota from the indexer for a given namespace and name.
// Objects returned here must be treated as read-only.
Get(name string) (*v1beta1.FederatedResourceQuota, error)
FederatedResourceQuotaNamespaceListerExpansion
}
// federatedResourceQuotaNamespaceLister implements the FederatedResourceQuotaNamespaceLister
// interface.
type federatedResourceQuotaNamespaceLister struct {
indexer cache.Indexer
namespace string
}
// List lists all FederatedResourceQuotas in the indexer for a given namespace.
func (s federatedResourceQuotaNamespaceLister) List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error) {
err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) {
ret = append(ret, m.(*v1beta1.FederatedResourceQuota))
})
return ret, err
}
// Get retrieves the FederatedResourceQuota from the indexer for a given namespace and name.
func (s federatedResourceQuotaNamespaceLister) Get(name string) (*v1beta1.FederatedResourceQuota, error) {
obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name)
if err != nil {
return nil, err
}
if !exists {
return nil, errors.NewNotFound(v1beta1.Resource("federatedresourcequota"), name)
}
return obj.(*v1beta1.FederatedResourceQuota), nil
}

View File

@@ -196,13 +196,13 @@ func newDeployments(deploymentName, namespace string, labels map[string]string,
return deployment
}
func newService(serviceName, namespace string, labels map[string]string) *corev1.Service {
labels["app"] = serviceName
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: serviceName,
Namespace: namespace,
Labels: labels,
Annotations: map[string]string{
"servicemesh.kubesphere.io/enabled": "true",

View File

@@ -25,20 +25,18 @@ import (
"fmt"
"net/http"
"reflect"
"sync"
"strings"
"time"
"gopkg.in/yaml.v2"
v1 "k8s.io/api/core/v1"
apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
coreinformers "k8s.io/client-go/informers/core/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/kubernetes/scheme"
corev1 "k8s.io/client-go/kubernetes/typed/core/v1"
@@ -46,17 +44,18 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/retry"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog"
fedv1b1 "sigs.k8s.io/kubefed/pkg/apis/core/v1beta1"
clusterv1alpha1 "kubesphere.io/api/cluster/v1alpha1"
iamv1alpha2 "kubesphere.io/api/iam/v1alpha2"
"kubesphere.io/kubesphere/pkg/apiserver/config"
clusterclient "kubesphere.io/kubesphere/pkg/client/clientset/versioned/typed/cluster/v1alpha1"
kubesphere "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
clusterinformer "kubesphere.io/kubesphere/pkg/client/informers/externalversions/cluster/v1alpha1"
clusterlister "kubesphere.io/kubesphere/pkg/client/listers/cluster/v1alpha1"
iamv1alpha2listers "kubesphere.io/kubesphere/pkg/client/listers/iam/v1alpha2"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/simple/client/multicluster"
"kubesphere.io/kubesphere/pkg/utils/k8sutil"
@@ -125,98 +124,71 @@ var hostCluster = &clusterv1alpha1.Cluster{
},
}
// ClusterData stores cluster client
type clusterData struct {
// cached rest.Config
config *rest.Config
// cached kubernetes client, rebuild once cluster changed
client kubernetes.Interface
// cached kubeconfig
cachedKubeconfig []byte
// cached transport, used to proxy kubesphere version request
transport http.RoundTripper
}
type clusterController struct {
eventBroadcaster record.EventBroadcaster
eventRecorder record.EventRecorder
// build this only for host cluster
k8sClient kubernetes.Interface
hostConfig *rest.Config
ksClient kubesphere.Interface
clusterLister clusterlister.ClusterLister
userLister iamv1alpha2listers.UserLister
clusterHasSynced cache.InformerSynced
queue workqueue.RateLimitingInterface
workerLoopPeriod time.Duration
mu sync.RWMutex
clusterMap map[string]*clusterData
resyncPeriod time.Duration
hostClusterName string
}
func NewClusterController(
k8sClient kubernetes.Interface,
ksClient kubesphere.Interface,
config *rest.Config,
clusterInformer clusterinformer.ClusterInformer,
userLister iamv1alpha2listers.UserLister,
resyncPeriod time.Duration,
hostClusterName string,
) *clusterController {
broadcaster := record.NewBroadcaster()
broadcaster.StartLogging(func(format string, args ...interface{}) {
klog.Info(fmt.Sprintf(format, args))
})
broadcaster.StartRecordingToSink(&corev1.EventSinkImpl{Interface: k8sClient.CoreV1().Events("")})
recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "cluster-controller"})
c := &clusterController{
eventBroadcaster: broadcaster,
eventRecorder: recorder,
k8sClient: k8sClient,
ksClient: ksClient,
hostConfig: config,
queue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "cluster"),
workerLoopPeriod: time.Second,
clusterMap: make(map[string]*clusterData),
resyncPeriod: resyncPeriod,
hostClusterName: hostClusterName,
userLister: userLister,
}
c.clusterLister = clusterInformer.Lister()
c.clusterHasSynced = clusterInformer.Informer().HasSynced
clusterInformer.Informer().AddEventHandlerWithResyncPeriod(cache.ResourceEventHandlerFuncs{
AddFunc: c.enqueueCluster,
UpdateFunc: func(oldObj, newObj interface{}) {
oldCluster := oldObj.(*clusterv1alpha1.Cluster)
newCluster := newObj.(*clusterv1alpha1.Cluster)
if !reflect.DeepEqual(oldCluster.Spec, newCluster.Spec) || newCluster.DeletionTimestamp != nil {
c.enqueueCluster(newObj)
}
},
DeleteFunc: c.enqueueCluster,
}, resyncPeriod)
return c
@@ -247,10 +219,9 @@ func (c *clusterController) Run(workers int, stopCh <-chan struct{}) error {
klog.Errorf("Error create host cluster, error %v", err)
}
if err := c.resyncClusters(); err != nil {
klog.Errorf("failed to reconcile cluster ready status, err: %v", err)
}
}, c.resyncPeriod, stopCh)
<-stopCh
@@ -275,58 +246,6 @@ func (c *clusterController) processNextItem() bool {
return true
}
func buildClusterData(kubeconfig []byte) (*clusterData, error) {
// prepare for
clientConfig, err := clientcmd.NewClientConfigFromBytes(kubeconfig)
if err != nil {
klog.Errorf("Unable to create client config from kubeconfig bytes, %#v", err)
return nil, err
}
clusterConfig, err := clientConfig.ClientConfig()
if err != nil {
klog.Errorf("Failed to get client config, %#v", err)
return nil, err
}
transport, err := rest.TransportFor(clusterConfig)
if err != nil {
klog.Errorf("Failed to create transport, %#v", err)
return nil, err
}
clientSet, err := kubernetes.NewForConfig(clusterConfig)
if err != nil {
klog.Errorf("Failed to create ClientSet from config, %#v", err)
return nil, err
}
return &clusterData{
cachedKubeconfig: kubeconfig,
config: clusterConfig,
client: clientSet,
transport: transport,
}, nil
}
func (c *clusterController) syncStatus() error {
clusters, err := c.clusterLister.List(labels.Everything())
if err != nil {
return err
}
for _, cluster := range clusters {
key, err := cache.MetaNamespaceKeyFunc(cluster)
if err != nil {
return err
}
c.queue.AddRateLimited(key)
}
return nil
}
// reconcileHostCluster will create a host cluster if there are no clusters labeled 'cluster-role.kubesphere.io/host'
func (c *clusterController) reconcileHostCluster() error {
clusters, err := c.clusterLister.List(labels.SelectorFromSet(labels.Set{clusterv1alpha1.HostCluster: ""}))
@@ -343,14 +262,14 @@ func (c *clusterController) reconcileHostCluster() error {
if len(clusters) == 0 {
hostCluster.Spec.Connection.KubeConfig = hostKubeConfig
hostCluster.Name = c.hostClusterName
_, err = c.ksClient.ClusterV1alpha1().Clusters().Create(context.TODO(), hostCluster, metav1.CreateOptions{})
return err
} else if len(clusters) > 1 {
return fmt.Errorf("there MUST not be more than one host clusters, while there are %d", len(clusters))
}
// only deal with cluster managed by kubesphere
cluster := clusters[0].DeepCopy()
managedByKubesphere, ok := cluster.Labels[kubesphereManaged]
if !ok || managedByKubesphere != "true" {
return nil
@@ -367,84 +286,19 @@ func (c *clusterController) reconcileHostCluster() error {
}
// update host cluster config
_, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
return err
}
func (c *clusterController) resyncClusters() error {
clusters, err := c.clusterLister.List(labels.Everything())
if err != nil {
return err
}
for _, cluster := range clusters {
key, _ := cache.MetaNamespaceKeyFunc(cluster)
c.queue.Add(key)
}
return nil
@@ -465,7 +319,6 @@ func (c *clusterController) syncCluster(key string) error {
}()
cluster, err := c.clusterLister.Get(name)
if err != nil {
// cluster not found, possibly been deleted
// need to do the cleanup
@@ -483,7 +336,7 @@ func (c *clusterController) syncCluster(key string) error {
// registering our finalizer.
if !sets.NewString(cluster.ObjectMeta.Finalizers...).Has(clusterv1alpha1.Finalizer) {
cluster.ObjectMeta.Finalizers = append(cluster.ObjectMeta.Finalizers, clusterv1alpha1.Finalizer)
if cluster, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{}); err != nil {
return err
}
}
@@ -493,17 +346,21 @@ func (c *clusterController) syncCluster(key string) error {
// need to unJoin federation first, before there are
// some cleanup work to do in member cluster which depends
// agent to proxy traffic
if err = c.unJoinFederation(nil, name); err != nil {
klog.Errorf("Failed to unjoin federation for cluster %s, error %v", name, err)
return err
}
// cleanup after cluster has been deleted
if err := c.syncClusterMembers(nil, cluster); err != nil {
klog.Errorf("Failed to sync cluster members for %s: %v", name, err)
return err
}
// remove our cluster finalizer
finalizers := sets.NewString(cluster.ObjectMeta.Finalizers...)
finalizers.Delete(clusterv1alpha1.Finalizer)
cluster.ObjectMeta.Finalizers = finalizers.List()
if _, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{}); err != nil {
return err
}
}
@@ -525,28 +382,30 @@ func (c *clusterController) syncCluster(key string) error {
return nil
}
clusterConfig, err := clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Connection.KubeConfig)
if err != nil {
return fmt.Errorf("failed to create cluster config for %s: %s", cluster.Name, err)
}
clusterClient, err := kubernetes.NewForConfig(clusterConfig)
if err != nil {
return fmt.Errorf("failed to create cluster client for %s: %s", cluster.Name, err)
}
proxyTransport, err := rest.TransportFor(clusterConfig)
if err != nil {
return fmt.Errorf("failed to create proxy transport for %s: %s", cluster.Name, err)
}
if !cluster.Spec.JoinFederation { // trying to unJoin federation
err = c.unJoinFederation(clusterConfig, cluster.Name)
if err != nil {
klog.Errorf("Failed to unJoin federation for cluster %s, error %v", cluster.Name, err)
c.eventRecorder.Event(cluster, v1.EventTypeWarning, "UnJoinFederation", err.Error())
return err
}
} else { // join federation
_, err = c.joinFederation(clusterConfig, cluster.Name, cluster.Labels)
if err != nil {
klog.Errorf("Failed to join federation for cluster %s, error %v", cluster.Name, err)
@@ -559,8 +418,17 @@ func (c *clusterController) syncCluster(key string) error {
Message: "Cluster can not join federation control plane",
}
c.updateClusterCondition(cluster, federationNotReadyCondition)
notReadyCondition := clusterv1alpha1.ClusterCondition{
Type: clusterv1alpha1.ClusterReady,
Status: v1.ConditionFalse,
LastUpdateTime: metav1.Now(),
LastTransitionTime: metav1.Now(),
Reason: "Cluster join federation control plane failed",
Message: "Cluster is Not Ready now",
}
c.updateClusterCondition(cluster, notReadyCondition)
_, err = c.clusterClient.Update(context.TODO(), cluster, metav1.UpdateOptions{})
_, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
if err != nil {
klog.Errorf("Failed to update cluster status, %#v", err)
}
@@ -586,29 +454,33 @@ func (c *clusterController) syncCluster(key string) error {
// since no agent is necessary for the host cluster, updates
// to the host cluster are safe.
if len(cluster.Spec.Connection.KubernetesAPIEndpoint) == 0 {
cluster.Spec.Connection.KubernetesAPIEndpoint = clusterDt.config.Host
cluster.Spec.Connection.KubernetesAPIEndpoint = clusterConfig.Host
}
version, err := clusterDt.client.Discovery().ServerVersion()
serverVersion, err := clusterClient.Discovery().ServerVersion()
if err != nil {
klog.Errorf("Failed to get kubernetes version, %#v", err)
return err
}
cluster.Status.KubernetesVersion = version.GitVersion
cluster.Status.KubernetesVersion = serverVersion.GitVersion
nodes, err := clusterDt.client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
nodes, err := clusterClient.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
klog.Errorf("Failed to get cluster nodes, %#v", err)
return err
}
cluster.Status.NodeCount = len(nodes.Items)
configz, err := c.tryToFetchKubeSphereComponents(clusterDt.config.Host, clusterDt.transport)
if err == nil {
// TODO use rest.Interface instead
configz, err := c.tryToFetchKubeSphereComponents(clusterConfig.Host, proxyTransport)
if err != nil {
klog.Warningf("failed to fetch kubesphere components status in cluster %s: %s", cluster.Name, err)
} else {
cluster.Status.Configz = configz
}
v, err := c.tryFetchKubeSphereVersion(clusterDt.config.Host, clusterDt.transport)
// TODO use rest.Interface instead
v, err := c.tryFetchKubeSphereVersion(clusterConfig.Host, proxyTransport)
if err != nil {
klog.Errorf("failed to get KubeSphere version, err: %#v", err)
} else {
@@ -616,7 +488,7 @@ func (c *clusterController) syncCluster(key string) error {
}
// Use kube-system namespace UID as cluster ID
kubeSystem, err := clusterDt.client.CoreV1().Namespaces().Get(context.TODO(), metav1.NamespaceSystem, metav1.GetOptions{})
kubeSystem, err := clusterClient.CoreV1().Namespaces().Get(context.TODO(), metav1.NamespaceSystem, metav1.GetOptions{})
if err != nil {
return err
}
@@ -630,7 +502,7 @@ func (c *clusterController) syncCluster(key string) error {
cluster.Labels[clusterv1alpha1.HostCluster] = ""
}
readyConditon := clusterv1alpha1.ClusterCondition{
readyCondition := clusterv1alpha1.ClusterCondition{
Type: clusterv1alpha1.ClusterReady,
Status: v1.ConditionTrue,
LastUpdateTime: metav1.Now(),
@@ -638,25 +510,29 @@ func (c *clusterController) syncCluster(key string) error {
Reason: string(clusterv1alpha1.ClusterReady),
Message: "Cluster is available now",
}
c.updateClusterCondition(cluster, readyConditon)
c.updateClusterCondition(cluster, readyCondition)
if err = c.updateKubeConfigExpirationDateCondition(cluster); err != nil {
klog.Errorf("sync KubeConfig expiration date for cluster %s failed: %v", cluster.Name, err)
return err
}
if !reflect.DeepEqual(oldCluster, cluster) {
_, err = c.clusterClient.Update(context.TODO(), cluster, metav1.UpdateOptions{})
if !reflect.DeepEqual(oldCluster.Status, cluster.Status) {
_, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
if err != nil {
klog.Errorf("Failed to update cluster status, %#v", err)
return err
}
}
if err = c.setClusterNameInConfigMap(clusterDt.client, cluster.Name); err != nil {
if err = c.setClusterNameInConfigMap(clusterClient, cluster.Name); err != nil {
return err
}
if err = c.syncClusterMembers(clusterClient, cluster); err != nil {
return fmt.Errorf("failed to sync cluster membership for %s: %s", cluster.Name, err)
}
return nil
}
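Note that the update guard above was narrowed from comparing whole objects to comparing only their Status, so the controller writes back only when the status actually changed. A minimal sketch of the compare-before-update pattern, assuming the imports already present in this file and a hypothetical fillStatus helper:
// updateIfStatusChanged is a sketch, not the controller's actual method.
func updateIfStatusChanged(ctx context.Context, ksClient versioned.Interface, cluster *clusterv1alpha1.Cluster) error {
	oldCluster := cluster.DeepCopy()
	fillStatus(cluster) // hypothetical: recompute version, node count, conditions
	if reflect.DeepEqual(oldCluster.Status, cluster.Status) {
		return nil // nothing changed, skip the API round trip
	}
	_, err := ksClient.ClusterV1alpha1().Clusters().Update(ctx, cluster, metav1.UpdateOptions{})
	return err
}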
@@ -689,29 +565,8 @@ func (c *clusterController) setClusterNameInConfigMap(client kubernetes.Interfac
return nil
}
func (c *clusterController) syncClusterNameInConfigMap() {
clusters, err := c.clusterLister.List(labels.Everything())
if err != nil {
klog.Errorf("list clusters failed: %v", err)
return
}
for _, cluster := range clusters {
clusterDt, ok := c.clusterMap[cluster.Name]
if !ok {
continue
}
if err = retry.RetryOnConflict(retry.DefaultRetry, func() (err error) {
return c.setClusterNameInConfigMap(clusterDt.client, cluster.Name)
}); err != nil {
klog.Errorf("update configmap %s failed: %v", constants.KubeSphereConfigName, err)
continue
}
}
}
func (c *clusterController) checkIfClusterIsHostCluster(memberClusterNodes *v1.NodeList) bool {
hostNodes, err := c.client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
hostNodes, err := c.k8sClient.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
return false
}
@@ -759,7 +614,6 @@ func (c *clusterController) tryToFetchKubeSphereComponents(host string, transpor
return configz, nil
}
//
func (c *clusterController) tryFetchKubeSphereVersion(host string, transport http.RoundTripper) (string, error) {
client := http.Client{
Transport: transport,
@@ -797,7 +651,7 @@ func (c *clusterController) tryFetchKubeSphereVersion(host string, transport htt
return info.GitVersion, nil
}
func (c *clusterController) addCluster(obj interface{}) {
func (c *clusterController) enqueueCluster(obj interface{}) {
cluster := obj.(*clusterv1alpha1.Cluster)
key, err := cache.MetaNamespaceKeyFunc(obj)
@@ -957,3 +811,55 @@ func (c *clusterController) updateKubeConfigExpirationDateCondition(cluster *clu
})
return nil
}
// syncClusterMembers syncs the granted clusters for each user periodically
func (c *clusterController) syncClusterMembers(clusterClient *kubernetes.Clientset, cluster *clusterv1alpha1.Cluster) error {
users, err := c.userLister.List(labels.Everything())
if err != nil {
return fmt.Errorf("failed to list users: %s", err)
}
grantedUsers := sets.NewString()
clusterName := cluster.Name
if cluster.DeletionTimestamp.IsZero() {
list, err := clusterClient.RbacV1().ClusterRoleBindings().List(context.Background(),
metav1.ListOptions{LabelSelector: iamv1alpha2.UserReferenceLabel})
if err != nil {
return fmt.Errorf("failed to list clusterrolebindings: %s", err)
}
for _, clusterRoleBinding := range list.Items {
for _, sub := range clusterRoleBinding.Subjects {
if sub.Kind == iamv1alpha2.ResourceKindUser {
grantedUsers.Insert(sub.Name)
}
}
}
}
for _, user := range users {
user = user.DeepCopy()
grantedClustersAnnotation := user.Annotations[iamv1alpha2.GrantedClustersAnnotation]
var grantedClusters sets.String
if len(grantedClustersAnnotation) > 0 {
grantedClusters = sets.NewString(strings.Split(grantedClustersAnnotation, ",")...)
} else {
grantedClusters = sets.NewString()
}
if grantedUsers.Has(user.Name) && !grantedClusters.Has(clusterName) {
grantedClusters.Insert(clusterName)
} else if !grantedUsers.Has(user.Name) && grantedClusters.Has(clusterName) {
grantedClusters.Delete(clusterName)
}
grantedClustersAnnotation = strings.Join(grantedClusters.List(), ",")
if user.Annotations[iamv1alpha2.GrantedClustersAnnotation] != grantedClustersAnnotation {
if user.Annotations == nil {
user.Annotations = make(map[string]string, 0)
}
user.Annotations[iamv1alpha2.GrantedClustersAnnotation] = grantedClustersAnnotation
if _, err := c.ksClient.IamV1alpha2().Users().Update(context.Background(), user, metav1.UpdateOptions{}); err != nil {
return fmt.Errorf("failed to update user %s: %s", user.Name, err)
}
}
}
return nil
}
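The GrantedClustersAnnotation value is a comma-separated list of cluster names, so every membership change is a parse, modify, and re-serialize round trip through a string set. A minimal standalone sketch of that round trip (the function name is illustrative):
package sketch

import (
	"strings"

	"k8s.io/apimachinery/pkg/util/sets"
)

// toggleGrantedCluster inserts or deletes clusterName in a comma-separated
// annotation value and returns the re-serialized value.
func toggleGrantedCluster(annotation, clusterName string, granted bool) string {
	clusters := sets.NewString()
	if len(annotation) > 0 {
		clusters = sets.NewString(strings.Split(annotation, ",")...)
	}
	if granted {
		clusters.Insert(clusterName)
	} else {
		clusters.Delete(clusterName)
	}
	return strings.Join(clusters.List(), ",") // List() is sorted and deduplicated
}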

View File

@@ -18,6 +18,7 @@ package cluster
import (
"context"
"fmt"
"reflect"
"time"
@@ -268,7 +269,7 @@ func createAuthorizedServiceAccount(joiningClusterClientset kubeclient.Interface
klog.V(2).Infof("Creating service account in joining cluster: %s", joiningClusterName)
saName, err := createServiceAccount(joiningClusterClientset, namespace,
saName, err := createServiceAccountWithSecret(joiningClusterClientset, namespace,
joiningClusterName, hostClusterName, dryRun, errorOnExisting)
if err != nil {
klog.V(2).Infof("Error creating service account: %s in joining cluster: %s due to: %v",
@@ -320,31 +321,75 @@ func createAuthorizedServiceAccount(joiningClusterClientset kubeclient.Interface
return saName, nil
}
// createServiceAccount creates a service account in the cluster associated
// createServiceAccountWithSecret creates a service account and secret in the cluster associated
// with clusterClientset with credentials that will be used by the host cluster
// to access its API server.
func createServiceAccount(clusterClientset kubeclient.Interface, namespace,
func createServiceAccountWithSecret(clusterClientset kubeclient.Interface, namespace,
joiningClusterName, hostClusterName string, dryRun, errorOnExisting bool) (string, error) {
saName := util.ClusterServiceAccountName(joiningClusterName, hostClusterName)
sa := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: saName,
Namespace: namespace,
},
}
if dryRun {
return saName, nil
}
// Create a new service account.
_, err := clusterClientset.CoreV1().ServiceAccounts(namespace).Create(context.Background(), sa, metav1.CreateOptions{})
switch {
case apierrors.IsAlreadyExists(err) && errorOnExisting:
klog.V(2).Infof("Service account %s/%s already exists in target cluster %s", namespace, saName, joiningClusterName)
ctx := context.Background()
sa, err := clusterClientset.CoreV1().ServiceAccounts(namespace).Get(ctx, saName, metav1.GetOptions{})
if err != nil {
if apierrors.IsNotFound(err) {
sa = &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: saName,
Namespace: namespace,
},
}
// We must create the ServiceAccount first, then create the associated secret, and update the
// ServiceAccount last; otherwise the kube-controller-manager will delete the secret.
sa, err = clusterClientset.CoreV1().ServiceAccounts(namespace).Create(ctx, sa, metav1.CreateOptions{})
switch {
case apierrors.IsAlreadyExists(err) && errorOnExisting:
klog.V(2).Infof("Service account %s/%s already exists in target cluster %s", namespace, saName, joiningClusterName)
return "", err
case err != nil && !apierrors.IsAlreadyExists(err):
klog.V(2).Infof("Could not create service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
return "", err
}
} else {
return "", err
}
}
if len(sa.Secrets) > 0 {
return saName, nil
}
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
GenerateName: fmt.Sprintf("%s-token-", saName),
Namespace: namespace,
Annotations: map[string]string{
corev1.ServiceAccountNameKey: saName,
},
},
Type: corev1.SecretTypeServiceAccountToken,
}
// After kubernetes v1.24, kube-controller-manager no longer creates a default secret for
// each service account. http://kep.k8s.io/2800
// Create a default secret.
secret, err = clusterClientset.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
klog.V(2).Infof("Could not create secret for service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
return "", err
case err != nil && !apierrors.IsAlreadyExists(err):
klog.V(2).Infof("Could not create service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
}
// At last, update the service account.
sa.Secrets = append(sa.Secrets, corev1.ObjectReference{Name: secret.Name})
_, err = clusterClientset.CoreV1().ServiceAccounts(namespace).Update(ctx, sa, metav1.UpdateOptions{})
switch {
case err != nil:
klog.Infof("Could not update service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
return "", err
default:
return saName, nil
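Because kube-controller-manager stopped auto-creating token secrets in v1.24, the ordering above matters: the ServiceAccount first, the annotated token secret second, and the secret reference on the ServiceAccount last. A condensed sketch of the same three steps, assuming the imports already present in this file:
// ensureTokenSecret is a sketch of the create-SA, create-secret, update-SA flow.
func ensureTokenSecret(ctx context.Context, client kubeclient.Interface, ns, saName string) error {
	sa, err := client.CoreV1().ServiceAccounts(ns).Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: saName, Namespace: ns},
	}, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	secret, err := client.CoreV1().Secrets(ns).Create(ctx, &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: saName + "-token-",
			Namespace:    ns,
			// ties the token secret back to the service account
			Annotations: map[string]string{corev1.ServiceAccountNameKey: saName},
		},
		Type: corev1.SecretTypeServiceAccountToken,
	}, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	sa.Secrets = append(sa.Secrets, corev1.ObjectReference{Name: secret.Name})
	_, err = client.CoreV1().ServiceAccounts(ns).Update(ctx, sa, metav1.UpdateOptions{})
	return err
}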

View File

@@ -18,84 +18,14 @@ package cluster
import (
"fmt"
"strings"
"github.com/pkg/errors"
"k8s.io/apimachinery/pkg/api/meta"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
pkgruntime "k8s.io/apimachinery/pkg/runtime"
kubeclient "k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
)
// Default values for the federated group and version used by
// the enable and disable subcommands of `kubefedctl`.
const (
DefaultFederatedGroup = "types.kubefed.io"
DefaultFederatedVersion = "v1beta1"
FederatedKindPrefix = "Federated"
)
// FedConfig provides a rest config based on the filesystem kubeconfig (via
// pathOptions) and context in order to talk to the host kubernetes cluster
// and the joining kubernetes cluster.
type FedConfig interface {
HostConfig(context, kubeconfigPath string) (*rest.Config, error)
ClusterConfig(context, kubeconfigPath string) (*rest.Config, error)
GetClientConfig(context, kubeconfigPath string) clientcmd.ClientConfig
}
// fedConfig implements the FedConfig interface.
type fedConfig struct {
pathOptions *clientcmd.PathOptions
}
// NewFedConfig creates a fedConfig for `kubefedctl` commands.
func NewFedConfig(pathOptions *clientcmd.PathOptions) FedConfig {
return &fedConfig{
pathOptions: pathOptions,
}
}
// HostConfig provides a rest config to talk to the host kubernetes cluster
// based on the context and kubeconfig passed in.
func (a *fedConfig) HostConfig(context, kubeconfigPath string) (*rest.Config, error) {
hostConfig := a.GetClientConfig(context, kubeconfigPath)
hostClientConfig, err := hostConfig.ClientConfig()
if err != nil {
return nil, err
}
return hostClientConfig, nil
}
// ClusterConfig provides a rest config to talk to the joining kubernetes
// cluster based on the context and kubeconfig passed in.
func (a *fedConfig) ClusterConfig(context, kubeconfigPath string) (*rest.Config, error) {
clusterConfig := a.GetClientConfig(context, kubeconfigPath)
clusterClientConfig, err := clusterConfig.ClientConfig()
if err != nil {
return nil, err
}
return clusterClientConfig, nil
}
// GetClientConfig is a helper method to create a client config from the
// context and kubeconfig passed as arguments.
func (a *fedConfig) GetClientConfig(context, kubeconfigPath string) clientcmd.ClientConfig {
loadingRules := *a.pathOptions.LoadingRules
loadingRules.Precedence = a.pathOptions.GetLoadingPrecedence()
loadingRules.ExplicitPath = kubeconfigPath
overrides := &clientcmd.ConfigOverrides{
CurrentContext: context,
}
return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(&loadingRules, overrides)
}
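The removed helpers wrapped the standard clientcmd deferred-loading flow. For reference, a minimal standalone sketch that builds a rest.Config for a named context from an explicit kubeconfig path (the path and context names are placeholders):
package sketch

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func restConfigForContext(kubeconfigPath, contextName string) (*rest.Config, error) {
	loadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
	loadingRules.ExplicitPath = kubeconfigPath // e.g. "/root/.kube/config"
	overrides := &clientcmd.ConfigOverrides{CurrentContext: contextName}
	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, overrides).ClientConfig()
}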
// HostClientset provides a kubernetes API compliant clientset to
// communicate with the host cluster's kubernetes API server.
func HostClientset(config *rest.Config) (*kubeclient.Clientset, error) {
@@ -114,54 +44,6 @@ func ClusterServiceAccountName(joiningClusterName, hostClusterName string) strin
return fmt.Sprintf("%s-%s", joiningClusterName, hostClusterName)
}
// RoleName returns the name of a Role or ClusterRole and its
// associated RoleBinding or ClusterRoleBinding that are used to allow
// the service account to access necessary resources on the cluster.
func RoleName(serviceAccountName string) string {
return fmt.Sprintf("kubefed-controller-manager:%s", serviceAccountName)
}
// HealthCheckRoleName returns the name of a ClusterRole and its
// associated ClusterRoleBinding that is used to allow the service
// account to check the health of the cluster and list nodes.
func HealthCheckRoleName(serviceAccountName, namespace string) string {
return fmt.Sprintf("kubefed-controller-manager:%s:healthcheck-%s", namespace, serviceAccountName)
}
// IsFederatedAPIResource checks if a resource with the given Kind and group is a Federated one
func IsFederatedAPIResource(kind, group string) bool {
return strings.HasPrefix(kind, FederatedKindPrefix) && group == DefaultFederatedGroup
}
// GetNamespace returns the namespace of the current context
func GetNamespace(hostClusterContext string, kubeconfig string, config FedConfig) (string, error) {
clientConfig := config.GetClientConfig(hostClusterContext, kubeconfig)
currentContext, err := CurrentContext(clientConfig)
if err != nil {
return "", err
}
ns, _, err := clientConfig.Namespace()
if err != nil {
return "", errors.Wrapf(err, "Failed to get ClientConfig for host cluster context %q and kubeconfig %q",
currentContext, kubeconfig)
}
if len(ns) == 0 {
ns = "default"
}
return ns, nil
}
// CurrentContext retrieves the current context from the provided config.
func CurrentContext(config clientcmd.ClientConfig) (string, error) {
rawConfig, err := config.RawConfig()
if err != nil {
return "", errors.Wrap(err, "Failed to get current context from config")
}
return rawConfig.CurrentContext, nil
}
// IsPrimaryCluster checks if the caller is working with objects for the
// primary cluster by checking if the UIDs match for both ObjectMetas passed
// in.

View File

@@ -73,7 +73,7 @@ func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
if err := r.SetupWithManager(mgr); err != nil {
return err
}
klog.Info("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod)
klog.Infoln("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod)
}
return nil
}

View File

@@ -1,3 +1,17 @@
// Copyright 2022 The KubeSphere Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
package webhooks
import (

View File

@@ -530,7 +530,9 @@ func (r *Reconciler) syncUserStatus(ctx context.Context, user *iamv1alpha2.User)
now := time.Now()
failedLoginAttempts := 0
for _, loginRecord := range records.Items {
afterStateTransition := user.Status.LastTransitionTime == nil || loginRecord.CreationTimestamp.After(user.Status.LastTransitionTime.Time)
if !loginRecord.Spec.Success &&
afterStateTransition &&
loginRecord.CreationTimestamp.Add(r.AuthenticationOptions.AuthenticateRateLimiterDuration).After(now) {
failedLoginAttempts++
}
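The counting rule above has three parts: the attempt failed, it happened after the user's last state transition, and it still falls inside the rate-limiter window. The same filter extracted as a minimal predicate sketch (types follow the code above):
// countsAsFailedAttempt is a sketch of the filter used in the loop above.
func countsAsFailedAttempt(record iamv1alpha2.LoginRecord, lastTransition *metav1.Time, window time.Duration, now time.Time) bool {
	afterStateTransition := lastTransition == nil || record.CreationTimestamp.After(lastTransition.Time)
	withinWindow := record.CreationTimestamp.Add(window).After(now)
	return !record.Spec.Success && afterStateTransition && withinWindow
}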

View File

@@ -68,9 +68,11 @@ func TestDoNothing(t *testing.T) {
for i := 0; i < authenticateOptions.AuthenticateRateLimiterMaxTries+1; i++ {
loginRecord := iamv1alpha2.LoginRecord{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("%s-%d", user.Name, i),
Labels: map[string]string{iamv1alpha2.UserReferenceLabel: user.Name},
CreationTimestamp: metav1.Now(),
Name: fmt.Sprintf("%s-%d", user.Name, i),
Labels: map[string]string{iamv1alpha2.UserReferenceLabel: user.Name},
// Ensure that the failed login record is created after the user's status changes to active;
// otherwise the failed login attempts will not be counted.
CreationTimestamp: metav1.NewTime(time.Now().Add(time.Minute)),
},
Spec: iamv1alpha2.LoginRecordSpec{
Success: false,

View File

@@ -537,7 +537,13 @@ func (v *VirtualServiceController) generateVirtualServiceSpec(strategy *servicem
}
if len(strategyTempSpec.Tcp) > 0 && !servicemesh.SupportHttpProtocol(port.Name) {
for _, tcp := range strategyTempSpec.Tcp {
tcp.Match = []*apinetworkingv1alpha3.L4MatchAttributes{{Port: uint32(port.Port)}}
if len(tcp.Match) == 0 {
tcp.Match = []*apinetworkingv1alpha3.L4MatchAttributes{{Port: uint32(port.Port)}}
} else {
for _, match := range tcp.Match {
match.Port = uint32(port.Port)
}
}
for _, r := range tcp.Route {
r.Destination.Port = &apinetworkingv1alpha3.PortSelector{Number: uint32(port.Port)}
}

View File

@@ -17,6 +17,7 @@ limitations under the License.
package informers
import (
"reflect"
"time"
snapshotclient "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned"
@@ -51,6 +52,11 @@ type InformerFactory interface {
Start(stopCh <-chan struct{})
}
type GenericInformerFactory interface {
Start(stopCh <-chan struct{})
WaitForCacheSync(stopCh <-chan struct{}) map[reflect.Type]bool
}
type informerFactories struct {
informerFactory k8sinformers.SharedInformerFactory
ksInformerFactory ksinformers.SharedInformerFactory

View File

@@ -20,6 +20,8 @@ import (
"github.com/emicklei/go-restful"
"k8s.io/apimachinery/pkg/runtime/schema"
"kubesphere.io/kubesphere/pkg/simple/client/gpu"
kubesphereconfig "kubesphere.io/kubesphere/pkg/apiserver/config"
"kubesphere.io/kubesphere/pkg/apiserver/runtime"
)
@@ -48,7 +50,11 @@ func AddToContainer(c *restful.Container, config *kubesphereconfig.Config) error
webservice.Route(webservice.GET("/configs/gpu/kinds").
Doc("Get all supported GPU kinds.").
To(func(request *restful.Request, response *restful.Response) {
response.WriteAsJson(config.GPUOptions.Kinds)
var kinds []gpu.GPUKind
if config.GPUOptions != nil {
kinds = config.GPUOptions.Kinds
}
response.WriteAsJson(kinds)
}))
c.Add(webservice)

View File

@@ -17,7 +17,6 @@ limitations under the License.
package v1alpha1
import (
"context"
"fmt"
"time"
@@ -177,7 +176,7 @@ func (h *handler) PodLog(request *restful.Request, response *restful.Response) {
}
fw := flushwriter.Wrap(response.ResponseWriter)
err := h.gw.GetPodLogs(context.TODO(), podNamespace, podID, logOptions, fw)
err := h.gw.GetPodLogs(request.Request.Context(), podNamespace, podID, logOptions, fw)
if err != nil {
api.HandleError(response, request, err)
return
@@ -196,7 +195,7 @@ func (h *handler) PodLogSearch(request *restful.Request, response *restful.Respo
api.HandleError(response, request, err)
return
}
// ES log will be filted by pods and namespace by default.
// ES log will be filtered by pods and namespace by default.
pods, err := h.gw.GetPods(ns, &query.Query{})
if err != nil {
api.HandleError(response, request, err)

View File

@@ -380,6 +380,7 @@ func (h *iamHandler) ListWorkspaceRoles(request *restful.Request, response *rest
queryParam.Filters[iamv1alpha2.ScopeWorkspace] = query.Value(workspace)
// shared workspace role template
if string(queryParam.Filters[query.FieldLabel]) == fmt.Sprintf("%s=%s", iamv1alpha2.RoleTemplateLabel, "true") ||
strings.Contains(queryParam.LabelSelector, iamv1alpha2.RoleTemplateLabel) ||
queryParam.Filters[iamv1alpha2.AggregateTo] != "" {
delete(queryParam.Filters, iamv1alpha2.ScopeWorkspace)
}

View File

@@ -21,12 +21,10 @@ package v1alpha1
import (
"github.com/emicklei/go-restful"
"k8s.io/client-go/kubernetes"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"kubesphere.io/kubesphere/pkg/informers"
monitorhle "kubesphere.io/kubesphere/pkg/kapis/monitoring/v1alpha3"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
@@ -47,6 +45,6 @@ type meterHandler interface {
HandlePVCMeterQuery(req *restful.Request, resp *restful.Response)
}
func newHandler(k kubernetes.Interface, m monitoring.Interface, f informers.InformerFactory, ksClient versioned.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) meterHandler {
return monitorhle.NewHandler(k, m, nil, f, ksClient, resourceGetter, meteringOptions, opOptions, rtClient, stopCh)
func newHandler(k kubernetes.Interface, m monitoring.Interface, f informers.InformerFactory, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opClient openpitrix.Interface, rtClient runtimeclient.Client) meterHandler {
return monitorhle.NewHandler(k, m, nil, f, resourceGetter, meteringOptions, opClient, rtClient)
}

View File

@@ -20,9 +20,7 @@ package v1alpha1
import (
"net/http"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"github.com/emicklei/go-restful"
restfulspec "github.com/emicklei/go-restful-openapi"
@@ -49,10 +47,10 @@ const (
var GroupVersion = schema.GroupVersion{Group: groupName, Version: "v1alpha1"}
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, meteringClient monitoring.Interface, factory informers.InformerFactory, ksClient versioned.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) error {
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, meteringClient monitoring.Interface, factory informers.InformerFactory, cache cache.Cache, meteringOptions *meteringclient.Options, opClient openpitrix.Interface, rtClient runtimeclient.Client) error {
ws := runtime.NewWebService(GroupVersion)
h := newHandler(k8sClient, meteringClient, factory, ksClient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opOptions, rtClient, stopCh)
h := newHandler(k8sClient, meteringClient, factory, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient, rtClient)
ws.Route(ws.GET("/cluster").
To(h.HandleClusterMeterQuery).

View File

@@ -27,14 +27,8 @@ import (
"regexp"
"strings"
"k8s.io/klog"
converter "kubesphere.io/monitoring-dashboard/tools/converter"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
"kubesphere.io/kubesphere/pkg/simple/client/s3"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"github.com/emicklei/go-restful"
@@ -60,27 +54,16 @@ type handler struct {
rtClient runtimeclient.Client
}
func NewHandler(k kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, f informers.InformerFactory, ksClient versioned.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) *handler {
var opRelease openpitrix.Interface
var s3Client s3.Interface
if opOptions != nil && opOptions.S3Options != nil && len(opOptions.S3Options.Endpoint) != 0 {
var err error
s3Client, err = s3.NewS3Client(opOptions.S3Options)
if err != nil {
klog.Errorf("failed to connect to storage, please check storage service status, error: %v", err)
}
}
if ksClient != nil {
opRelease = openpitrix.NewOpenpitrixOperator(f, ksClient, s3Client, stopCh)
}
func NewHandler(k kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, f informers.InformerFactory, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opClient openpitrix.Interface, rtClient runtimeclient.Client) *handler {
if meteringOptions == nil || meteringOptions.RetentionDay == "" {
meteringOptions = &meteringclient.DefaultMeteringOption
}
return &handler{
k: k,
mo: model.NewMonitoringOperator(monitoringClient, metricsClient, k, f, resourceGetter, opRelease),
opRelease: opRelease,
mo: model.NewMonitoringOperator(monitoringClient, metricsClient, k, f, resourceGetter, opClient),
opRelease: opClient,
meteringOptions: meteringOptions,
rtClient: rtClient,
}

View File

@@ -373,7 +373,7 @@ func TestParseRequestParams(t *testing.T) {
fakeInformerFactory.KubeSphereSharedInformerFactory()
handler := NewHandler(client, nil, nil, fakeInformerFactory, ksClient, nil, nil, nil, nil, nil)
handler := NewHandler(client, nil, nil, fakeInformerFactory, nil, nil, nil, nil)
result, err := handler.makeQueryOptions(tt.params, tt.lvl)
if err != nil {

View File

@@ -20,12 +20,10 @@ package v1alpha3
import (
"net/http"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
monitoringdashboardv1alpha2 "kubesphere.io/monitoring-dashboard/api/v1alpha2"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"github.com/emicklei/go-restful"
restfulspec "github.com/emicklei/go-restful-openapi"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -47,10 +45,10 @@ const (
var GroupVersion = schema.GroupVersion{Group: groupName, Version: "v1alpha3"}
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, factory informers.InformerFactory, ksClient versioned.Interface, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) error {
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, factory informers.InformerFactory, opClient openpitrix.Interface, rtClient runtimeclient.Client) error {
ws := runtime.NewWebService(GroupVersion)
h := NewHandler(k8sClient, monitoringClient, metricsClient, factory, ksClient, nil, nil, opOptions, rtClient, stopCh)
h := NewHandler(k8sClient, monitoringClient, metricsClient, factory, nil, nil, opClient, rtClient)
ws.Route(ws.GET("/kubesphere").
To(h.handleKubeSphereMetricsQuery).

View File

@@ -437,6 +437,9 @@ func (h *handler) passwordGrant(username string, password string, req *restful.R
authenticated, provider, err := h.passwordAuthenticator.Authenticate(req.Request.Context(), username, password)
if err != nil {
switch err {
case auth.AccountIsNotActiveError:
response.WriteHeaderAndEntity(http.StatusBadRequest, oauth.NewInvalidGrant(err))
return
case auth.IncorrectPasswordError:
requestInfo, _ := request.RequestInfoFrom(req.Request.Context())
if err := h.loginRecorder.RecordLogin(username, iamv1alpha2.Token, provider, requestInfo.SourceIP, requestInfo.UserAgent, err); err != nil {

View File

@@ -23,6 +23,8 @@ import (
"strings"
"time"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
restful "github.com/emicklei/go-restful"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@@ -50,7 +52,7 @@ type openpitrixHandler struct {
openpitrix openpitrix.Interface
}
func newOpenpitrixHandler(ksInformers informers.InformerFactory, ksClient versioned.Interface, option *openpitrixoptions.Options, stopCh <-chan struct{}) *openpitrixHandler {
func NewOpenpitrixClient(ksInformers informers.InformerFactory, ksClient versioned.Interface, option *openpitrixoptions.Options, cc clusterclient.ClusterClients) openpitrix.Interface {
var s3Client s3.Interface
if option != nil && option.S3Options != nil && len(option.S3Options.Endpoint) != 0 {
var err error
@@ -60,9 +62,7 @@ func newOpenpitrixHandler(ksInformers informers.InformerFactory, ksClient versio
}
}
return &openpitrixHandler{
openpitrix.NewOpenpitrixOperator(ksInformers, ksClient, s3Client, stopCh),
}
return openpitrix.NewOpenpitrixOperator(ksInformers, ksClient, s3Client, cc)
}
func (h *openpitrixHandler) CreateRepo(req *restful.Request, resp *restful.Response) {
@@ -753,7 +753,7 @@ func (h *openpitrixHandler) ListApplications(req *restful.Request, resp *restful
return
}
resp.WriteAsJson(result)
resp.WriteEntity(result)
}
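Replacing WriteAsJson with WriteEntity hands serialization over to go-restful's content negotiation instead of hard-coding JSON. A minimal sketch of a handler relying on the negotiated write (the payload shape is illustrative):
func listSomething(req *restful.Request, resp *restful.Response) {
	result := map[string]interface{}{"items": []string{}, "total_count": 0}
	// WriteEntity picks the representation from the Accept header and the
	// MIME types registered on the WebService; WriteAsJson always emits JSON.
	_ = resp.WriteEntity(result)
}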
func (h *openpitrixHandler) UpgradeApplication(req *restful.Request, resp *restful.Response) {

View File

@@ -38,11 +38,13 @@ const (
var GroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1"}
func AddToContainer(c *restful.Container, ksInfomrers informers.InformerFactory, ksClient versioned.Interface, options *openpitrixoptions.Options, stopCh <-chan struct{}) error {
func AddToContainer(c *restful.Container, ksInfomrers informers.InformerFactory, ksClient versioned.Interface, options *openpitrixoptions.Options, opClient openpitrix.Interface) error {
mimePatch := []string{restful.MIME_JSON, runtime.MimeJsonPatchJson, runtime.MimeMergePatchJson}
webservice := runtime.NewWebService(GroupVersion)
handler := newOpenpitrixHandler(ksInfomrers, ksClient, options, stopCh)
handler := &openpitrixHandler{
opClient,
}
webservice.Route(webservice.POST("/repos").
To(handler.CreateRepo).

View File

@@ -17,7 +17,16 @@ limitations under the License.
package v1alpha3
import (
"fmt"
"io"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"unsafe"
"github.com/emicklei/go-restful"
"k8s.io/klog"
"github.com/google/go-cmp/cmp"
fakesnapshot "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned/fake"
@@ -87,13 +96,11 @@ func TestResourceV1alpha2Fallback(t *testing.T) {
},
}
factory, err := prepare()
handler, err := prepare()
if err != nil {
t.Fatal(err)
}
handler := New(resourcev1alpha3.NewResourceGetter(factory, nil), resourcev1alpha2.NewResourceGetter(factory), components.NewComponentsGetter(factory.KubernetesSharedInformerFactory()))
for _, test := range tests {
got, err := listResources(test.namespace, test.resource, test.query, handler)
@@ -175,12 +182,31 @@ var (
ReadyReplicas: 0,
},
}
apiServerService = &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "ks-apiserver",
Namespace: "istio-system",
},
Spec: corev1.ServiceSpec{
Selector: map[string]string{"app": "ks-apiserver-app"},
},
}
ksControllerService = &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "ks-controller",
Namespace: "kubesphere-system",
},
Spec: corev1.ServiceSpec{
Selector: map[string]string{"app": "ks-controller-app"},
},
}
deployments = []interface{}{redisDeployment, nginxDeployment}
namespaces = []interface{}{defaultNamespace, kubesphereNamespace}
secrets = []interface{}{secretFoo1, secretFoo2}
services = []interface{}{apiServerService, ksControllerService}
)
func prepare() (informers.InformerFactory, error) {
func prepare() (*Handler, error) {
ksClient := fakeks.NewSimpleClientset()
k8sClient := fakek8s.NewSimpleClientset()
@@ -210,6 +236,91 @@ func prepare() (informers.InformerFactory, error) {
return nil, err
}
}
for _, service := range services {
err := k8sInformerFactory.Core().V1().Services().Informer().GetIndexer().Add(service)
if err != nil {
return nil, err
}
}
return fakeInformerFactory, nil
handler := New(resourcev1alpha3.NewResourceGetter(fakeInformerFactory, nil), resourcev1alpha2.NewResourceGetter(fakeInformerFactory), components.NewComponentsGetter(fakeInformerFactory.KubernetesSharedInformerFactory()))
return handler, nil
}
func TestHandleGetComponentStatus(t *testing.T) {
param := map[string]string{
"component": "ks-controller",
}
request, response, err := buildReqAndRes("GET", "/kapis/resources.kubesphere.io/v1alpha3/components/{component}", param, nil)
if err != nil {
t.Fatal("build res or req failed ")
}
handler, err := prepare()
if err != nil {
t.Fatal("init handler failed")
}
handler.handleGetComponentStatus(request, response)
if status := response.StatusCode(); status != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v",
status, http.StatusOK)
}
}
func TestHandleGetComponents(t *testing.T) {
request, response, err := buildReqAndRes("GET", "/kapis/resources.kubesphere.io/v1alpha3/components", nil, nil)
if err != nil {
t.Fatal("build res or req failed ")
}
handler, err := prepare()
if err != nil {
t.Fatal("init handler failed")
}
handler.handleGetComponents(request, response)
if status := response.StatusCode(); status != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v",
status, http.StatusOK)
}
}
// buildReqAndRes builds a *restful.Request and *restful.Response pair for testing
func buildReqAndRes(method, target string, param map[string]string, body io.Reader) (*restful.Request, *restful.Response, error) {
//build req
request := httptest.NewRequest(method, target, body)
newRequest := restful.NewRequest(request)
if param != nil {
err := setUnExportedFields(newRequest, "pathParameters", param)
if err != nil {
klog.Error("set pathParameters failed ")
return nil, nil, err
}
}
//build res
response := httptest.NewRecorder()
newResponse := restful.NewResponse(response)
// assign the key routeProduces a value of "application/json"
err := setUnExportedFields(newResponse, "routeProduces", []string{"application/json"})
if err != nil {
klog.Error("set routeProduces failed ")
return nil, nil, err
}
return newRequest, newResponse, nil
}
// setUnExportedFields sets an unexported struct field by using reflect and unsafe
func setUnExportedFields(ptr interface{}, filedName string, newFiledValue interface{}) (err error) {
v := reflect.ValueOf(ptr).Elem().FieldByName(filedName)
v = reflect.NewAt(v.Type(), unsafe.Pointer(v.UnsafeAddr())).Elem()
nv := reflect.ValueOf(newFiledValue)
if v.Kind() != nv.Kind() {
return fmt.Errorf("kind error")
}
v.Set(nv)
return nil
}
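A short usage example for the helper above, written as it would sit in the same test file (the demo struct is hypothetical):
type demo struct {
	hidden string // unexported on purpose
}

func TestSetUnExportedFields(t *testing.T) {
	d := &demo{}
	if err := setUnExportedFields(d, "hidden", "patched"); err != nil {
		t.Fatal(err)
	}
	if d.hidden != "patched" {
		t.Errorf("got %q, want %q", d.hidden, "patched")
	}
}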

View File

@@ -48,15 +48,17 @@ func NewHandler(o *servicemesh.Options, client kubernetes.Interface, cache cache
if o != nil && o.KialiQueryHost != "" {
sa, err := client.CoreV1().ServiceAccounts(KubesphereNamespace).Get(context.TODO(), KubeSphereServiceAccount, metav1.GetOptions{})
if err == nil {
secret, err := client.CoreV1().Secrets(KubesphereNamespace).Get(context.TODO(), sa.Secrets[0].Name, metav1.GetOptions{})
if err == nil {
return &Handler{
opt: o,
client: kiali.NewDefaultClient(
cache,
string(secret.Data["token"]),
o.KialiQueryHost,
),
if len(sa.Secrets) > 0 {
secret, err := client.CoreV1().Secrets(KubesphereNamespace).Get(context.TODO(), sa.Secrets[0].Name, metav1.GetOptions{})
if err == nil {
return &Handler{
opt: o,
client: kiali.NewDefaultClient(
cache,
string(secret.Data["token"]),
o.KialiQueryHost,
),
}
}
}
klog.Warningf("get ServiceAccount's Secret failed %v", err)

View File

@@ -0,0 +1,142 @@
package v1alpha2
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"github.com/emicklei/go-restful"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
fakek8s "k8s.io/client-go/kubernetes/fake"
"k8s.io/klog"
"kubesphere.io/kubesphere/pkg/simple/client/kiali"
"kubesphere.io/kubesphere/pkg/simple/client/servicemesh"
"kubesphere.io/kubesphere/pkg/utils/reflectutils"
)
func prepare() (*Handler, error) {
var namespaceName = "kubesphere-system"
var serviceAccountName = "kubesphere"
var secretName = "kiali"
clientset := fakek8s.NewSimpleClientset()
ctx := context.Background()
namespacesClient := clientset.CoreV1().Namespaces()
ns := &corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: namespaceName,
},
}
_, err := namespacesClient.Create(ctx, ns, metav1.CreateOptions{})
if err != nil {
klog.Errorf("create namespace failed ")
return nil, err
}
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespaceName,
},
}
object := &corev1.ObjectReference{
Name: secretName,
}
sa := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: serviceAccountName,
Namespace: namespaceName,
},
Secrets: []corev1.ObjectReference{*object},
}
serviceAccountClient := clientset.CoreV1().ServiceAccounts(namespaceName)
_, err = serviceAccountClient.Create(ctx, sa, metav1.CreateOptions{})
if err != nil {
klog.Errorf("create serviceAccount failed ")
return nil, err
}
secretClient := clientset.CoreV1().Secrets(namespaceName)
_, err = secretClient.Create(ctx, secret, metav1.CreateOptions{})
if err != nil {
klog.Errorf("create secret failed ")
return nil, err
}
// mock jaeger server
ts := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
writer.WriteHeader(http.StatusOK)
}))
options := &servicemesh.Options{
IstioPilotHost: "",
KialiQueryHost: "",
JaegerQueryHost: ts.URL,
ServicemeshPrometheusHost: "",
}
handler := NewHandler(options, clientset, nil)
token, _ := json.Marshal(
&kiali.TokenResponse{
Username: "test",
Token: "test",
},
)
mc := &kiali.MockClient{
TokenResult: token,
RequestResult: "fake",
}
client := kiali.NewClient("token", nil, mc, "token", options.KialiQueryHost)
err = reflectutils.SetUnExportedField(handler, "client", client)
if err != nil {
klog.Errorf("apply mock client failed")
return nil, err
}
return handler, nil
}
func TestGetServiceTracing(t *testing.T) {
handler, err := prepare()
if err != nil {
t.Fatalf("init handler failed")
}
namespaceName := "namespace-test"
serviceName := "service-test"
url := fmt.Sprintf("/namespaces/%s/services/%s/traces", namespaceName, serviceName)
request, _ := http.NewRequest("GET", url, nil)
query := request.URL.Query()
query.Add("start", "1650167872000000")
query.Add("end", "1650211072000000")
query.Add("limit", "10")
request.URL.RawQuery = query.Encode()
restfulRequest := restful.NewRequest(request)
pathMap := make(map[string]string)
pathMap["namespace"] = namespaceName
pathMap["service"] = serviceName
if err := reflectutils.SetUnExportedField(restfulRequest, "pathParameters", pathMap); err != nil {
t.Fatalf("set pathParameters failed")
}
recorder := httptest.NewRecorder()
restfulResponse := restful.NewResponse(recorder)
restfulResponse.SetRequestAccepts("application/json")
handler.GetServiceTracing(restfulRequest, restfulResponse)
if status := restfulResponse.StatusCode(); status != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
}
}

View File

@@ -41,6 +41,8 @@ import (
kubesphere "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/informers"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/models/tenant"
servererr "kubesphere.io/kubesphere/pkg/server/errors"
@@ -58,16 +60,16 @@ type tenantHandler struct {
func NewTenantHandler(factory informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface,
evtsClient events.Client, loggingClient logging.Client, auditingclient auditing.Client,
am am.AccessManagementInterface, authorizer authorizer.Authorizer,
am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter,
meteringOptions *meteringclient.Options, stopCh <-chan struct{}) *tenantHandler {
meteringOptions *meteringclient.Options, opClient openpitrix.Interface) *tenantHandler {
if meteringOptions == nil || meteringOptions.RetentionDay == "" {
meteringOptions = &meteringclient.DefaultMeteringOption
}
return &tenantHandler{
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourceGetter, stopCh),
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourceGetter, opClient),
meteringOptions: meteringOptions,
}
}
@@ -200,30 +202,40 @@ func (h *tenantHandler) CreateNamespace(request *restful.Request, response *rest
response.WriteEntity(created)
}
func (h *tenantHandler) CreateWorkspaceTemplate(request *restful.Request, response *restful.Response) {
func (h *tenantHandler) CreateWorkspaceTemplate(req *restful.Request, resp *restful.Response) {
var workspace tenantv1alpha2.WorkspaceTemplate
err := request.ReadEntity(&workspace)
err := req.ReadEntity(&workspace)
if err != nil {
klog.Error(err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
requestUser, ok := request.UserFrom(req.Request.Context())
if !ok {
err := fmt.Errorf("cannot obtain user info")
klog.Errorln(err)
api.HandleForbidden(resp, req, err)
return
}
created, err := h.tenant.CreateWorkspaceTemplate(&workspace)
created, err := h.tenant.CreateWorkspaceTemplate(requestUser, &workspace)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
api.HandleNotFound(response, request, err)
api.HandleNotFound(resp, req, err)
return
}
api.HandleBadRequest(response, request, err)
if errors.IsForbidden(err) {
api.HandleForbidden(resp, req, err)
return
}
api.HandleBadRequest(resp, req, err)
return
}
response.WriteEntity(created)
resp.WriteEntity(created)
}
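Each of the rewritten handlers begins by recovering the authenticated user that the filter chain stored on the request context. A minimal sketch of that extraction, assuming the request package used above exposes UserFrom(ctx) (user.Info, bool) as in k8s.io/apiserver:
// requestUserFrom is a sketch of the extraction used by the handlers above.
func requestUserFrom(req *restful.Request) (user.Info, error) {
	u, ok := request.UserFrom(req.Request.Context())
	if !ok {
		return nil, fmt.Errorf("cannot obtain user info")
	}
	return u, nil
}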
func (h *tenantHandler) DeleteWorkspaceTemplate(request *restful.Request, response *restful.Response) {
@@ -251,42 +263,53 @@ func (h *tenantHandler) DeleteWorkspaceTemplate(request *restful.Request, respon
response.WriteEntity(servererr.None)
}
func (h *tenantHandler) UpdateWorkspaceTemplate(request *restful.Request, response *restful.Response) {
workspaceName := request.PathParameter("workspace")
func (h *tenantHandler) UpdateWorkspaceTemplate(req *restful.Request, resp *restful.Response) {
workspaceName := req.PathParameter("workspace")
var workspace tenantv1alpha2.WorkspaceTemplate
err := request.ReadEntity(&workspace)
err := req.ReadEntity(&workspace)
if err != nil {
klog.Error(err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
if workspaceName != workspace.Name {
err := fmt.Errorf("the name of the object (%s) does not match the name on the URL (%s)", workspace.Name, workspaceName)
klog.Errorf("%+v", err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
updated, err := h.tenant.UpdateWorkspaceTemplate(&workspace)
requestUser, ok := request.UserFrom(req.Request.Context())
if !ok {
err := fmt.Errorf("cannot obtain user info")
klog.Errorln(err)
api.HandleForbidden(resp, req, err)
return
}
updated, err := h.tenant.UpdateWorkspaceTemplate(requestUser, &workspace)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
api.HandleNotFound(response, request, err)
api.HandleNotFound(resp, req, err)
return
}
if errors.IsBadRequest(err) {
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
api.HandleInternalError(response, request, err)
if errors.IsForbidden(err) {
api.HandleForbidden(resp, req, err)
return
}
api.HandleInternalError(resp, req, err)
return
}
response.WriteEntity(updated)
resp.WriteEntity(updated)
}
func (h *tenantHandler) DescribeWorkspaceTemplate(request *restful.Request, response *restful.Response) {
@@ -518,33 +541,44 @@ func (h *tenantHandler) PatchNamespace(request *restful.Request, response *restf
response.WriteEntity(patched)
}
func (h *tenantHandler) PatchWorkspaceTemplate(request *restful.Request, response *restful.Response) {
workspaceName := request.PathParameter("workspace")
func (h *tenantHandler) PatchWorkspaceTemplate(req *restful.Request, resp *restful.Response) {
workspaceName := req.PathParameter("workspace")
var data json.RawMessage
err := request.ReadEntity(&data)
err := req.ReadEntity(&data)
if err != nil {
klog.Error(err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
patched, err := h.tenant.PatchWorkspaceTemplate(workspaceName, data)
requestUser, ok := request.UserFrom(req.Request.Context())
if !ok {
err := fmt.Errorf("cannot obtain user info")
klog.Errorln(err)
api.HandleForbidden(resp, req, err)
return
}
patched, err := h.tenant.PatchWorkspaceTemplate(requestUser, workspaceName, data)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
api.HandleNotFound(response, request, err)
api.HandleNotFound(resp, req, err)
return
}
if errors.IsBadRequest(err) {
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
api.HandleInternalError(response, request, err)
if errors.IsForbidden(err) {
api.HandleForbidden(resp, req, err)
return
}
api.HandleInternalError(resp, req, err)
return
}
response.WriteEntity(patched)
resp.WriteEntity(patched)
}
func (h *tenantHandler) ListClusters(r *restful.Request, response *restful.Response) {
@@ -555,8 +589,8 @@ func (h *tenantHandler) ListClusters(r *restful.Request, response *restful.Respo
return
}
result, err := h.tenant.ListClusters(user)
queryParam := query.ParseQueryParameter(r)
result, err := h.tenant.ListClusters(user, queryParam)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {

View File

@@ -40,8 +40,10 @@ import (
"kubesphere.io/kubesphere/pkg/informers"
"kubesphere.io/kubesphere/pkg/models"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/metering"
"kubesphere.io/kubesphere/pkg/models/monitoring"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/server/errors"
"kubesphere.io/kubesphere/pkg/simple/client/auditing"
@@ -63,12 +65,12 @@ func Resource(resource string) schema.GroupResource {
func AddToContainer(c *restful.Container, factory informers.InformerFactory, k8sclient kubernetes.Interface,
ksclient kubesphere.Interface, evtsClient events.Client, loggingClient logging.Client,
auditingclient auditing.Client, am am.AccessManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, stopCh <-chan struct{}) error {
auditingclient auditing.Client, am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, opClient openpitrix.Interface) error {
mimePatch := []string{restful.MIME_JSON, runtime.MimeMergePatchJson, runtime.MimeJsonPatchJson}
ws := runtime.NewWebService(GroupVersion)
handler := NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, stopCh)
handler := NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient)
ws.Route(ws.GET("/clusters").
To(handler.ListClusters).

View File

@@ -20,6 +20,7 @@ import (
"fmt"
"github.com/emicklei/go-restful"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/kubernetes"
"k8s.io/klog"
@@ -30,6 +31,8 @@ import (
kubesphere "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/informers"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/models/tenant"
"kubesphere.io/kubesphere/pkg/simple/client/auditing"
@@ -46,16 +49,16 @@ type tenantHandler struct {
func newTenantHandler(factory informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface,
evtsClient events.Client, loggingClient logging.Client, auditingclient auditing.Client,
am am.AccessManagementInterface, authorizer authorizer.Authorizer,
am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter,
meteringOptions *meteringclient.Options, stopCh <-chan struct{}) *tenantHandler {
meteringOptions *meteringclient.Options, opClient openpitrix.Interface) *tenantHandler {
if meteringOptions == nil || meteringOptions.RetentionDay == "" {
meteringOptions = &meteringclient.DefaultMeteringOption
}
return &tenantHandler{
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourceGetter, stopCh),
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourceGetter, opClient),
meteringOptions: meteringOptions,
}
}
@@ -78,3 +81,18 @@ func (h *tenantHandler) ListWorkspaces(req *restful.Request, resp *restful.Respo
resp.WriteEntity(result)
}
func (h *tenantHandler) GetWorkspace(request *restful.Request, response *restful.Response) {
workspace, err := h.tenant.GetWorkspace(request.PathParameter("workspace"))
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
api.HandleNotFound(response, request, err)
return
}
api.HandleInternalError(response, request, err)
return
}
response.WriteEntity(workspace)
}

View File

@@ -23,6 +23,7 @@ import (
restfulspec "github.com/emicklei/go-restful-openapi"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/client-go/kubernetes"
tenantv1alpha1 "kubesphere.io/api/tenant/v1alpha1"
"sigs.k8s.io/controller-runtime/pkg/cache"
tenantv1alpha2 "kubesphere.io/api/tenant/v1alpha2"
@@ -36,6 +37,8 @@ import (
"kubesphere.io/kubesphere/pkg/kapis/tenant/v1alpha2"
"kubesphere.io/kubesphere/pkg/models"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/server/errors"
"kubesphere.io/kubesphere/pkg/simple/client/auditing"
@@ -57,13 +60,13 @@ func Resource(resource string) schema.GroupResource {
func AddToContainer(c *restful.Container, factory informers.InformerFactory, k8sclient kubernetes.Interface,
ksclient kubesphere.Interface, evtsClient events.Client, loggingClient logging.Client,
auditingclient auditing.Client, am am.AccessManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, stopCh <-chan struct{}) error {
auditingclient auditing.Client, am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, opClient openpitrix.Interface) error {
mimePatch := []string{restful.MIME_JSON, runtime.MimeMergePatchJson, runtime.MimeJsonPatchJson}
ws := runtime.NewWebService(GroupVersion)
v1alpha2Handler := v1alpha2.NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, stopCh)
handler := newTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, stopCh)
v1alpha2Handler := v1alpha2.NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient)
handler := newTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient)
ws.Route(ws.POST("/workspacetemplates").
To(v1alpha2Handler.CreateWorkspaceTemplate).
@@ -115,6 +118,13 @@ func AddToContainer(c *restful.Container, factory informers.InformerFactory, k8s
Doc("List all workspaces that belongs to the current user").
Metadata(restfulspec.KeyOpenAPITags, []string{constants.WorkspaceTag}))
ws.Route(ws.GET("/workspaces/{workspace}").
To(handler.GetWorkspace).
Param(ws.PathParameter("workspace", "workspace name")).
Returns(http.StatusOK, api.StatusOK, tenantv1alpha1.Workspace{}).
Doc("Get workspace.").
Metadata(restfulspec.KeyOpenAPITags, []string{constants.WorkspaceTag}))
c.Add(ws)
return nil
}

View File

@@ -91,6 +91,10 @@ func (o *oauthAuthenticator) Authenticate(_ context.Context, provider string, re
}
if user != nil {
if user.Status.State == iamv1alpha2.UserDisabled {
// state not active
return nil, "", AccountIsNotActiveError
}
return &authuser.DefaultInfo{Name: user.GetName()}, providerOptions.Name, nil
}

View File

@@ -53,6 +53,11 @@ func Test_oauthAuthenticator_Authenticate(t *testing.T) {
"email": "user1@kubesphere.io",
"username": "user1",
},
"code2": map[string]string{
"uid": "100002",
"email": "user2@kubesphere.io",
"username": "user2",
},
},
},
},
@@ -67,8 +72,14 @@ func Test_oauthAuthenticator_Authenticate(t *testing.T) {
ksClient := fakeks.NewSimpleClientset()
ksInformerFactory := ksinformers.NewSharedInformerFactory(ksClient, 0)
err := ksInformerFactory.Iam().V1alpha2().Users().Informer().GetIndexer().Add(newUser("user1", "100001", "fake"))
if err != nil {
if err := ksInformerFactory.Iam().V1alpha2().Users().Informer().GetIndexer().Add(newUser("user1", "100001", "fake")); err != nil {
t.Fatal(err)
}
blockedUser := newUser("user2", "100002", "fake")
blockedUser.Status = iamv1alpha2.UserStatus{State: iamv1alpha2.UserDisabled}
if err := ksInformerFactory.Iam().V1alpha2().Users().Informer().GetIndexer().Add(blockedUser); err != nil {
t.Fatal(err)
}
@@ -103,6 +114,22 @@ func Test_oauthAuthenticator_Authenticate(t *testing.T) {
provider: "fake",
wantErr: false,
},
{
name: "Blocked user test",
oauthAuthenticator: NewOAuthAuthenticator(
nil,
ksInformerFactory.Iam().V1alpha2().Users().Lister(),
oauthOptions,
),
args: args{
ctx: context.Background(),
provider: "fake",
req: must(http.NewRequest(http.MethodGet, "https://ks-console.kubesphere.io/oauth/callback/test?code=code2&state=100002", nil)),
},
userInfo: nil,
provider: "",
wantErr: true,
},
{
name: "Should successfully",
oauthAuthenticator: NewOAuthAuthenticator(

View File

@@ -47,12 +47,13 @@ import (
)
const (
MasterLabel = "node-role.kubernetes.io/master"
SidecarInject = "sidecar.istio.io/inject"
gatewayPrefix = "kubesphere-router-"
workingNamespace = "kubesphere-controls-system"
globalGatewayname = gatewayPrefix + "kubesphere-system"
helmPatch = `{"metadata":{"annotations":{"meta.helm.sh/release-name":"%s-ingress","meta.helm.sh/release-namespace":"%s"},"labels":{"helm.sh/chart":"ingress-nginx-3.35.0","app.kubernetes.io/managed-by":"Helm","app":null,"component":null,"tier":null}},"spec":{"selector":null}}`
MasterLabel = "node-role.kubernetes.io/master"
SidecarInject = "sidecar.istio.io/inject"
gatewayPrefix = "kubesphere-router-"
workingNamespace = "kubesphere-controls-system"
globalGatewayNameSuffix = "kubesphere-system"
globalGatewayName = gatewayPrefix + globalGatewayNameSuffix
helmPatch = `{"metadata":{"annotations":{"meta.helm.sh/release-name":"%s-ingress","meta.helm.sh/release-namespace":"%s"},"labels":{"helm.sh/chart":"ingress-nginx-3.35.0","app.kubernetes.io/managed-by":"Helm","app":null,"component":null,"tier":null}},"spec":{"selector":null}}`
)
type GatewayOperator interface {
@@ -62,7 +63,7 @@ type GatewayOperator interface {
UpdateGateway(namespace string, obj *v1alpha1.Gateway) (*v1alpha1.Gateway, error)
UpgradeGateway(namespace string) (*v1alpha1.Gateway, error)
ListGateways(query *query.Query) (*api.ListResult, error)
GetPods(namesapce string, query *query.Query) (*api.ListResult, error)
GetPods(namespace string, query *query.Query) (*api.ListResult, error)
GetPodLogs(ctx context.Context, namespace string, podName string, logOptions *corev1.PodLogOptions, responseWriter io.Writer) error
}
@@ -86,19 +87,23 @@ func NewGatewayOperator(client client.Client, cache cache.Cache, options *gatewa
func (c *gatewayOperator) getWorkingNamespace(namespace string) string {
ns := c.options.Namespace
// Set the working namespace to watching namespace when the Gatway's Namsapce Option is empty
// Set the working namespace to watching namespace when the Gateway's Namespace Option is empty
if ns == "" {
ns = namespace
}
// Convert the global gateway query parameter
if namespace == globalGatewayNameSuffix {
ns = workingNamespace
}
return ns
}
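
To make the resolution rules above easier to scan, here is a small standalone re-sketch under the same constants; the helper name is hypothetical and not part of the operator:

// resolveWorkingNamespace mirrors getWorkingNamespace above (sketch only).
func resolveWorkingNamespace(optionsNamespace, requestNamespace string) string {
	ns := optionsNamespace
	if ns == "" {
		// empty Namespace option: the gateway lives next to the project it serves
		ns = requestNamespace
	}
	if requestNamespace == globalGatewayNameSuffix {
		// a query for "kubesphere-system" always targets the global gateway,
		// which lives in workingNamespace ("kubesphere-controls-system")
		ns = workingNamespace
	}
	return ns
}
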
// overide user's setting when create/update a project gateway.
func (c *gatewayOperator) overideDefaultValue(gateway *v1alpha1.Gateway, namespace string) *v1alpha1.Gateway {
// overide default name
// override user's settings when creating/updating a project gateway.
func (c *gatewayOperator) overrideDefaultValue(gateway *v1alpha1.Gateway, namespace string) *v1alpha1.Gateway {
// override default name
gateway.Name = fmt.Sprint(gatewayPrefix, namespace)
if gateway.Name != globalGatewayname {
gateway.Spec.Conroller.Scope = v1alpha1.Scope{Enabled: true, Namespace: namespace}
if gateway.Name != globalGatewayName {
gateway.Spec.Controller.Scope = v1alpha1.Scope{Enabled: true, Namespace: namespace}
}
gateway.Namespace = c.getWorkingNamespace(namespace)
return gateway
@@ -108,7 +113,7 @@ func (c *gatewayOperator) overideDefaultValue(gateway *v1alpha1.Gateway, namespa
func (c *gatewayOperator) getGlobalGateway() *v1alpha1.Gateway {
globalkey := types.NamespacedName{
Namespace: workingNamespace,
Name: globalGatewayname,
Name: globalGatewayName,
}
global := &v1alpha1.Gateway{}
@@ -155,7 +160,7 @@ func (c *gatewayOperator) convert(namespace string, svc *corev1.Service, deploy
Namespace: svc.Namespace,
},
Spec: v1alpha1.GatewaySpec{
Conroller: v1alpha1.ControllerSpec{
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: namespace,
@@ -174,6 +179,9 @@ func (c *gatewayOperator) convert(namespace string, svc *corev1.Service, deploy
legacy.Spec.Deployment.Annotations = make(map[string]string)
legacy.Spec.Deployment.Annotations[SidecarInject] = an
}
if len(deploy.Spec.Template.Spec.Containers) > 0 {
legacy.Spec.Deployment.Resources = deploy.Spec.Template.Spec.Containers[0].Resources
}
return &legacy
}
@@ -201,7 +209,7 @@ func (c *gatewayOperator) getMasterNodeIp() []string {
}
func (c *gatewayOperator) updateStatus(gateway *v1alpha1.Gateway, svc *corev1.Service) (*v1alpha1.Gateway, error) {
// append selected node ip as loadbalancer ingress ip
// append selected node ip as loadBalancer ingress ip
if svc.Spec.Type != corev1.ServiceTypeLoadBalancer && len(svc.Status.LoadBalancer.Ingress) == 0 {
rips := c.getMasterNodeIp()
for _, rip := range rips {
@@ -240,8 +248,8 @@ func (c *gatewayOperator) updateStatus(gateway *v1alpha1.Gateway, svc *corev1.Se
return gateway, nil
}
// GetGateways returns all Gateways from the project. There are at most 2 gatways exists in a project,
// a Glabal Gateway and a Project Gateway or a Legacy Project Gateway.
// GetGateways returns all Gateways from the project. At most 2 gateways exist in a project:
// a Global Gateway, and a Project Gateway or a Legacy Project Gateway.
func (c *gatewayOperator) GetGateways(namespace string) ([]*v1alpha1.Gateway, error) {
var gateways []*v1alpha1.Gateway
@@ -295,7 +303,7 @@ func (c *gatewayOperator) CreateGateway(namespace string, obj *v1alpha1.Gateway)
return nil, fmt.Errorf("can't create project gateway if legacy gateway exists, please upgrade the gateway firstly")
}
c.overideDefaultValue(obj, namespace)
c.overrideDefaultValue(obj, namespace)
err := c.client.Create(context.TODO(), obj)
return obj, err
}
@@ -314,9 +322,9 @@ func (c *gatewayOperator) DeleteGateway(namespace string) error {
// Update Gateway
func (c *gatewayOperator) UpdateGateway(namespace string, obj *v1alpha1.Gateway) (*v1alpha1.Gateway, error) {
if c.options.Namespace == "" && obj.Namespace != namespace || c.options.Namespace != "" && c.options.Namespace != obj.Namespace {
return nil, fmt.Errorf("namepsace doesn't match with origin namesapce")
return nil, fmt.Errorf("namespace doesn't match with origin namespace")
}
c.overideDefaultValue(obj, namespace)
c.overrideDefaultValue(obj, namespace)
err := c.client.Update(context.TODO(), obj)
return obj, err
}
@@ -328,21 +336,21 @@ func (c *gatewayOperator) UpgradeGateway(namespace string) (*v1alpha1.Gateway, e
if l == nil {
return nil, fmt.Errorf("invalid operation, no legacy gateway was found")
}
if l.Namespace != c.options.Namespace {
if l.Namespace != c.getWorkingNamespace(namespace) {
return nil, fmt.Errorf("invalid operation, can't upgrade legacy gateway when working namespace changed")
}
// Get legency gateway's config from configmap
// Get legacy gateway's config from configmap
cm := &corev1.ConfigMap{}
err := c.client.Get(context.TODO(), client.ObjectKey{Namespace: l.Namespace, Name: fmt.Sprintf("%s-nginx", l.Name)}, cm)
if err == nil {
l.Spec.Conroller.Config = cm.Data
l.Spec.Controller.Config = cm.Data
defer func() {
c.client.Delete(context.TODO(), cm)
}()
}
// Delete old deployment, because it's not compatile with the deployment in the helm chart.
// Delete old deployment, because it's not compatible with the deployment in the helm chart.
// We can't defer here, there's a potential race condition causing the gateway operator to fail.
d := &appsv1.Deployment{
ObjectMeta: v1.ObjectMeta{
@@ -355,7 +363,7 @@ func (c *gatewayOperator) UpgradeGateway(namespace string) (*v1alpha1.Gateway, e
return nil, err
}
// Patch the legacy Serivce with helm annotations, So that it can be mannaged by the helm release.
// Patch the legacy Service with helm annotations, so that it can be managed by the helm release.
patch := []byte(fmt.Sprintf(helmPatch, l.Name, l.Namespace))
err = c.client.Patch(context.Background(), &corev1.Service{
ObjectMeta: v1.ObjectMeta{
@@ -368,7 +376,7 @@ func (c *gatewayOperator) UpgradeGateway(namespace string) (*v1alpha1.Gateway, e
return nil, err
}
c.overideDefaultValue(l, namespace)
c.overrideDefaultValue(l, namespace)
err = c.client.Create(context.TODO(), l)
return l, err
}
@@ -448,7 +456,7 @@ func (c *gatewayOperator) compare(left runtime.Object, right runtime.Object, fie
func (c *gatewayOperator) filter(object runtime.Object, filter query.Filter) bool {
var objMeta v1.ObjectMeta
var namesapce string
var namespace string
gateway, ok := object.(*v1alpha1.Gateway)
if !ok {
@@ -456,31 +464,31 @@ func (c *gatewayOperator) filter(object runtime.Object, filter query.Filter) boo
if !ok {
return false
}
namesapce = svc.Labels["project"]
namespace = svc.Labels["project"]
objMeta = svc.ObjectMeta
} else {
namesapce = gateway.Spec.Conroller.Scope.Namespace
namespace = gateway.Spec.Controller.Scope.Namespace
objMeta = gateway.ObjectMeta
}
switch filter.Field {
case query.FieldNamespace:
return strings.Compare(namesapce, string(filter.Value)) == 0
return strings.Compare(namespace, string(filter.Value)) == 0
default:
return v1alpha3.DefaultObjectMetaFilter(objMeta, filter)
}
}
func (c *gatewayOperator) GetPods(namesapce string, query *query.Query) (*api.ListResult, error) {
func (c *gatewayOperator) GetPods(namespace string, query *query.Query) (*api.ListResult, error) {
podGetter := pod.New(c.factory.KubernetesSharedInformerFactory())
//TODO: move the selector string to options
selector, err := labels.Parse(fmt.Sprintf("app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/instance=kubesphere-router-%s-ingress", namesapce))
selector, err := labels.Parse(fmt.Sprintf("app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/instance=kubesphere-router-%s-ingress", namespace))
if err != nil {
return nil, fmt.Errorf("invild selector config")
return nil, fmt.Errorf("invaild selector config")
}
query.LabelSelector = selector.String()
return podGetter.List(c.getWorkingNamespace(namesapce), query)
return podGetter.List(c.getWorkingNamespace(namespace), query)
}
func (c *gatewayOperator) GetPodLogs(ctx context.Context, namespace string, podName string, logOptions *corev1.PodLogOptions, responseWriter io.Writer) error {

View File

@@ -92,7 +92,7 @@ func Test_gatewayOperator_GetGateways(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
},
},
{
@@ -105,7 +105,7 @@ func Test_gatewayOperator_GetGateways(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
},
},
{
@@ -169,7 +169,7 @@ func Test_gatewayOperator_GetGateways(t *testing.T) {
Namespace: "kubesphere-controls-system",
},
Spec: v1alpha1.GatewaySpec{
Conroller: v1alpha1.ControllerSpec{
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "project6",
@@ -336,17 +336,17 @@ func Test_gatewayOperator_CreateGateway(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
obj: &v1alpha1.Gateway{
TypeMeta: v1.TypeMeta{
Kind: "Gateway",
APIVersion: "gateway.kubesphere.io/v1alpha1",
},
Spec: v1alpha1.GatewaySpec{
Conroller: v1alpha1.ControllerSpec{
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "projct1",
Namespace: "project1",
},
},
},
@@ -367,17 +367,17 @@ func Test_gatewayOperator_CreateGateway(t *testing.T) {
},
},
args: args{
namespace: "projct2",
namespace: "project2",
obj: &v1alpha1.Gateway{
TypeMeta: v1.TypeMeta{
Kind: "Gateway",
APIVersion: "gateway.kubesphere.io/v1alpha1",
},
Spec: v1alpha1.GatewaySpec{
Conroller: v1alpha1.ControllerSpec{
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "projct2",
Namespace: "project2",
},
},
},
@@ -506,7 +506,7 @@ func Test_gatewayOperator_UpdateGateway(t *testing.T) {
ResourceVersion: "1",
},
Spec: v1alpha1.GatewaySpec{
Conroller: v1alpha1.ControllerSpec{
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "project3",
@@ -593,7 +593,7 @@ func Test_gatewayOperator_UpgradeGateway(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
},
wantErr: true,
},
@@ -615,7 +615,7 @@ func Test_gatewayOperator_UpgradeGateway(t *testing.T) {
ResourceVersion: "1",
},
Spec: v1alpha1.GatewaySpec{
Conroller: v1alpha1.ControllerSpec{
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "project2",
@@ -691,7 +691,7 @@ func Test_gatewayOperator_ListGateways(t *testing.T) {
Namespace: "kubesphere-controls-system",
},
Spec: v1alpha1.GatewaySpec{
Conroller: v1alpha1.ControllerSpec{
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "project2",

View File

@@ -220,7 +220,7 @@ func (o *operator) createCSR(username string) error {
}
var csrBuffer, keyBuffer bytes.Buffer
if err = pem.Encode(&keyBuffer, &pem.Block{Type: "PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(x509key)}); err != nil {
if err = pem.Encode(&keyBuffer, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(x509key)}); err != nil {
klog.Errorln(err)
return err
}
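
The header change matters because the two PEM labels imply different encodings: x509.MarshalPKCS1PrivateKey produces PKCS#1 bytes, which standard parsers (openssl, crypto/x509) only accept under "RSA PRIVATE KEY", while the generic "PRIVATE KEY" label is reserved for PKCS#8. A minimal sketch of the convention:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PKCS#1 body: must be labeled "RSA PRIVATE KEY"
	pkcs1 := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	// PKCS#8 body: carries the generic "PRIVATE KEY" label
	der, err := x509.MarshalPKCS8PrivateKey(key)
	if err != nil {
		panic(err)
	}
	pkcs8 := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: der})
	fmt.Printf("%s%s", pkcs1, pkcs8)
}
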

View File

@@ -21,6 +21,8 @@ import (
"encoding/base64"
"testing"
"kubesphere.io/kubesphere/pkg/utils/reposcache"
"github.com/go-openapi/strfmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
@@ -172,5 +174,5 @@ func prepareAppOperator() ApplicationInterface {
k8sClient = fakek8s.NewSimpleClientset()
fakeInformerFactory = informers.NewInformerFactories(k8sClient, ksClient, nil, nil, nil, nil)
return newApplicationOperator(cachedReposData, fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient, fake.NewFakeS3())
return newApplicationOperator(reposcache.NewReposCache(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient, fake.NewFakeS3())
}

View File

@@ -17,11 +17,11 @@ limitations under the License.
package openpitrix
import (
"sync"
"k8s.io/client-go/tools/cache"
"k8s.io/klog"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
"kubesphere.io/api/application/v1alpha1"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
@@ -46,51 +46,42 @@ type openpitrixOperator struct {
CategoryInterface
}
var cachedReposData reposcache.ReposCache
var helmReposInformer cache.SharedIndexInformer
var once sync.Once
func init() {
cachedReposData = reposcache.NewReposCache()
}
func NewOpenpitrixOperator(ksInformers ks_informers.InformerFactory, ksClient versioned.Interface, s3Client s3.Interface, stopCh <-chan struct{}) Interface {
once.Do(func() {
klog.Infof("start helm repo informer")
helmReposInformer = ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmRepos().Informer()
helmReposInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.AddRepo(r)
},
UpdateFunc: func(oldObj, newObj interface{}) {
oldRepo := oldObj.(*v1alpha1.HelmRepo)
newRepo := newObj.(*v1alpha1.HelmRepo)
cachedReposData.UpdateRepo(oldRepo, newRepo)
},
DeleteFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.DeleteRepo(r)
},
})
ctgInformer := ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmCategories().Informer()
ctgInformer.AddIndexers(map[string]cache.IndexFunc{
reposcache.CategoryIndexer: func(obj interface{}) ([]string, error) {
ctg, _ := obj.(*v1alpha1.HelmCategory)
return []string{ctg.Spec.Name}, nil
},
})
indexer := ctgInformer.GetIndexer()
cachedReposData.SetCategoryIndexer(indexer)
func NewOpenpitrixOperator(ksInformers ks_informers.InformerFactory, ksClient versioned.Interface, s3Client s3.Interface, cc clusterclient.ClusterClients) Interface {
klog.Infof("start helm repo informer")
cachedReposData := reposcache.NewReposCache()
helmReposInformer := ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmRepos().Informer()
helmReposInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.AddRepo(r)
},
UpdateFunc: func(oldObj, newObj interface{}) {
oldRepo := oldObj.(*v1alpha1.HelmRepo)
newRepo := newObj.(*v1alpha1.HelmRepo)
cachedReposData.UpdateRepo(oldRepo, newRepo)
},
DeleteFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.DeleteRepo(r)
},
})
ctgInformer := ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmCategories().Informer()
ctgInformer.AddIndexers(map[string]cache.IndexFunc{
reposcache.CategoryIndexer: func(obj interface{}) ([]string, error) {
ctg, _ := obj.(*v1alpha1.HelmCategory)
return []string{ctg.Spec.Name}, nil
},
})
indexer := ctgInformer.GetIndexer()
cachedReposData.SetCategoryIndexer(indexer)
return &openpitrixOperator{
AttachmentInterface: newAttachmentOperator(s3Client),
ApplicationInterface: newApplicationOperator(cachedReposData, ksInformers.KubeSphereSharedInformerFactory(), ksClient, s3Client),
RepoInterface: newRepoOperator(cachedReposData, ksInformers.KubeSphereSharedInformerFactory(), ksClient),
ReleaseInterface: newReleaseOperator(cachedReposData, ksInformers.KubernetesSharedInformerFactory(), ksInformers.KubeSphereSharedInformerFactory(), ksClient),
ReleaseInterface: newReleaseOperator(cachedReposData, ksInformers.KubernetesSharedInformerFactory(), ksInformers.KubeSphereSharedInformerFactory(), ksClient, cc),
CategoryInterface: newCategoryOperator(cachedReposData, ksInformers.KubeSphereSharedInformerFactory(), ksClient),
}
}
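
The refactor above drops the package-level sync.Once and globals, so each NewOpenpitrixOperator call builds a fresh repo cache and informer wiring. The old pattern went stale after a live reload because once.Do never runs a second time; a minimal illustration with hypothetical names:

package main

import (
	"fmt"
	"sync"
)

var (
	once   sync.Once
	cached string
)

func build(input string) string {
	once.Do(func() { cached = input }) // runs only for the very first caller
	return cached
}

func main() {
	fmt.Println(build("old-config")) // old-config
	fmt.Println(build("new-config")) // still old-config: the reload is ignored
}
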

View File

@@ -70,13 +70,13 @@ type releaseOperator struct {
clusterClients clusterclient.ClusterClients
}
func newReleaseOperator(cached reposcache.ReposCache, k8sFactory informers.SharedInformerFactory, ksFactory externalversions.SharedInformerFactory, ksClient versioned.Interface) ReleaseInterface {
func newReleaseOperator(cached reposcache.ReposCache, k8sFactory informers.SharedInformerFactory, ksFactory externalversions.SharedInformerFactory, ksClient versioned.Interface, cc clusterclient.ClusterClients) ReleaseInterface {
c := &releaseOperator{
informers: k8sFactory,
rlsClient: ksClient.ApplicationV1alpha1().HelmReleases(),
rlsLister: ksFactory.Application().V1alpha1().HelmReleases().Lister(),
cachedRepos: cached,
clusterClients: clusterclient.NewClusterClient(ksFactory.Cluster().V1alpha1().Clusters()),
clusterClients: cc,
appVersionLister: ksFactory.Application().V1alpha1().HelmApplicationVersions().Lister(),
}

View File

@@ -21,6 +21,8 @@ import (
"encoding/base64"
"testing"
"kubesphere.io/kubesphere/pkg/utils/reposcache"
"github.com/go-openapi/strfmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/klog"
@@ -69,7 +71,7 @@ func TestOpenPitrixRelease(t *testing.T) {
}
}
rlsOperator := newReleaseOperator(cachedReposData, fakeInformerFactory.KubernetesSharedInformerFactory(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient)
rlsOperator := newReleaseOperator(reposcache.NewReposCache(), fakeInformerFactory.KubernetesSharedInformerFactory(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient, nil)
req := CreateClusterRequest{
Name: "test-rls",

View File

@@ -20,6 +20,8 @@ import (
"context"
"testing"
"kubesphere.io/kubesphere/pkg/utils/reposcache"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
fakek8s "k8s.io/client-go/kubernetes/fake"
"k8s.io/klog"
@@ -111,5 +113,5 @@ func prepareRepoOperator() RepoInterface {
k8sClient = fakek8s.NewSimpleClientset()
fakeInformerFactory = informers.NewInformerFactories(k8sClient, ksClient, nil, nil, nil, nil)
return newRepoOperator(cachedReposData, fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient)
return newRepoOperator(reposcache.NewReposCache(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient)
}

View File

@@ -302,7 +302,7 @@ func (c *repoOperator) ListRepos(conditions *params.Conditions, orderBy string,
start, end := (&query.Pagination{Limit: limit, Offset: offset}).GetValidPagination(totalCount)
repos = repos[start:end]
items := make([]interface{}, 0, len(repos))
for i, j := offset, 0; i < len(repos) && j < limit; i, j = i+1, j+1 {
for i := range repos {
items = append(items, convertRepo(repos[i]))
}
return &models.PageableResponse{Items: items, TotalCount: totalCount}, nil
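
The old loop indexed the page with the absolute offset even though repos had already been sliced down to the requested page, so any offset greater than zero skipped items and could yield an empty page. After the slice, a plain range is the correct iteration. A small sketch, assuming GetValidPagination clamps the window to the total count:

repos := make([]*v1alpha1.HelmRepo, 25)
start, end := (&query.Pagination{Limit: 10, Offset: 20}).GetValidPagination(len(repos))
page := repos[start:end] // items 20..24: the last, short page
items := make([]interface{}, 0, len(page))
for i := range page {
	items = append(items, page[i]) // convertRepo(page[i]) in the real code
}
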

View File

@@ -280,6 +280,9 @@ type AppVersionReview struct {
// version type
VersionType string `json:"version_type,omitempty"`
// Workspace of the app version
Workspace string `json:"workspace,omitempty"`
}
type CreateAppRequest struct {
@@ -710,7 +713,7 @@ type Repo struct {
// selectors
Selectors RepoSelectors `json:"selectors"`
// status eg.[active|deleted]
// status eg.[successful|failed|syncing]
Status string `json:"status,omitempty"`
// record status changed time

View File

@@ -399,6 +399,7 @@ func convertAppVersion(in *v1alpha1.HelmApplicationVersion) *AppVersion {
if in.Spec.Metadata != nil {
out.Description = in.Spec.Description
out.Icon = in.Spec.Icon
out.Home = in.Spec.Home
}
// The field Maintainers and Sources were a string field, so I encode the helm field's maintainers and sources,
@@ -431,6 +432,10 @@ func convertRepo(in *v1alpha1.HelmRepo) *Repo {
out.Name = in.GetTrueName()
out.Status = in.Status.State
// set default status `syncing` when helmrepo not reconcile yet
if out.Status == "" {
out.Status = v1alpha1.RepoStateSyncing
}
date := strfmt.DateTime(time.Unix(in.CreationTimestamp.Unix(), 0))
out.CreateTime = &date
@@ -817,6 +822,7 @@ func convertAppVersionReview(app *v1alpha1.HelmApplication, appVersion *v1alpha1
review.VersionID = appVersion.GetHelmApplicationVersionId()
review.Phase = AppVersionReviewPhaseOAIGen{}
review.VersionName = appVersion.GetVersionName()
review.Workspace = appVersion.GetWorkspace()
review.StatusTime = strfmt.DateTime(status.Audit[0].Time.Time)
review.AppName = app.GetTrueName()

View File

@@ -0,0 +1,240 @@
package persistentvolumeclaim
import (
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes/fake"
"github.com/google/go-cmp/cmp"
snapshot "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
snapshotefakeclient "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned/fake"
snapshotinformers "github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha2"
"kubesphere.io/kubesphere/pkg/server/params"
)
var (
testStorageClassName = "sc1"
)
var (
pvc1 = &corev1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-1",
Namespace: "default",
Annotations: map[string]string{
"kubesphere.io/in-use": "false",
"kubesphere.io/allow-snapshot": "false",
},
},
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
Status: corev1.PersistentVolumeClaimStatus{
Phase: corev1.ClaimPending,
},
}
pvc2 = &corev1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-2",
Namespace: "default",
Annotations: map[string]string{
"kubesphere.io/in-use": "false",
"kubesphere.io/allow-snapshot": "false",
},
},
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
Status: corev1.PersistentVolumeClaimStatus{
Phase: corev1.ClaimLost,
},
}
pvc3 = &corev1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-3",
Namespace: "default",
Annotations: map[string]string{
"kubesphere.io/in-use": "true",
"kubesphere.io/allow-snapshot": "false",
},
},
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
Status: corev1.PersistentVolumeClaimStatus{
Phase: corev1.ClaimBound,
},
}
pod1 = &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-1",
Namespace: "default",
},
Spec: corev1.PodSpec{
Volumes: []corev1.Volume{
{
Name: "data",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: pvc3.Name,
},
},
},
},
},
}
vsc1 = &snapshot.VolumeSnapshotClass{
ObjectMeta: metav1.ObjectMeta{
Name: "VolumeSnapshotClass-1",
Namespace: "default",
},
Driver: testStorageClassName,
}
persistentVolumeClaims = []interface{}{pvc1, pvc2, pvc3}
pods = []interface{}{pod1}
volumeSnapshotClasses = []interface{}{vsc1}
)
func prepare() (v1alpha2.Interface, error) {
client := fake.NewSimpleClientset()
informer := informers.NewSharedInformerFactory(client, 0)
snapshotClient := snapshotefakeclient.NewSimpleClientset()
snapshotInformers := snapshotinformers.NewSharedInformerFactory(snapshotClient, 0)
for _, persistentVolumeClaim := range persistentVolumeClaims {
err := informer.Core().V1().PersistentVolumeClaims().Informer().GetIndexer().Add(persistentVolumeClaim)
if err != nil {
return nil, err
}
}
for _, pod := range pods {
err := informer.Core().V1().Pods().Informer().GetIndexer().Add(pod)
if err != nil {
return nil, err
}
}
for _, volumeSnapshotClass := range volumeSnapshotClasses {
err := snapshotInformers.Snapshot().V1().VolumeSnapshotClasses().Informer().GetIndexer().Add(volumeSnapshotClass)
if err != nil {
return nil, err
}
}
return NewPersistentVolumeClaimSearcher(informer, snapshotInformers), nil
}
func TestGet(t *testing.T) {
tests := []struct {
Namespace string
Name string
Expected interface{}
ExpectedErr error
}{
{
"default",
"pvc-1",
pvc1,
nil,
},
}
getter, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
got, err := getter.Get(test.Namespace, test.Name)
if test.ExpectedErr != nil && err != test.ExpectedErr {
t.Errorf("expected error, got nothing")
} else if err != nil {
t.Fatal(err)
}
diff := cmp.Diff(got, test.Expected)
if diff != "" {
t.Errorf("%T differ (-got, +want): %s", test.Expected, diff)
}
}
}
func TestSearch(t *testing.T) {
tests := []struct {
Namespace string
Conditions *params.Conditions
OrderBy string
Reverse bool
Expected []interface{}
ExpectedErr error
}{
{
Namespace: "default",
Conditions: &params.Conditions{
Match: map[string]string{
v1alpha2.Status: v1alpha2.StatusPending,
},
Fuzzy: nil,
},
OrderBy: "name",
Reverse: false,
Expected: []interface{}{pvc1},
ExpectedErr: nil,
},
{
Namespace: "default",
Conditions: &params.Conditions{
Match: map[string]string{
v1alpha2.Status: v1alpha2.StatusLost,
},
Fuzzy: nil,
},
OrderBy: "name",
Reverse: false,
Expected: []interface{}{pvc2},
ExpectedErr: nil,
},
{
Namespace: "default",
Conditions: &params.Conditions{
Match: map[string]string{
v1alpha2.Status: v1alpha2.StatusBound,
},
Fuzzy: nil,
},
OrderBy: "name",
Reverse: false,
Expected: []interface{}{pvc3},
ExpectedErr: nil,
},
}
searcher, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
got, err := searcher.Search(test.Namespace, test.Conditions, test.OrderBy, test.Reverse)
if test.ExpectedErr != nil && err != test.ExpectedErr {
t.Errorf("expected error, got nothing")
} else if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(got, test.Expected); diff != "" {
t.Errorf("%T differ (-got, +want): %s", test.Expected, diff)
}
}
}

View File

@@ -0,0 +1,217 @@
package federatedpersistentvolumeclaim
import (
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/google/go-cmp/cmp"
fedv1beta1 "kubesphere.io/api/types/v1beta1"
"kubesphere.io/kubesphere/pkg/api"
"kubesphere.io/kubesphere/pkg/apiserver/query"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned/fake"
"kubesphere.io/kubesphere/pkg/client/informers/externalversions"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha3"
)
var (
testStorageClassName = "sc1"
)
var (
pvc1 = &fedv1beta1.FederatedPersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-1",
Namespace: "default",
},
}
pvc2 = &fedv1beta1.FederatedPersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-2",
Namespace: "default",
},
Spec: fedv1beta1.FederatedPersistentVolumeClaimSpec{
Template: fedv1beta1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
},
},
}
pvc3 = &fedv1beta1.FederatedPersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-3",
Namespace: "default",
Labels: map[string]string{
"kubesphere.io/in-use": "false",
},
},
Spec: fedv1beta1.FederatedPersistentVolumeClaimSpec{
Template: fedv1beta1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
},
},
}
federatedPersistentVolumeClaims = []*fedv1beta1.FederatedPersistentVolumeClaim{pvc1, pvc2, pvc3}
)
func fedPVCsToInterface(federatedPersistentVolumeClaims ...*fedv1beta1.FederatedPersistentVolumeClaim) []interface{} {
items := make([]interface{}, 0)
for _, fedPVC := range federatedPersistentVolumeClaims {
items = append(items, fedPVC)
}
return items
}
func fedPVCsToRuntimeObject(federatedPersistentVolumeClaims ...*fedv1beta1.FederatedPersistentVolumeClaim) []runtime.Object {
items := make([]runtime.Object, 0)
for _, fedPVC := range federatedPersistentVolumeClaims {
items = append(items, fedPVC)
}
return items
}
func prepare() (v1alpha3.Interface, error) {
client := fake.NewSimpleClientset(fedPVCsToRuntimeObject(federatedPersistentVolumeClaims...)...)
informer := externalversions.NewSharedInformerFactory(client, 0)
for _, fedPVC := range federatedPersistentVolumeClaims {
err := informer.Types().V1beta1().FederatedPersistentVolumeClaims().Informer().GetIndexer().Add(fedPVC)
if err != nil {
return nil, err
}
}
return New(informer), nil
}
func TestGet(t *testing.T) {
tests := []struct {
namespace string
name string
expected runtime.Object
expectedErr error
}{
{
namespace: "default",
name: "pvc-1",
expected: fedPVCsToRuntimeObject(pvc1)[0],
expectedErr: nil,
},
}
getter, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
pvc, err := getter.Get(test.namespace, test.name)
if test.expectedErr != nil && err != test.expectedErr {
t.Errorf("expected error, got nothing")
} else if err != nil {
t.Fatal(err)
}
diff := cmp.Diff(pvc, test.expected)
if diff != "" {
t.Errorf("%T differ (-got, +want): %s", test.expected, diff)
}
}
}
func TestList(t *testing.T) {
tests := []struct {
description string
namespace string
query *query.Query
expected *api.ListResult
expectedErr error
}{
{
description: "test name filter",
namespace: "default",
query: &query.Query{
Pagination: &query.Pagination{
Limit: 10,
Offset: 0,
},
SortBy: query.FieldName,
Ascending: false,
Filters: map[query.Field]query.Value{query.FieldName: query.Value(pvc1.Name)},
},
expected: &api.ListResult{
Items: fedPVCsToInterface(federatedPersistentVolumeClaims[0]),
TotalItems: 1,
},
expectedErr: nil,
},
{
description: "test storageClass filter",
namespace: "default",
query: &query.Query{
Pagination: &query.Pagination{
Limit: 10,
Offset: 0,
},
SortBy: query.FieldName,
Ascending: false,
Filters: map[query.Field]query.Value{query.Field(storageClassName): query.Value(*pvc2.Spec.Template.Spec.StorageClassName)},
},
expected: &api.ListResult{
Items: fedPVCsToInterface(federatedPersistentVolumeClaims[2], federatedPersistentVolumeClaims[1]),
TotalItems: 2,
},
expectedErr: nil,
},
{
description: "test label filter",
namespace: "default",
query: &query.Query{
Pagination: &query.Pagination{
Limit: 10,
Offset: 0,
},
SortBy: query.FieldName,
Ascending: false,
LabelSelector: "kubesphere.io/in-use=false",
Filters: map[query.Field]query.Value{query.Field(storageClassName): query.Value(*pvc2.Spec.Template.Spec.StorageClassName)},
},
expected: &api.ListResult{
Items: fedPVCsToInterface(federatedPersistentVolumeClaims[2]),
TotalItems: 1,
},
expectedErr: nil,
},
}
lister, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
got, err := lister.List(test.namespace, test.query)
if test.expectedErr != nil && err != test.expectedErr {
t.Errorf("expected error, got nothing")
} else if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(got, test.expected); diff != "" {
t.Errorf("[%s] %T differ (-got, +want): %s", test.description, test.expected, diff)
}
}
}

View File

@@ -20,6 +20,9 @@ import (
"sort"
"strings"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/klog"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -150,33 +153,13 @@ func DefaultObjectMetaFilter(item metav1.ObjectMeta, filter query.Filter) bool {
}
}
func labelMatch(labels map[string]string, filter string) bool {
fields := strings.SplitN(filter, "=", 2)
var key, value string
var opposite bool
if len(fields) == 2 {
key = fields[0]
if strings.HasSuffix(key, "!") {
key = strings.TrimSuffix(key, "!")
opposite = true
}
value = fields[1]
} else {
key = fields[0]
value = "*"
func labelMatch(m map[string]string, filter string) bool {
labelSelector, err := labels.Parse(filter)
if err != nil {
klog.Warningf("invalid labelSelector %s: %s", filter, err)
return false
}
for k, v := range labels {
if opposite {
if (k == key) && v != value {
return true
}
} else {
if (k == key) && (value == "*" || v == value) {
return true
}
}
}
return false
return labelSelector.Matches(labels.Set(m))
}
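
Delegating to the upstream parser also means labelMatch now understands the full selector grammar (set-based expressions, multiple comma-separated terms), not just the single key=value form the hand-rolled code covered. A quick illustration:

// import "k8s.io/apimachinery/pkg/labels"
sel, err := labels.Parse("env in (dev,staging),tier!=frontend")
if err != nil {
	panic(err)
}
fmt.Println(sel.Matches(labels.Set{"env": "dev", "tier": "backend"}))  // true
fmt.Println(sel.Matches(labels.Set{"env": "prod", "tier": "backend"})) // false
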
func objectsToInterfaces(objs []runtime.Object) []interface{} {

View File

@@ -17,6 +17,9 @@ limitations under the License.
package pod
import (
"fmt"
"strings"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
@@ -31,7 +34,13 @@ const (
fieldNodeName = "nodeName"
fieldPVCName = "pvcName"
fieldServiceName = "serviceName"
fieldPhase = "phase"
fieldStatus = "status"
statusTypeWaitting = "Waiting"
statusTypeRunning = "Running"
statusTypeError = "Error"
statusTypeCompleted = "Completed"
)
type podsGetter struct {
@@ -90,6 +99,9 @@ func (p *podsGetter) filter(object runtime.Object, filter query.Filter) bool {
case fieldServiceName:
return p.podBelongToService(pod, string(filter.Value))
case fieldStatus:
_, statusType := p.getPodStatus(pod)
return statusType == string(filter.Value)
case fieldPhase:
return string(pod.Status.Phase) == string(filter.Value)
default:
return v1alpha3.DefaultObjectMetaFilter(pod.ObjectMeta, filter)
@@ -117,3 +129,133 @@ func (p *podsGetter) podBelongToService(item *corev1.Pod, serviceName string) bo
}
return true
}
// getPodStatus refer to `kubectl get po` result.
// https://github.com/kubernetes/kubernetes/blob/45279654db87f4908911569c07afc42804f0e246/pkg/printers/internalversion/printers.go#L820-920
// podStatusPhase = []string{"Pending", "Running", "Succeeded", "Failed", "Unknown"}
// podStatusReasons = []string{"Evicted", "NodeAffinity", "NodeLost", "Shutdown", "UnexpectedAdmissionError"}
// containerWaitingReasons = []string{"ContainerCreating", "CrashLoopBackOff", "CreateContainerConfigError", "ErrImagePull", "ImagePullBackOff", "CreateContainerError", "InvalidImageName"}
// containerTerminatedReasons = []string{"OOMKilled", "Completed", "Error", "ContainerCannotRun", "DeadlineExceeded", "Evicted"}
func (p *podsGetter) getPodStatus(pod *corev1.Pod) (string, string) {
reason := string(pod.Status.Phase)
if pod.Status.Reason != "" {
reason = pod.Status.Reason
}
/*
todo: upgrade k8s.io/api version
// If the Pod carries {type:PodScheduled, reason:WaitingForGates}, set reason to 'SchedulingGated'.
for _, condition := range pod.Status.Conditions {
if condition.Type == corev1.PodScheduled && condition.Reason == corev1.PodReasonSchedulingGated {
reason = corev1.PodReasonSchedulingGated
}
}
*/
initializing := false
for i := range pod.Status.InitContainerStatuses {
container := pod.Status.InitContainerStatuses[i]
switch {
case container.State.Terminated != nil && container.State.Terminated.ExitCode == 0:
continue
case container.State.Terminated != nil:
// initialization is failed
if len(container.State.Terminated.Reason) == 0 {
if container.State.Terminated.Signal != 0 {
reason = fmt.Sprintf("Init:Signal:%d", container.State.Terminated.Signal)
} else {
reason = fmt.Sprintf("Init:ExitCode:%d", container.State.Terminated.ExitCode)
}
} else {
reason = "Init:" + container.State.Terminated.Reason
}
initializing = true
case container.State.Waiting != nil && len(container.State.Waiting.Reason) > 0 && container.State.Waiting.Reason != "PodInitializing":
reason = "Init:" + container.State.Waiting.Reason
initializing = true
default:
reason = fmt.Sprintf("Init:%d/%d", i, len(pod.Spec.InitContainers))
initializing = true
}
break
}
if !initializing {
hasRunning := false
for i := len(pod.Status.ContainerStatuses) - 1; i >= 0; i-- {
container := pod.Status.ContainerStatuses[i]
if container.State.Waiting != nil && container.State.Waiting.Reason != "" {
reason = container.State.Waiting.Reason
} else if container.State.Terminated != nil && container.State.Terminated.Reason != "" {
reason = container.State.Terminated.Reason
} else if container.State.Terminated != nil && container.State.Terminated.Reason == "" {
if container.State.Terminated.Signal != 0 {
reason = fmt.Sprintf("Signal:%d", container.State.Terminated.Signal)
} else {
reason = fmt.Sprintf("ExitCode:%d", container.State.Terminated.ExitCode)
}
} else if container.Ready && container.State.Running != nil {
hasRunning = true
}
}
// change pod status back to "Running" if there is at least one container still reporting as "Running" status
if reason == "Completed" && hasRunning {
if hasPodReadyCondition(pod.Status.Conditions) {
reason = "Running"
} else {
reason = "NotReady"
}
}
}
if pod.DeletionTimestamp != nil && pod.Status.Reason == "NodeLost" {
reason = "Unknown"
} else if pod.DeletionTimestamp != nil {
reason = "Terminating"
}
statusType := statusTypeWaitting
switch reason {
case "Running":
statusType = statusTypeRunning
case "Failed":
statusType = statusTypeError
case "Error":
statusType = statusTypeError
case "Completed":
statusType = statusTypeCompleted
case "Succeeded":
if isPodReadyConditionReason(pod.Status.Conditions, "PodCompleted") {
statusType = statusTypeCompleted
}
default:
if strings.HasPrefix(reason, "OutOf") {
statusType = statusTypeError
}
}
return reason, statusType
}
func hasPodReadyCondition(conditions []corev1.PodCondition) bool {
for _, condition := range conditions {
if condition.Type == corev1.PodReady && condition.Status == corev1.ConditionTrue {
return true
}
}
return false
}
func isPodReadyConditionReason(conditions []corev1.PodCondition, reason string) bool {
for _, condition := range conditions {
if condition.Type == corev1.PodReady && condition.Reason != reason {
return false
}
}
return true
}
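
Together with the fieldStatus filter above, this lets callers filter by the kubectl-style display status rather than the raw phase. As one worked case: a pod whose only container is stuck in CrashLoopBackOff still has phase Running, but the rules above derive reason "CrashLoopBackOff", which falls through to the default branch and is reported as status "Waiting":

pod := &corev1.Pod{
	Status: corev1.PodStatus{
		Phase: corev1.PodRunning, // the raw phase still says Running
		ContainerStatuses: []corev1.ContainerStatus{{
			State: corev1.ContainerState{
				Waiting: &corev1.ContainerStateWaiting{Reason: "CrashLoopBackOff"},
			},
		}},
	},
}
// getPodStatus(pod) yields ("CrashLoopBackOff", "Waiting"):
// a fieldPhase=Running filter matches this pod, a fieldStatus=Running filter does not.
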

View File

@@ -78,7 +78,7 @@ func TestListPods(t *testing.T) {
nil,
},
{
"test status filter",
"test phase filter",
"default",
&query.Query{
Pagination: &query.Pagination{
@@ -89,7 +89,7 @@ func TestListPods(t *testing.T) {
Ascending: false,
Filters: map[query.Field]query.Value{
query.FieldNamespace: query.Value("default"),
fieldStatus: query.Value(corev1.PodRunning),
fieldPhase: query.Value(corev1.PodRunning),
},
},
&api.ListResult{
@@ -163,6 +163,7 @@ var (
Phase: corev1.PodRunning,
},
}
pods = []interface{}{foo1, foo2, foo3, foo4, foo5}
)

View File

@@ -24,12 +24,15 @@ import (
"strings"
"time"
"github.com/mitchellh/mapstructure"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apiserver/pkg/authentication/user"
"k8s.io/client-go/kubernetes"
"k8s.io/klog"
@@ -40,6 +43,8 @@ import (
tenantv1alpha2 "kubesphere.io/api/tenant/v1alpha2"
typesv1beta1 "kubesphere.io/api/types/v1beta1"
iamv1alpha2 "kubesphere.io/api/iam/v1alpha2"
"kubesphere.io/kubesphere/pkg/api"
auditingv1alpha1 "kubesphere.io/kubesphere/pkg/api/auditing/v1alpha1"
eventsv1alpha1 "kubesphere.io/kubesphere/pkg/api/events/v1alpha1"
@@ -53,6 +58,7 @@ import (
"kubesphere.io/kubesphere/pkg/models/auditing"
"kubesphere.io/kubesphere/pkg/models/events"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/logging"
"kubesphere.io/kubesphere/pkg/models/metering"
"kubesphere.io/kubesphere/pkg/models/monitoring"
@@ -65,6 +71,8 @@ import (
loggingclient "kubesphere.io/kubesphere/pkg/simple/client/logging"
meteringclient "kubesphere.io/kubesphere/pkg/simple/client/metering"
monitoringclient "kubesphere.io/kubesphere/pkg/simple/client/monitoring"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
jsonpatchutil "kubesphere.io/kubesphere/pkg/utils/josnpatchutil"
"kubesphere.io/kubesphere/pkg/utils/stringutils"
)
@@ -72,11 +80,12 @@ const orphanFinalizer = "orphan.finalizers.kubesphere.io"
type Interface interface {
ListWorkspaces(user user.Info, queryParam *query.Query) (*api.ListResult, error)
GetWorkspace(workspace string) (*tenantv1alpha1.Workspace, error)
ListWorkspaceTemplates(user user.Info, query *query.Query) (*api.ListResult, error)
CreateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
CreateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
DeleteWorkspaceTemplate(workspace string, opts metav1.DeleteOptions) error
UpdateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
PatchWorkspaceTemplate(workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error)
UpdateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
PatchWorkspaceTemplate(user user.Info, workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error)
DescribeWorkspaceTemplate(workspace string) (*tenantv1alpha2.WorkspaceTemplate, error)
ListNamespaces(user user.Info, workspace string, query *query.Query) (*api.ListResult, error)
ListDevOpsProjects(user user.Info, workspace string, query *query.Query) (*api.ListResult, error)
@@ -91,7 +100,7 @@ type Interface interface {
DeleteNamespace(workspace, namespace string) error
UpdateNamespace(workspace string, namespace *corev1.Namespace) (*corev1.Namespace, error)
PatchNamespace(workspace string, namespace *corev1.Namespace) (*corev1.Namespace, error)
ListClusters(info user.Info) (*api.ListResult, error)
ListClusters(info user.Info, queryParam *query.Query) (*api.ListResult, error)
Metering(user user.Info, queryParam *meteringv1alpha1.Query, priceInfo meteringclient.PriceInfo) (monitoring.Metrics, error)
MeteringHierarchy(user user.Info, queryParam *meteringv1alpha1.Query, priceInfo meteringclient.PriceInfo) (metering.ResourceStatistic, error)
CreateWorkspaceResourceQuota(workspace string, resourceQuota *quotav1alpha2.ResourceQuota) (*quotav1alpha2.ResourceQuota, error)
@@ -102,6 +111,7 @@ type Interface interface {
type tenantOperator struct {
am am.AccessManagementInterface
im im.IdentityManagementInterface
authorizer authorizer.Authorizer
k8sclient kubernetes.Interface
ksclient kubesphere.Interface
@@ -111,16 +121,13 @@ type tenantOperator struct {
auditing auditing.Interface
mo monitoring.MonitoringOperator
opRelease openpitrix.ReleaseInterface
clusterClient clusterclient.ClusterClients
}
func New(informers informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface, evtsClient eventsclient.Client, loggingClient loggingclient.Client, auditingclient auditingclient.Client, am am.AccessManagementInterface, authorizer authorizer.Authorizer, monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, stopCh <-chan struct{}) Interface {
var openpitrixRelease openpitrix.ReleaseInterface
if ksclient != nil {
openpitrixRelease = openpitrix.NewOpenpitrixOperator(informers, ksclient, nil, stopCh)
}
func New(informers informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface, evtsClient eventsclient.Client, loggingClient loggingclient.Client, auditingclient auditingclient.Client, am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer, monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, opClient openpitrix.Interface) Interface {
return &tenantOperator{
am: am,
im: im,
authorizer: authorizer,
resourceGetter: resourcesv1alpha3.NewResourceGetter(informers, nil),
k8sclient: k8sclient,
@@ -129,7 +136,8 @@ func New(informers informers.InformerFactory, k8sclient kubernetes.Interface, ks
lo: logging.NewLoggingOperator(loggingClient),
auditing: auditing.NewEventsOperator(auditingclient),
mo: monitoring.NewMonitoringOperator(monitoringclient, nil, k8sclient, informers, resourceGetter, nil),
opRelease: openpitrixRelease,
opRelease: opClient,
clusterClient: clusterclient.NewClusterClient(informers.KubeSphereSharedInformerFactory().Cluster().V1alpha1().Clusters()),
}
}
@@ -196,6 +204,15 @@ func (t *tenantOperator) ListWorkspaces(user user.Info, queryParam *query.Query)
return result, nil
}
func (t *tenantOperator) GetWorkspace(workspace string) (*tenantv1alpha1.Workspace, error) {
obj, err := t.resourceGetter.Get(tenantv1alpha1.ResourcePluralWorkspace, "", workspace)
if err != nil {
klog.Error(err)
return nil, err
}
return obj.(*tenantv1alpha1.Workspace), nil
}
func (t *tenantOperator) ListWorkspaceTemplates(user user.Info, queryParam *query.Query) (*api.ListResult, error) {
listWS := authorizer.AttributesRecord{
@@ -459,15 +476,111 @@ func (t *tenantOperator) PatchNamespace(workspace string, namespace *corev1.Name
return t.k8sclient.CoreV1().Namespaces().Patch(context.Background(), namespace.Name, types.MergePatchType, data, metav1.PatchOptions{})
}
func (t *tenantOperator) PatchWorkspaceTemplate(workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error) {
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Patch(context.Background(), workspace, types.MergePatchType, data, metav1.PatchOptions{})
func (t *tenantOperator) PatchWorkspaceTemplate(user user.Info, workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error) {
var manageWorkspaceTemplateRequest bool
clusterNames := sets.NewString()
patchs, err := jsonpatchutil.Parse(data)
if err != nil {
klog.Error(err)
return nil, err
}
if len(patchs) > 0 {
for _, patch := range patchs {
path, err := patch.Path()
if err != nil {
klog.Error(err)
return nil, err
}
// If the request path is under /spec/placement, just collect the cluster names into the set and check cluster permissions later.
// Otherwise the request modifies the workspace template itself, so check whether the user has permission to manage workspace templates.
if strings.HasPrefix(path, "/spec/placement") {
if patch.Kind() != "add" && patch.Kind() != "remove" {
err := errors.NewBadRequest("unsupported operation type")
klog.Error(err)
return nil, err
}
clusterValue := make(map[string]interface{})
err := jsonpatchutil.GetValue(patch, &clusterValue)
if err != nil {
klog.Error(err)
return nil, err
}
// If the placement is empty, the first patch needs to fill in the "clusters" field.
if cName := clusterValue["name"]; cName != nil {
cn, ok := cName.(string)
if ok {
clusterNames.Insert(cn)
}
} else if cluster := clusterValue["clusters"]; cluster != nil {
clusterRefrences := []typesv1beta1.GenericClusterReference{}
err := mapstructure.Decode(cluster, &clusterRefrences)
if err != nil {
klog.Error(err)
return nil, err
}
for _, v := range clusterRefrences {
clusterNames.Insert(v.Name)
}
}
} else {
manageWorkspaceTemplateRequest = true
}
}
}
if manageWorkspaceTemplateRequest {
err := t.checkWorkspaceTemplatePermission(user, workspace)
if err != nil {
klog.Error(err)
return nil, err
}
}
if clusterNames.Len() > 0 {
err := t.checkClusterPermission(user, clusterNames.List())
if err != nil {
klog.Error(err)
return nil, err
}
}
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Patch(context.Background(), workspace, types.JSONPatchType, data, metav1.PatchOptions{})
}
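
For illustration, a JSON-patch body like the following (cluster names hypothetical) only touches /spec/placement, so it is routed through the cluster-permission check and never requires workspace-template management rights; the first operation fills the empty placement with a "clusters" list, later ones add single "name" entries:

[
  {"op": "add", "path": "/spec/placement", "value": {"clusters": [{"name": "cluster-a"}]}},
  {"op": "add", "path": "/spec/placement/clusters/-", "value": {"name": "cluster-b"}}
]
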
func (t *tenantOperator) CreateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
func (t *tenantOperator) CreateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
if len(workspace.Spec.Placement.Clusters) != 0 {
clusters := make([]string, 0)
for _, v := range workspace.Spec.Placement.Clusters {
clusters = append(clusters, v.Name)
}
err := t.checkClusterPermission(user, clusters)
if err != nil {
klog.Error(err)
return nil, err
}
}
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Create(context.Background(), workspace, metav1.CreateOptions{})
}
func (t *tenantOperator) UpdateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
func (t *tenantOperator) UpdateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
if len(workspace.Spec.Placement.Clusters) != 0 {
clusters := make([]string, 0)
for _, v := range workspace.Spec.Placement.Clusters {
clusters = append(clusters, v.Name)
}
err := t.checkClusterPermission(user, clusters)
if err != nil {
klog.Error(err)
return nil, err
}
}
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Update(context.Background(), workspace, metav1.UpdateOptions{})
}
@@ -493,7 +606,7 @@ func (t *tenantOperator) ListWorkspaceClusters(workspaceName string) (*api.ListR
for _, cluster := range workspace.Spec.Placement.Clusters {
obj, err := t.resourceGetter.Get(clusterv1alpha1.ResourcesPluralCluster, "", cluster.Name)
if err != nil {
klog.Error(err)
klog.Warning(err)
if errors.IsNotFound(err) {
continue
}
@@ -522,89 +635,69 @@ func (t *tenantOperator) ListWorkspaceClusters(workspaceName string) (*api.ListR
return &api.ListResult{Items: []interface{}{}, TotalItems: 0}, nil
}
func (t *tenantOperator) ListClusters(user user.Info) (*api.ListResult, error) {
func (t *tenantOperator) ListClusters(user user.Info, queryParam *query.Query) (*api.ListResult, error) {
listClustersInGlobalScope := authorizer.AttributesRecord{
User: user,
Verb: "list",
APIGroup: "cluster.kubesphere.io",
Resource: "clusters",
ResourceScope: request.GlobalScope,
ResourceRequest: true,
}
allowedListClustersInGlobalScope, _, err := t.authorizer.Authorize(listClustersInGlobalScope)
if err != nil {
klog.Error(err)
return nil, err
return nil, fmt.Errorf("failed to authorize: %s", err)
}
listWorkspacesInGlobalScope := authorizer.AttributesRecord{
User: user,
Verb: "list",
Resource: "workspaces",
ResourceScope: request.GlobalScope,
ResourceRequest: true,
if allowedListClustersInGlobalScope == authorizer.DecisionAllow {
return t.resourceGetter.List(clusterv1alpha1.ResourcesPluralCluster, "", queryParam)
}
allowedListWorkspacesInGlobalScope, _, err := t.authorizer.Authorize(listWorkspacesInGlobalScope)
userDetail, err := t.im.DescribeUser(user.GetName())
if err != nil {
klog.Error(err)
return nil, err
return nil, fmt.Errorf("failed to describe user: %s", err)
}
if allowedListClustersInGlobalScope == authorizer.DecisionAllow ||
allowedListWorkspacesInGlobalScope == authorizer.DecisionAllow {
result, err := t.resourceGetter.List(clusterv1alpha1.ResourcesPluralCluster, "", query.New())
grantedClustersAnnotation := userDetail.Annotations[iamv1alpha2.GrantedClustersAnnotation]
var grantedClusters sets.String
if len(grantedClustersAnnotation) > 0 {
grantedClusters = sets.NewString(strings.Split(grantedClustersAnnotation, ",")...)
} else {
grantedClusters = sets.NewString()
}
var clusters []*clusterv1alpha1.Cluster
for _, grantedCluster := range grantedClusters.List() {
obj, err := t.resourceGetter.Get(clusterv1alpha1.ResourcesPluralCluster, "", grantedCluster)
if err != nil {
klog.Error(err)
return nil, err
}
return result, nil
}
workspaceRoleBindings, err := t.am.ListWorkspaceRoleBindings(user.GetName(), user.GetGroups(), "")
if err != nil {
klog.Error(err)
return nil, err
}
clusters := map[string]*clusterv1alpha1.Cluster{}
for _, roleBinding := range workspaceRoleBindings {
workspaceName := roleBinding.Labels[tenantv1alpha1.WorkspaceLabel]
workspace, err := t.DescribeWorkspaceTemplate(workspaceName)
if err != nil {
klog.Error(err)
return nil, err
}
for _, grantedCluster := range workspace.Spec.Placement.Clusters {
// skip if cluster exist
if clusters[grantedCluster.Name] != nil {
if errors.IsNotFound(err) {
continue
}
obj, err := t.resourceGetter.Get(clusterv1alpha1.ResourcesPluralCluster, "", grantedCluster.Name)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
continue
}
return nil, err
}
cluster := obj.(*clusterv1alpha1.Cluster)
clusters[cluster.Name] = cluster
return nil, fmt.Errorf("failed to fetch cluster: %s", err)
}
cluster := obj.(*clusterv1alpha1.Cluster)
clusters = append(clusters, cluster)
}
items := make([]interface{}, 0)
items := make([]runtime.Object, 0)
for _, cluster := range clusters {
items = append(items, cluster)
}
return &api.ListResult{Items: items, TotalItems: len(items)}, nil
// apply additional labelSelector
if queryParam.LabelSelector != "" {
queryParam.Filters[query.FieldLabel] = query.Value(queryParam.LabelSelector)
}
// use default pagination search logic
result := resources.DefaultList(items, queryParam, func(left runtime.Object, right runtime.Object, field query.Field) bool {
return resources.DefaultObjectMetaCompare(left.(*clusterv1alpha1.Cluster).ObjectMeta, right.(*clusterv1alpha1.Cluster).ObjectMeta, field)
}, func(workspace runtime.Object, filter query.Filter) bool {
return resources.DefaultObjectMetaFilter(workspace.(*clusterv1alpha1.Cluster).ObjectMeta, filter)
})
return result, nil
}
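
For users without the global list-clusters permission, the result is now driven by the granted-clusters annotation on the User object, a comma-separated list of cluster names. A hedged sketch of the expansion; the annotation key shown is an assumption based on iamv1alpha2.GrantedClustersAnnotation:

// import "strings" and "k8s.io/apimachinery/pkg/util/sets"
annotations := map[string]string{
	"iam.kubesphere.io/granted-clusters": "cluster-a,cluster-b", // hypothetical user annotation
}
granted := sets.NewString()
if v := annotations["iam.kubesphere.io/granted-clusters"]; v != "" {
	granted = sets.NewString(strings.Split(v, ",")...)
}
// granted.List() == [cluster-a cluster-b]; each entry is then fetched
// individually, and NotFound clusters are skipped rather than failing the list.
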
func (t *tenantOperator) DeleteWorkspaceTemplate(workspace string, opts metav1.DeleteOptions) error {
@@ -1090,6 +1183,16 @@ func (t *tenantOperator) MeteringHierarchy(user user.Info, queryParam *meteringv
return resourceStats, nil
}
func (t *tenantOperator) getClusterRoleBindingsByUser(clusterName, user string) (*rbacv1.ClusterRoleBindingList, error) {
kubernetesClientSet, err := t.clusterClient.GetKubernetesClientSet(clusterName)
if err != nil {
return nil, err
}
return kubernetesClientSet.RbacV1().ClusterRoleBindings().
List(context.Background(),
metav1.ListOptions{LabelSelector: labels.FormatLabels(map[string]string{"iam.kubesphere.io/user-ref": user})})
}
func contains(objects []runtime.Object, object runtime.Object) bool {
for _, item := range objects {
if item == object {
@@ -1115,3 +1218,78 @@ func stringContains(str string, subStrs []string) bool {
}
return false
}
func (t *tenantOperator) checkWorkspaceTemplatePermission(user user.Info, workspace string) error {
deleteWST := authorizer.AttributesRecord{
User: user,
Verb: authorizer.VerbDelete,
APIGroup: tenantv1alpha2.SchemeGroupVersion.Group,
APIVersion: tenantv1alpha2.SchemeGroupVersion.Version,
Resource: tenantv1alpha2.ResourcePluralWorkspaceTemplate,
ResourceRequest: true,
ResourceScope: request.GlobalScope,
}
authorize, reason, err := t.authorizer.Authorize(deleteWST)
if err != nil {
return err
}
if authorize != authorizer.DecisionAllow {
return errors.NewForbidden(tenantv1alpha2.Resource(tenantv1alpha2.ResourcePluralWorkspaceTemplate), workspace, fmt.Errorf(reason))
}
return nil
}
func (t *tenantOperator) checkClusterPermission(user user.Info, clusters []string) error {
// Checking whether the user can manage the cluster requires authorization from two aspects:
// first check whether the user has the relevant global permissions,
// and then check whether the user has the relevant cluster permissions in the target cluster.
for _, clusterName := range clusters {
cluster, err := t.ksclient.ClusterV1alpha1().Clusters().Get(context.Background(), clusterName, metav1.GetOptions{})
if err != nil {
return err
}
if cluster.Labels["cluster.kubesphere.io/visibility"] == "public" {
continue
}
deleteCluster := authorizer.AttributesRecord{
User: user,
Verb: authorizer.VerbDelete,
APIGroup: clusterv1alpha1.SchemeGroupVersion.Group,
APIVersion: clusterv1alpha1.SchemeGroupVersion.Version,
Resource: clusterv1alpha1.ResourcesPluralCluster,
Cluster: clusterName,
ResourceRequest: true,
ResourceScope: request.GlobalScope,
}
authorize, _, err := t.authorizer.Authorize(deleteCluster)
if err != nil {
return err
}
if authorize == authorizer.DecisionAllow {
continue
}
list, err := t.getClusterRoleBindingsByUser(clusterName, user.GetName())
if err != nil {
return err
}
allowed := false
for _, clusterRolebinding := range list.Items {
if clusterRolebinding.RoleRef.Name == iamv1alpha2.ClusterAdmin {
allowed = true
break
}
}
if !allowed {
return errors.NewForbidden(clusterv1alpha1.Resource(clusterv1alpha1.ResourcesPluralCluster), clusterName, fmt.Errorf("user is not allowed to use the cluster %s", clusterName))
}
}
return nil
}

View File

@@ -544,5 +544,5 @@ func prepare() Interface {
amOperator := am.NewOperator(ksClient, k8sClient, fakeInformerFactory, nil)
authorizer := rbac.NewRBACAuthorizer(amOperator)
return New(fakeInformerFactory, k8sClient, ksClient, nil, nil, nil, amOperator, authorizer, nil, nil, nil)
return New(fakeInformerFactory, k8sClient, ksClient, nil, nil, nil, amOperator, nil, authorizer, nil, nil, nil)
}

View File

@@ -44,6 +44,8 @@ import (
const (
// Time allowed to write a message to the peer.
writeWait = 10 * time.Second
// ctrl+d (end of transmission), used to close the terminal.
endOfTransmission = "\u0004"
)
// PtyHandler is what remotecommand expects from a pty
@@ -76,11 +78,14 @@ type TerminalMessage struct {
Rows, Cols uint16
}
// TerminalSize handles pty->process resize events
// Next handles pty->process resize events
// Called in a loop from remotecommand as long as the process is running
func (t TerminalSession) Next() *remotecommand.TerminalSize {
select {
case size := <-t.sizeChan:
if size.Height == 0 && size.Width == 0 {
return nil
}
return &size
}
}
@@ -92,7 +97,7 @@ func (t TerminalSession) Read(p []byte) (int, error) {
var msg TerminalMessage
err := t.conn.ReadJSON(&msg)
if err != nil {
return 0, err
return copy(p, endOfTransmission), err
}
switch msg.Op {
@@ -102,7 +107,7 @@ func (t TerminalSession) Read(p []byte) (int, error) {
t.sizeChan <- remotecommand.TerminalSize{Width: msg.Cols, Height: msg.Rows}
return 0, nil
default:
return 0, fmt.Errorf("unknown message type '%s'", msg.Op)
return copy(p, endOfTransmission), fmt.Errorf("unknown message type '%s'", msg.Op)
}
}
@@ -145,6 +150,7 @@ func (t TerminalSession) Toast(p string) error {
// For now the status code is unused and the reason is shown to the user (unless empty)
func (t TerminalSession) Close(status uint32, reason string) {
klog.Warning(status, reason)
close(t.sizeChan)
t.conn.Close()
}
@@ -211,7 +217,7 @@ func (n *NodeTerminaler) getNSEnterPod() (*v1.Pod, error) {
pod, err := n.client.CoreV1().Pods(n.Namespace).Get(context.Background(), n.PodName, metav1.GetOptions{})
if err != nil || (pod.Status.Phase != v1.PodRunning && pod.Status.Phase != v1.PodPending) {
//pod has timed out, but has not been cleaned up
// pod has timed out, but has not been cleaned up
if pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed {
err := n.client.CoreV1().Pods(n.Namespace).Delete(context.Background(), n.PodName, metav1.DeleteOptions{})
if err != nil {
@@ -324,7 +330,7 @@ func isValidShell(validShells []string, shell string) bool {
func (t *terminaler) HandleSession(shell, namespace, podName, containerName string, conn *websocket.Conn) {
var err error
validShells := []string{"sh", "bash"}
validShells := []string{"bash", "sh"}
session := &TerminalSession{conn: conn, sizeChan: make(chan remotecommand.TerminalSize)}
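The close(t.sizeChan) added in Close above is what lets Next return: a receive from a closed channel yields the zero value, so the Height == 0 && Width == 0 guard fires and remotecommand stops its resize loop. A standalone sketch of that channel behavior (illustrative, not project code):

package main

import "fmt"

type terminalSize struct{ Width, Height uint16 }

func main() {
	sizeChan := make(chan terminalSize)
	close(sizeChan)
	// A receive from a closed channel never blocks; it yields the zero value.
	size := <-sizeChan
	if size.Height == 0 && size.Width == 0 {
		fmt.Println("channel closed, resize loop ends") // mirrors Next returning nil
	}
}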

View File

@@ -16,7 +16,17 @@ limitations under the License.
package cache
import "time"
import (
"encoding/json"
"fmt"
"time"
"k8s.io/klog"
)
var (
cacheFactories = make(map[string]CacheFactory)
)
var NeverExpire = time.Duration(0)
@@ -39,3 +49,32 @@ type Interface interface {
// Expire updates the object's expiration time; returns an error if the key doesn't exist
Expire(key string, duration time.Duration) error
}
// DynamicOptions holds backend-specific cache options. For redis, valid keys are
// "host", "port", "db" and "password"; for InMemoryCache, the valid key is "cleanupperiod".
type DynamicOptions map[string]interface{}
func (o DynamicOptions) MarshalJSON() ([]byte, error) {
// Marshal via the underlying map type; calling json.Marshal on
// DynamicOptions directly would recurse into this method forever.
return json.Marshal(map[string]interface{}(o))
}
func RegisterCacheFactory(factory CacheFactory) {
cacheFactories[factory.Type()] = factory
}
func New(option *Options, stopCh <-chan struct{}) (Interface, error) {
if cacheFactories[option.Type] == nil {
err := fmt.Errorf("cache with type %s is not supported", option.Type)
klog.Error(err)
return nil, err
}
cache, err := cacheFactories[option.Type].Create(option.Options, stopCh)
if err != nil {
klog.Errorf("failed to create cache, error: %v", err)
return nil, err
}
return cache, nil
}
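With this registry in place, callers select a backend by name instead of constructing concrete clients. A minimal usage sketch (the key and value are placeholders; both factories below register themselves in init):

package main

import "kubesphere.io/kubesphere/pkg/simple/client/cache"

func main() {
	stopCh := make(chan struct{})
	defer close(stopCh)

	c, err := cache.New(&cache.Options{
		Type:    "InMemoryCache",
		Options: cache.DynamicOptions{"cleanupperiod": "2h"},
	}, stopCh)
	if err != nil {
		panic(err) // an unregistered Type or bad options surface here
	}
	_ = c.Set("session:abc", "token", cache.NeverExpire)
}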

pkg/simple/client/cache/factory.go (new file)
View File

@@ -0,0 +1,8 @@
package cache
type CacheFactory interface {
// Type returns the unique type identifier of this cache backend
Type() string
// Create builds a cache of this type from the given options
Create(options DynamicOptions, stopCh <-chan struct{}) (Interface, error)
}
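A third backend only needs these two methods plus a cache.Interface implementation. The no-op backend below is invented purely to show the contract:

package cache

import "time"

// noopCache is a hypothetical do-nothing backend used to illustrate the factory contract.
type noopCache struct{}

func (n *noopCache) Keys(pattern string) ([]string, error)               { return nil, nil }
func (n *noopCache) Get(key string) (string, error)                      { return "", ErrNoSuchKey }
func (n *noopCache) Set(key, value string, duration time.Duration) error { return nil }
func (n *noopCache) Del(keys ...string) error                            { return nil }
func (n *noopCache) Exists(keys ...string) (bool, error)                 { return false, nil }
func (n *noopCache) Expire(key string, duration time.Duration) error     { return nil }

type noopFactory struct{}

func (f *noopFactory) Type() string { return "noop" }

func (f *noopFactory) Create(options DynamicOptions, stopCh <-chan struct{}) (Interface, error) {
	return &noopCache{}, nil
}

func init() { RegisterCacheFactory(&noopFactory{}) }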

View File

@@ -0,0 +1,200 @@
/*
Copyright 2019 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache
import (
"regexp"
"strings"
"time"
"github.com/mitchellh/mapstructure"
"k8s.io/apimachinery/pkg/util/wait"
"kubesphere.io/kubesphere/pkg/server/errors"
)
var ErrNoSuchKey = errors.New("no such key")
const (
typeInMemoryCache = "InMemoryCache"
DefaultCacheType = typeInMemoryCache
defaultCleanupPeriod = 2 * time.Hour
)
type simpleObject struct {
value string
neverExpire bool
expiredAt time.Time
}
func (so *simpleObject) IsExpired() bool {
if so.neverExpire {
return false
}
return time.Now().After(so.expiredAt)
}
// InMemoryCacheOptions are the options used to create an inMemoryCache.
// CleanupPeriod specifies how often expired entries are purged.
// Note that the in-memory cache cannot be used with a multi-replica
// apiserver, since each replica would hold its own, inconsistent copy of the data.
type InMemoryCacheOptions struct {
CleanupPeriod time.Duration `json:"cleanupPeriod" yaml:"cleanupPeriod" mapstructure:"cleanupperiod"`
}
// inMemoryCache implements cache.Interface using in-memory objects; it should be used only for testing
type inMemoryCache struct {
store map[string]simpleObject
}
func NewInMemoryCache(options *InMemoryCacheOptions, stopCh <-chan struct{}) (Interface, error) {
var cleanupPeriod time.Duration
cache := &inMemoryCache{
store: make(map[string]simpleObject),
}
if options == nil || options.CleanupPeriod == 0 {
cleanupPeriod = defaultCleanupPeriod
} else {
cleanupPeriod = options.CleanupPeriod
}
go wait.Until(cache.cleanInvalidToken, cleanupPeriod, stopCh)
return cache, nil
}
func (s *inMemoryCache) cleanInvalidToken() {
for k, v := range s.store {
if v.IsExpired() {
delete(s.store, k)
}
}
}
func (s *inMemoryCache) Keys(pattern string) ([]string, error) {
// Redis key patterns differ from Go regexps: in redis, "*" matches any
// sequence of characters, so translate it to the Go equivalent ".*".
// The resulting match is unanchored, which suits the "prefix:*" patterns used here.
pattern = strings.Replace(pattern, "*", ".*", -1)
re, err := regexp.Compile(pattern)
if err != nil {
return nil, err
}
var keys []string
for k := range s.store {
if re.MatchString(k) {
keys = append(keys, k)
}
}
return keys, nil
}
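So redis-style lookups keep working against the map store. For example (a sketch within the package; the key names are invented):

c, _ := NewInMemoryCache(nil, nil)
_ = c.Set("session:alice", "t1", NeverExpire)
_ = c.Set("session:bob", "t2", NeverExpire)
keys, _ := c.Keys("session:*") // translated to the regexp "session:.*"
// keys now holds both entries, in unspecified (map-iteration) order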
func (s *inMemoryCache) Set(key string, value string, duration time.Duration) error {
sobject := simpleObject{
value: value,
neverExpire: false,
expiredAt: time.Now().Add(duration),
}
if duration == NeverExpire {
sobject.neverExpire = true
}
s.store[key] = sobject
return nil
}
func (s *inMemoryCache) Del(keys ...string) error {
for _, key := range keys {
delete(s.store, key)
}
return nil
}
func (s *inMemoryCache) Get(key string) (string, error) {
if sobject, ok := s.store[key]; ok {
if sobject.neverExpire || time.Now().Before(sobject.expiredAt) {
return sobject.value, nil
}
}
return "", ErrNoSuchKey
}
func (s *inMemoryCache) Exists(keys ...string) (bool, error) {
for _, key := range keys {
if _, ok := s.store[key]; !ok {
return false, nil
}
}
return true, nil
}
func (s *inMemoryCache) Expire(key string, duration time.Duration) error {
value, err := s.Get(key)
if err != nil {
return err
}
sobject := simpleObject{
value: value,
neverExpire: false,
expiredAt: time.Now().Add(duration),
}
if duration == NeverExpire {
sobject.neverExpire = true
}
s.store[key] = sobject
return nil
}
type inMemoryCacheFactory struct {
}
func (sf *inMemoryCacheFactory) Type() string {
return typeInMemoryCache
}
func (sf *inMemoryCacheFactory) Create(options DynamicOptions, stopCh <-chan struct{}) (Interface, error) {
var sOptions InMemoryCacheOptions
decoder, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
DecodeHook: mapstructure.StringToTimeDurationHookFunc(),
WeaklyTypedInput: true,
Result: &sOptions,
})
if err != nil {
return nil, err
}
if err := decoder.Decode(options); err != nil {
return nil, err
}
return NewInMemoryCache(&sOptions, stopCh)
}
func init() {
RegisterCacheFactory(&inMemoryCacheFactory{})
}
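End to end, the weakly typed decoder plus StringToTimeDurationHookFunc mean a config string such as "30m" decodes straight into CleanupPeriod. A small in-package test sketch (test name invented):

package cache

import "testing"

func TestInMemoryFactoryDecodesDuration(t *testing.T) {
	f := &inMemoryCacheFactory{}
	c, err := f.Create(DynamicOptions{"cleanupperiod": "30m"}, make(chan struct{}))
	if err != nil {
		t.Fatal(err)
	}
	if err := c.Set("k", "v", NeverExpire); err != nil {
		t.Fatal(err)
	}
}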

View File

@@ -102,7 +102,7 @@ func TestDeleteAndExpireCache(t *testing.T) {
}
for _, testCase := range testCases {
cacheClient := NewSimpleCache()
cacheClient, _ := NewInMemoryCache(nil, nil)
t.Run(testCase.description, func(t *testing.T) {
err := load(cacheClient, dataSet)

View File

@@ -18,25 +18,19 @@ package cache
import (
"fmt"
"github.com/spf13/pflag"
)
type Options struct {
Host string `json:"host" yaml:"host"`
Port int `json:"port" yaml:"port"`
Password string `json:"password" yaml:"password"`
DB int `json:"db" yaml:"db"`
Type string `json:"type"`
Options DynamicOptions `json:"options"`
}
// NewRedisOptions returns options points to nowhere,
// NewCacheOptions returns options that point to nowhere,
// because a cache is not required by every component
func NewRedisOptions() *Options {
func NewCacheOptions() *Options {
return &Options{
Host: "",
Port: 0,
Password: "",
DB: 0,
Type: "",
Options: map[string]interface{}{},
}
}
@@ -44,20 +38,9 @@ func NewRedisOptions() *Options {
func (r *Options) Validate() []error {
errors := make([]error, 0)
if r.Port == 0 {
errors = append(errors, fmt.Errorf("invalid service port number"))
if r.Type == "" {
errors = append(errors, fmt.Errorf("invalid cache type"))
}
return errors
}
// AddFlags add option flags to command line flags,
// if redis-host left empty, the following options will be ignored.
func (r *Options) AddFlags(fs *pflag.FlagSet, s *Options) {
fs.StringVar(&r.Host, "redis-host", s.Host, "Redis connection URL. If left blank, means redis is unnecessary, "+
"redis will be disabled.")
fs.IntVar(&r.Port, "redis-port", s.Port, "")
fs.StringVar(&r.Password, "redis-password", s.Password, "")
fs.IntVar(&r.DB, "redis-db", s.DB, "")
}

View File

@@ -17,19 +17,31 @@ limitations under the License.
package cache
import (
"errors"
"fmt"
"time"
"github.com/go-redis/redis"
"github.com/mitchellh/mapstructure"
"k8s.io/klog"
)
type Client struct {
const typeRedis = "redis"
type redisClient struct {
client *redis.Client
}
func NewRedisClient(option *Options, stopCh <-chan struct{}) (Interface, error) {
var r Client
// redisOptions are the options used to create a redis client.
type redisOptions struct {
Host string `json:"host" yaml:"host" mapstructure:"host"`
Port int `json:"port" yaml:"port" mapstructure:"port"`
Password string `json:"password" yaml:"password" mapstructure:"password"`
DB int `json:"db" yaml:"db" mapstructure:"db"`
}
func NewRedisClient(option *redisOptions, stopCh <-chan struct{}) (Interface, error) {
var r redisClient
redisOptions := &redis.Options{
Addr: fmt.Sprintf("%s:%d", option.Host, option.Port),
@@ -61,23 +73,23 @@ func NewRedisClient(option *Options, stopCh <-chan struct{}) (Interface, error)
return &r, nil
}
func (r *Client) Get(key string) (string, error) {
func (r *redisClient) Get(key string) (string, error) {
return r.client.Get(key).Result()
}
func (r *Client) Keys(pattern string) ([]string, error) {
func (r *redisClient) Keys(pattern string) ([]string, error) {
return r.client.Keys(pattern).Result()
}
func (r *Client) Set(key string, value string, duration time.Duration) error {
func (r *redisClient) Set(key string, value string, duration time.Duration) error {
return r.client.Set(key, value, duration).Err()
}
func (r *Client) Del(keys ...string) error {
func (r *redisClient) Del(keys ...string) error {
return r.client.Del(keys...).Err()
}
func (r *Client) Exists(keys ...string) (bool, error) {
func (r *redisClient) Exists(keys ...string) (bool, error) {
existedKeys, err := r.client.Exists(keys...).Result()
if err != nil {
return false, err
@@ -86,6 +98,34 @@ func (r *Client) Exists(keys ...string) (bool, error) {
return len(keys) == int(existedKeys), nil
}
func (r *Client) Expire(key string, duration time.Duration) error {
func (r *redisClient) Expire(key string, duration time.Duration) error {
return r.client.Expire(key, duration).Err()
}
type redisFactory struct{}
func (rf *redisFactory) Type() string {
return typeRedis
}
func (rf *redisFactory) Create(options DynamicOptions, stopCh <-chan struct{}) (Interface, error) {
var rOptions redisOptions
if err := mapstructure.Decode(options, &rOptions); err != nil {
return nil, err
}
if rOptions.Port == 0 {
return nil, errors.New("invalid service port number")
}
if len(rOptions.Host) == 0 {
return nil, errors.New("invalid service host")
}
client, err := NewRedisClient(&rOptions, stopCh)
if err != nil {
return nil, err
}
return client, nil
}
func init() {
RegisterCacheFactory(&redisFactory{})
}
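On the redis side, the options that used to arrive via --redis-* flags now travel through DynamicOptions, and a missing host or port fails fast at Create time. A hedged sketch (the host value is a placeholder):

package main

import "kubesphere.io/kubesphere/pkg/simple/client/cache"

func main() {
	stopCh := make(chan struct{})
	defer close(stopCh)

	// Omitting "port" would fail with "invalid service port number".
	c, err := cache.New(&cache.Options{
		Type: "redis",
		Options: cache.DynamicOptions{
			"host":     "redis.kubesphere-system.svc", // placeholder address
			"port":     6379,
			"password": "",
			"db":       0,
		},
	}, stopCh)
	if err != nil {
		panic(err)
	}
	_ = c
}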

View File

@@ -1,123 +0,0 @@
/*
Copyright 2019 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package cache
import (
"regexp"
"strings"
"time"
"kubesphere.io/kubesphere/pkg/server/errors"
)
var ErrNoSuchKey = errors.New("no such key")
type simpleObject struct {
value string
neverExpire bool
expiredAt time.Time
}
// SimpleCache implements cache.Interface use memory objects, it should be used only for testing
type simpleCache struct {
store map[string]simpleObject
}
func NewSimpleCache() Interface {
return &simpleCache{store: make(map[string]simpleObject)}
}
func (s *simpleCache) Keys(pattern string) ([]string, error) {
// There is a little difference between go regexp and redis key pattern
// In redis, * means any character, while in go . means match everything.
pattern = strings.Replace(pattern, "*", ".", -1)
re, err := regexp.Compile(pattern)
if err != nil {
return nil, err
}
var keys []string
for k := range s.store {
if re.MatchString(k) {
keys = append(keys, k)
}
}
return keys, nil
}
func (s *simpleCache) Set(key string, value string, duration time.Duration) error {
sobject := simpleObject{
value: value,
neverExpire: false,
expiredAt: time.Now().Add(duration),
}
if duration == NeverExpire {
sobject.neverExpire = true
}
s.store[key] = sobject
return nil
}
func (s *simpleCache) Del(keys ...string) error {
for _, key := range keys {
delete(s.store, key)
}
return nil
}
func (s *simpleCache) Get(key string) (string, error) {
if sobject, ok := s.store[key]; ok {
if sobject.neverExpire || time.Now().Before(sobject.expiredAt) {
return sobject.value, nil
}
}
return "", ErrNoSuchKey
}
func (s *simpleCache) Exists(keys ...string) (bool, error) {
for _, key := range keys {
if _, ok := s.store[key]; !ok {
return false, nil
}
}
return true, nil
}
func (s *simpleCache) Expire(key string, duration time.Duration) error {
value, err := s.Get(key)
if err != nil {
return err
}
sobject := simpleObject{
value: value,
neverExpire: false,
expiredAt: time.Now().Add(duration),
}
if duration == NeverExpire {
sobject.neverExpire = true
}
s.store[key] = sobject
return nil
}

View File

@@ -24,7 +24,7 @@ func (s *Options) Validate() []error {
}
func (s *Options) ApplyTo(options *Options) {
if len(s.Kinds) > 0 {
if s != nil && len(s.Kinds) > 0 {
options.Kinds = s.Kinds
}
}
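The added s != nil guard is sound because Go permits calling a method on a nil pointer receiver as long as the body never dereferences it; the receiver is just an argument. A standalone illustration:

package main

import "fmt"

type Options struct{ Kinds []string }

func (s *Options) ApplyTo(dst *Options) {
	if s != nil && len(s.Kinds) > 0 { // safe even when s is a nil *Options
		dst.Kinds = s.Kinds
	}
}

func main() {
	var src *Options // nil pointer
	dst := &Options{Kinds: []string{"default"}}
	src.ApplyTo(dst) // no panic: the guard short-circuits before any dereference
	fmt.Println(dst.Kinds) // [default]
}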

View File

@@ -23,7 +23,6 @@ import (
promresourcesclient "github.com/prometheus-operator/prometheus-operator/pkg/client/versioned"
istioclient "istio.io/client-go/pkg/clientset/versioned"
apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
"k8s.io/client-go/discovery"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
@@ -37,7 +36,6 @@ type Client interface {
Istio() istioclient.Interface
Snapshot() snapshotclient.Interface
ApiExtensions() apiextensionsclient.Interface
Discovery() discovery.DiscoveryInterface
Prometheus() promresourcesclient.Interface
Master() string
Config() *rest.Config
@@ -47,9 +45,6 @@ type kubernetesClient struct {
// kubernetes client interface
k8s kubernetes.Interface
// discovery client
discoveryClient *discovery.DiscoveryClient
// generated clientset
ks kubesphere.Interface
@@ -77,15 +72,14 @@ func NewKubernetesClientOrDie(options *KubernetesOptions) Client {
config.Burst = options.Burst
k := &kubernetesClient{
k8s: kubernetes.NewForConfigOrDie(config),
discoveryClient: discovery.NewDiscoveryClientForConfigOrDie(config),
ks: kubesphere.NewForConfigOrDie(config),
istio: istioclient.NewForConfigOrDie(config),
snapshot: snapshotclient.NewForConfigOrDie(config),
apiextensions: apiextensionsclient.NewForConfigOrDie(config),
prometheus: promresourcesclient.NewForConfigOrDie(config),
master: config.Host,
config: config,
k8s: kubernetes.NewForConfigOrDie(config),
ks: kubesphere.NewForConfigOrDie(config),
istio: istioclient.NewForConfigOrDie(config),
snapshot: snapshotclient.NewForConfigOrDie(config),
apiextensions: apiextensionsclient.NewForConfigOrDie(config),
prometheus: promresourcesclient.NewForConfigOrDie(config),
master: config.Host,
config: config,
}
if options.Master != "" {
@@ -116,11 +110,6 @@ func NewKubernetesClient(options *KubernetesOptions) (Client, error) {
return nil, err
}
k.discoveryClient, err = discovery.NewDiscoveryClientForConfig(config)
if err != nil {
return nil, err
}
k.ks, err = kubesphere.NewForConfig(config)
if err != nil {
return nil, err
@@ -157,10 +146,6 @@ func (k *kubernetesClient) Kubernetes() kubernetes.Interface {
return k.k8s
}
func (k *kubernetesClient) Discovery() discovery.DiscoveryInterface {
return k.discoveryClient
}
func (k *kubernetesClient) KubeSphere() kubesphere.Interface {
return k.ks
}

View File

@@ -21,7 +21,6 @@ import (
"encoding/json"
"io/ioutil"
"net/http"
"net/url"
"reflect"
"testing"
@@ -39,6 +38,11 @@ func TestClient_Get(t *testing.T) {
type args struct {
url string
}
inMemoryCache, err := cache.NewInMemoryCache(nil, nil)
if err != nil {
t.Fatal(err)
}
token, _ := json.Marshal(
&TokenResponse{
Username: "test",
@@ -58,7 +62,7 @@ func TestClient_Get(t *testing.T) {
Strategy: AuthStrategyAnonymous,
cache: nil,
client: &MockClient{
requestResult: "fake",
RequestResult: "fake",
},
ServiceToken: "token",
Host: "http://kiali.istio-system.svc",
@@ -76,8 +80,8 @@ func TestClient_Get(t *testing.T) {
Strategy: AuthStrategyToken,
cache: nil,
client: &MockClient{
tokenResult: token,
requestResult: "fake",
TokenResult: token,
RequestResult: "fake",
},
ServiceToken: "token",
Host: "http://kiali.istio-system.svc",
@@ -93,10 +97,10 @@ func TestClient_Get(t *testing.T) {
name: "Token",
fields: fields{
Strategy: AuthStrategyToken,
cache: cache.NewSimpleCache(),
cache: inMemoryCache,
client: &MockClient{
tokenResult: token,
requestResult: "fake",
TokenResult: token,
RequestResult: "fake",
},
ServiceToken: "token",
Host: "http://kiali.istio-system.svc",
@@ -129,22 +133,3 @@ func TestClient_Get(t *testing.T) {
})
}
}
type MockClient struct {
tokenResult []byte
requestResult string
}
func (c *MockClient) Do(req *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader([]byte(c.requestResult))),
}, nil
}
func (c *MockClient) PostForm(url string, data url.Values) (resp *http.Response, err error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader(c.tokenResult)),
}, nil
}

View File

@@ -0,0 +1,27 @@
package kiali
import (
"bytes"
"io/ioutil"
"net/http"
"net/url"
)
type MockClient struct {
TokenResult []byte
RequestResult string
}
func (c *MockClient) Do(req *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader([]byte(c.RequestResult))),
}, nil
}
func (c *MockClient) PostForm(url string, data url.Values) (resp *http.Response, err error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader(c.TokenResult)),
}, nil
}
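Promoting MockClient into its own file with exported fields lets any test in the package assemble it inline, as the client test above now does. A small usage sketch (the test name is invented):

package kiali

import (
	"io/ioutil"
	"testing"
)

func TestMockClientDo(t *testing.T) {
	mock := &MockClient{RequestResult: "fake"}
	resp, err := mock.Do(nil) // the mock ignores the request entirely
	if err != nil {
		t.Fatal(err)
	}
	body, _ := ioutil.ReadAll(resp.Body)
	if string(body) != "fake" {
		t.Fatalf("unexpected body %q", body)
	}
}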

View File

@@ -20,7 +20,6 @@ import (
"fmt"
"sort"
"strings"
"sync"
"time"
"github.com/go-ldap/ldap"
@@ -63,8 +62,6 @@ type ldapInterfaceImpl struct {
groupSearchBase string
managerDN string
managerPassword string
once sync.Once
}
var _ Interface = &ldapInterfaceImpl{}
@@ -95,7 +92,6 @@ func NewLdapClient(options *Options, stopCh <-chan struct{}) (Interface, error)
groupSearchBase: options.GroupSearchBase,
managerDN: options.ManagerDN,
managerPassword: options.ManagerPassword,
once: sync.Once{},
}
go func() {
@@ -103,9 +99,7 @@ func NewLdapClient(options *Options, stopCh <-chan struct{}) (Interface, error)
client.close()
}()
client.once.Do(func() {
_ = client.createSearchBase()
})
_ = client.createSearchBase()
return client, nil
}

View File

@@ -177,7 +177,7 @@ var promQLTemplates = map[string]string{
"ingress_success_rate": `sum(rate(nginx_ingress_controller_requests{$1,$2,status!~"[4-5].*"}[$3])) / sum(rate(nginx_ingress_controller_requests{$1,$2}[$3]))`,
"ingress_request_duration_average": `sum_over_time(nginx_ingress_controller_request_duration_seconds_sum{$1,$2}[$3])/sum_over_time(nginx_ingress_controller_request_duration_seconds_count{$1,$2}[$3])`,
"ingress_request_duration_50percentage": `histogram_quantile(0.50, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_duration_95percentage": `histogram_quantile(0.90, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_duration_95percentage": `histogram_quantile(0.95, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_duration_99percentage": `histogram_quantile(0.99, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_volume": `round(sum(irate(nginx_ingress_controller_requests{$1,$2}[$3])), 0.001)`,
"ingress_request_volume_by_ingress": `round(sum(irate(nginx_ingress_controller_requests{$1,$2}[$3])) by (ingress), 0.001)`,

Some files were not shown because too many files have changed in this diff.