Compare commits

...

59 Commits

Author SHA1 Message Date
KubeSphere CI Bot
7162d41310 [release-3.3] Check cluster permission for create/update workspacetemplate (#5310)
* add cluster authorization for create/update workspacetemplate

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* add handle forbidden err

* add forbidden error log

* allow to use clusters of public visibility

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>
Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-10-21 09:55:41 +08:00
KubeSphere CI Bot
6b10d346ca [release-3.3] fix #5267 by renaming yaml struct tag (#5275)
fix #5267 by renaming yaml struct tag

Signed-off-by: chavacava <salvadorcavadini+github@gmail.com>

Co-authored-by: chavacava <salvadorcavadini+github@gmail.com>
2022-10-08 14:34:33 +08:00
KubeSphere CI Bot
6a0d5ba93c [release-3.3] Fix: Can not resolve the resource scope correctly (#5274)
Fix: can not resolve the resource scope of clusters.cluster.kubesphere.io correctly

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-10-08 13:58:57 +08:00
KubeSphere CI Bot
d87a782257 [release-3.3] Fix cluster gateway logs and resource status display exception (#5250)
Cluster gateway logs and resource status display exception

Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>

Co-authored-by: hongzhouzi <hongzhouzi@kubesphere.io>
2022-09-28 00:11:23 +08:00
KubeSphere CI Bot
82e55578a8 [release-3.3] fix gateway upgrade validate error. (#5236)
gateway upgrade validate error.

Signed-off-by: hongzhouzi <hongzhouzi@kubesphere.io>

Co-authored-by: hongzhouzi <hongzhouzi@kubesphere.io>
2022-09-21 17:13:17 +08:00
KubeSphere CI Bot
5b9c357160 [release-3.3] Fix: when placement is empty return error (#5218)
Fix: when placement is empty return error

Co-authored-by: Wenhao Zhou <wenhaozhou@yunfiy.com>
2022-09-15 19:38:47 +08:00
KubeSphere CI Bot
c385dd92e4 [release-3.3] Add authorization control for patching workspacetemplates (#5217)
* update patch workspacetemplate for supporting patch with JsonPatchType and change the authorization processing

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* make goimports

* Fix: a non-string type will lead to panic

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* Add jsonpatchutil for handling json patch data

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

* Updated patch workspacetemplate to make the code run more efficiently

* fix: multiple clusterrolebindings cannot authorize

* Correct wrong spelling

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>
Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-09-15 19:32:47 +08:00
KubeSphere CI Bot
1e1b2bd594 [release-3.3] support recording disable and enable users in auditing (#5202)
support recording disable and enable users in auditing

Signed-off-by: wanjunlei <wanjunlei@kubesphere.io>

Co-authored-by: wanjunlei <wanjunlei@kubesphere.io>
2022-09-08 10:25:41 +08:00
KubeSphere CI Bot
951b86648c [release-3.3] fix bug helm repo paging query (#5201)
* fix bug helmrepo paging query

* fix bug helmrepo paging query

* fix bug helm repo paging query

Co-authored-by: mayongxing <mayongxing@cmsr.chinamobile.com>
2022-09-08 10:17:41 +08:00
KubeSphere CI Bot
04433c139d [release-3.3] Fix: index out of range when merging two repo indexes (#5169)
Fix: index out of range when merging two repo indexes

Co-authored-by: LiHui <andrewli@kubesphere.io>
2022-08-25 16:06:36 +08:00
KubeSphere CI Bot
3b8c28d21e [release-3.3] Support for filtering workspace roles using labelSelector (#5162)
Support for filtering workspace roles using labelSelector

Signed-off-by: Wenhao Zhou <wenhaozhou@yunify.com>

Co-authored-by: Wenhao Zhou <wenhaozhou@yunify.com>
2022-08-23 10:30:21 +08:00
KubeSphere CI Bot
9489718270 [release-3.3] fill field status of helmrepo in response (#5158)
fill field status of helmrepo in response

Signed-off-by: x893675 <x893675@icloud.com>

Co-authored-by: x893675 <x893675@icloud.com>
2022-08-22 16:15:00 +08:00
KubeSphere CI Bot
54df6b8c8c [release-3.3] fix cluster ready condition always true (#5137)
fix cluster ready condition always true

Signed-off-by: x893675 <x893675@icloud.com>

Co-authored-by: x893675 <x893675@icloud.com>
2022-08-16 14:12:46 +08:00
KubeSphere CI Bot
d917905529 [release-3.3] Fix ingress P95 delay time promql statement (#5132)
Fix ingress P95 delay time promql statement

Co-authored-by: Xinzhao Xu <z2d@jifangcheng.com>
2022-08-14 16:49:35 +08:00
KubeSphere CI Bot
cd6f940f1d [release-3.3] Adjust container terminal priority: bash, sh (#5076)
Adjust container terminal priority: bash, sh

Co-authored-by: tal66 <77445020+tal66@users.noreply.github.com>
2022-07-21 11:16:29 +08:00
KubeSphere CI Bot
921a8f068b [release-3.3] skip generated code when fmt code (#5079)
skip generated code when fmt code

Co-authored-by: LiHui <andrewli@kubesphere.io>
2022-07-21 11:16:14 +08:00
KubeSphere CI Bot
641aa1dfcf [release-3.3] close remote terminal.(#5023) (#5028)
close remote terminal.(kubesphere#5023)

Co-authored-by: lixueduan <li.xueduan@99cloud.net>
2022-07-06 18:08:34 +08:00
Rick
4522c841af Add the corresponding label 'kind/bug' to the issue template (#4952) 2022-06-20 10:32:52 +08:00
Calvin Yu
8e906ed3de Create SECURITY.md 2022-06-15 10:12:21 +08:00
KubeSphere CI Bot
ac36ff5752 Merge pull request #4940 from xyz-li/sa_token
create default token for service account
2022-06-09 11:32:40 +08:00
LiHui
098b77fb4c add key to queue 2022-06-09 11:13:56 +08:00
LiHui
e97f27e580 create sa token 2022-06-09 10:28:55 +08:00
KubeSphere CI Bot
bc00b67a6e Merge pull request #4938 from qingwave/typo-fix
fix some typos
2022-06-08 10:25:00 +08:00
KubeSphere CI Bot
8b0f2674bd Merge pull request #4939 from iawia002/fix-sync
Promptly handle the cluster when it is deleted
2022-06-08 10:23:43 +08:00
KubeSphere CI Bot
108963f87b Merge pull request #4941 from SinTod/master
Unified call WriteEntity func
2022-06-08 10:20:59 +08:00
KubeSphere CI Bot
6525a3c3b3 Merge pull request #4937 from zhanghw0354/master
add unit test for GetServiceTracing
2022-06-08 10:02:00 +08:00
KubeSphere CI Bot
f0cc7f6430 Merge pull request #4928 from xyz-li/gops
Add agent to report additional information.
2022-06-07 10:51:38 +08:00
LiHui
47563af08c add gops agent to ks-apiserver&&controller-manager 2022-06-07 09:45:09 +08:00
SinTod
26b871ecf4 Unified call WriteEntity func 2022-06-06 15:30:11 +08:00
Xinzhao Xu
5e02f1b86b Promptly handle the cluster when it is deleted 2022-06-06 11:31:14 +08:00
qingwave
c78ab9039a fix some typos 2022-06-06 02:43:23 +00:00
zhanghaiwen
02e99365c7 add unit test for GetServiceTracing 2022-06-02 14:46:27 +08:00
KubeSphere CI Bot
0c2a419a5e Merge pull request #4936 from xyz-li/key
Fix kubeconfig generate bug
2022-06-02 11:58:54 +08:00
LiHui
77e0373777 fix gen key type 2022-06-02 11:19:45 +08:00
KubeSphere CI Bot
04d70b1db4 Merge pull request #4921 from xyz-li/master
complete the help doc
2022-06-01 16:06:53 +08:00
KubeSphere CI Bot
86beabdb32 Merge pull request #4927 from qingwave/gateway-log-context
gateway: avoid pod log connection leak
2022-06-01 16:02:31 +08:00
LiHui
1e8cea4971 add gops 2022-06-01 15:00:33 +08:00
qingwave
107e2ec64c fix: avoid gateway pod log connection leak 2022-06-01 02:14:19 +00:00
LiHui
17b97d7ada complete the help doc 2022-05-31 10:41:25 +08:00
KubeSphere CI Bot
2758e35a4e Merge pull request #4881 from suwliang3/master
feature:test functions in package resources/v1alpha3 by building restful's re…
2022-05-29 23:23:40 +08:00
KubeSphere CI Bot
305da3c0c5 Merge pull request #4918 from anhoder/master
fix:goroutine leak when open terminal
2022-05-29 23:18:51 +08:00
KubeSphere CI Bot
e5ac3608f6 Merge pull request #4916 from ONE7live/dev_test
add some unit test for models
2022-05-29 23:17:50 +08:00
anhoder
d0933055cb fix:goroutine leak when open terminal 2022-05-27 18:25:43 +08:00
KubeSphere CI Bot
fc7cdd7300 Merge pull request #4915 from wansir/master
chore: update vendor
2022-05-27 17:47:25 +08:00
hongming
52b7fb71b2 chore: update vendor 2022-05-27 16:42:26 +08:00
ONE7live
4247387144 add some unit test for models
Signed-off-by: ONE7live <wangqi_yewu@cmss.chinamobile.com>
2022-05-27 16:10:01 +08:00
KubeSphere CI Bot
da5e4cc247 Merge pull request #4904 from xyz-li/master
add workspace to review list
2022-05-25 14:34:31 +08:00
LiHui
73852a8a4b add workspace to review list 2022-05-25 11:58:21 +08:00
suwanliang
b2be653639 run make fmt and make goimports 2022-05-24 18:37:16 +08:00
KubeSphere CI Bot
0418277b57 Merge pull request #4896 from wansir/fix-4890
fix: cluster list granted to users is incorrect
2022-05-23 17:59:52 +08:00
hongming
382be8b16b fix: cluster list granted to users is incorrect 2022-05-23 17:06:19 +08:00
KubeSphere CI Bot
32ac94a7e5 Merge pull request #4889 from xyz-li/sync
cluster not found and repo not found
2022-05-23 15:48:13 +08:00
KubeSphere CI Bot
3e381c9ad5 Merge pull request #4879 from xiaoping378/patch-1
fix unformatted log
2022-05-23 11:56:51 +08:00
LiHui
35027a346b add openpitrix Client to apiserver 2022-05-20 17:37:52 +08:00
LiHui
32b85cd625 cluster clusters 2022-05-20 11:53:51 +08:00
KubeSphere CI Bot
559539275e Merge pull request #4888 from wansir/master
refactor: remove the useless CRD
2022-05-19 15:58:58 +08:00
hongming
211fb293e0 refactor: remove the useless CRD 2022-05-19 15:43:37 +08:00
suwanliang
530b358c94 test functions in package resources/v1alpha3 by building restful's res and res 2022-05-16 18:27:06 +08:00
xiaoping
aa78e3215c fix unformatted log 2022-05-15 21:05:58 +08:00
86 changed files with 2277 additions and 1124 deletions


@@ -1,5 +1,6 @@
---
name: Bug report
labels: ["kind/bug"]
about: Create a report to help us improve
---

50
SECURITY.md Normal file

@@ -0,0 +1,50 @@
# Security Policy
## Supported Versions
The following versions of KubeSphere are currently supported with security updates.
| Version | Supported |
| ------- | ------------------ |
| 3.2.x | :white_check_mark: |
| 3.1.x | :white_check_mark: |
| 3.0.x | :white_check_mark: |
| 2.1.x | :white_check_mark: |
| < 2.1.x | :x: |
## Reporting a Vulnerability
### Security Vulnerability Disclosure and Response Process
To ensure the security of KubeSphere, the community has adopted a security vulnerability disclosure and response process. A security team has been set up in the KubeSphere community, and issues and PRs are welcome from all contributors.
The primary goal of this process is to reduce the total time users are exposed to publicly known vulnerabilities. To fix vulnerabilities in KubeSphere quickly, the security team is responsible for the entire vulnerability management process, including internal communication and external disclosure.
If you find a vulnerability or encounter a security incident involving KubeSphere, please report it to the KubeSphere security team (security@kubesphere.io) as soon as possible.
Please provide as much information about the vulnerability as possible, in the following format:
- Issue title (please add the 'Security' label)*:
- Overview*:
- Affected components and version number*:
- CVE number (if any):
- Vulnerability verification process*:
- Contact information*:
The asterisk (*) indicates a required field.
### Response Time
The KubeSphere security team will confirm the vulnerability and contact you within 2 working days of your submission.
We will publicly thank you after the security vulnerability is fixed. To avoid negative impact, please keep the vulnerability confidential until we fix it. We would appreciate it if you could follow these guidelines:
- The vulnerability will not be disclosed until KubeSphere releases a patch for it.
- Details of the vulnerability, such as exploit code, will not be disclosed.


@@ -488,9 +488,10 @@ func addAllControllers(mgr manager.Manager, client k8s.Client, informerFactory i
if cmOptions.MultiClusterOptions.Enable {
clusterController := cluster.NewClusterController(
client.Kubernetes(),
client.KubeSphere(),
client.Config(),
kubesphereInformer.Cluster().V1alpha1().Clusters(),
client.KubeSphere().ClusterV1alpha1().Clusters(),
kubesphereInformer.Iam().V1alpha2().Users().Lister(),
cmOptions.MultiClusterOptions.ClusterControllerResyncPeriod,
cmOptions.MultiClusterOptions.HostClusterName,
)


@@ -82,6 +82,9 @@ type KubeSphereControllerManagerOptions struct {
// * has the lowest priority.
// e.g. *,-foo, means "disable 'foo'"
ControllerGates []string
// Enable gops or not.
GOPSEnabled bool
}
func NewKubeSphereControllerManagerOptions() *KubeSphereControllerManagerOptions {
@@ -144,6 +147,9 @@ func (s *KubeSphereControllerManagerOptions) Flags(allControllerNameSelectors []
"named 'foo', '-foo' disables the controller named 'foo'.\nAll controllers: %s",
strings.Join(allControllerNameSelectors, ", ")))
gfs.BoolVar(&s.GOPSEnabled, "gops", s.GOPSEnabled, "Whether to enable gops or not. When enabled this option, "+
"controller-manager will listen on a random port on 127.0.0.1, then you can use the gops tool to list and diagnose the controller-manager currently running.")
kfs := fss.FlagSet("klog")
local := flag.NewFlagSet("klog", flag.ExitOnError)
klog.InitFlags(local)


@@ -21,6 +21,7 @@ import (
"fmt"
"os"
"github.com/google/gops/agent"
"github.com/spf13/cobra"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
@@ -73,12 +74,21 @@ func NewControllerManagerCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "controller-manager",
Long: `KubeSphere controller manager is a daemon that`,
Long: `KubeSphere controller manager is a daemon that embeds the control loops shipped with KubeSphere.`,
Run: func(cmd *cobra.Command, args []string) {
if errs := s.Validate(allControllers); len(errs) != 0 {
klog.Error(utilerrors.NewAggregate(errs))
os.Exit(1)
}
if s.GOPSEnabled {
// Add agent to report additional information such as the current stack trace, Go version, memory stats, etc.
// Bind to a random port on address 127.0.0.1
if err := agent.Listen(agent.Options{}); err != nil {
klog.Fatal(err)
}
}
if err = Run(s, controllerconfig.WatchConfigChange(), signals.SetupSignalHandler()); err != nil {
klog.Error(err)
os.Exit(1)


@@ -21,6 +21,9 @@ import (
"flag"
"fmt"
openpitrixv1 "kubesphere.io/kubesphere/pkg/kapis/openpitrix/v1"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
"kubesphere.io/kubesphere/pkg/apiserver/authentication/token"
"k8s.io/client-go/kubernetes/scheme"
@@ -59,6 +62,9 @@ type ServerRunOptions struct {
//
DebugMode bool
// Enable gops or not.
GOPSEnabled bool
}
func NewServerRunOptions() *ServerRunOptions {
@@ -73,6 +79,8 @@ func NewServerRunOptions() *ServerRunOptions {
func (s *ServerRunOptions) Flags() (fss cliflag.NamedFlagSets) {
fs := fss.FlagSet("generic")
fs.BoolVar(&s.DebugMode, "debug", false, "Don't enable this if you don't know what it means.")
fs.BoolVar(&s.GOPSEnabled, "gops", false, "Whether to enable gops or not. When enabled this option, "+
"ks-apiserver will listen on a random port on 127.0.0.1, then you can use the gops tool to list and diagnose the ks-apiserver currently running.")
s.GenericServerRunOptions.AddFlags(fs, s.GenericServerRunOptions)
s.KubernetesOptions.AddFlags(fss.FlagSet("kubernetes"), s.KubernetesOptions)
s.AuthenticationOptions.AddFlags(fss.FlagSet("authentication"), s.AuthenticationOptions)
@@ -209,6 +217,13 @@ func (s *ServerRunOptions) NewAPIServer(stopCh <-chan struct{}) (*apiserver.APIS
apiServer.AlertingClient = alertingClient
}
if s.Config.MultiClusterOptions.Enable {
cc := clusterclient.NewClusterClient(informerFactory.KubeSphereSharedInformerFactory().Cluster().V1alpha1().Clusters())
apiServer.ClusterClient = cc
}
apiServer.OpenpitrixClient = openpitrixv1.NewOpenpitrixClient(informerFactory, apiServer.KubernetesClient.KubeSphere(), s.OpenPitrixOptions, apiServer.ClusterClient, stopCh)
server := &http.Server{
Addr: fmt.Sprintf(":%d", s.GenericServerRunOptions.InsecurePort),
}


@@ -21,6 +21,7 @@ import (
"fmt"
"net/http"
"github.com/google/gops/agent"
"github.com/spf13/cobra"
utilerrors "k8s.io/apimachinery/pkg/util/errors"
cliflag "k8s.io/component-base/cli/flag"
@@ -57,6 +58,15 @@ cluster's shared state through which all other components interact.`,
if errs := s.Validate(); len(errs) != 0 {
return utilerrors.NewAggregate(errs)
}
if s.GOPSEnabled {
// Add agent to report additional information such as the current stack trace, Go version, memory stats, etc.
// Bind to a random port on address 127.0.0.1.
if err := agent.Listen(agent.Options{}); err != nil {
klog.Fatal(err)
}
}
return Run(s, apiserverconfig.WatchConfigChange(), signals.SetupSignalHandler())
},
SilenceUsage: true,

8
go.mod

@@ -50,6 +50,7 @@ require (
github.com/golang/example v0.0.0-20170904185048-46695d81d1fa
github.com/google/go-cmp v0.5.6
github.com/google/go-containerregistry v0.6.0
github.com/google/gops v0.3.23
github.com/google/uuid v1.1.2
github.com/gorilla/handlers v1.4.0 // indirect
github.com/gorilla/websocket v1.4.2
@@ -81,6 +82,8 @@ require (
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/common v0.26.0
github.com/prometheus/prometheus v1.8.2-0.20200907175821-8219b442c864
github.com/shirou/gopsutil v0.0.0-20180427012116-c95755e4bcd7 // indirect
github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4 // indirect
github.com/sony/sonyflake v0.0.0-20181109022403-6d5bd6181009
github.com/speps/go-hashids v2.0.0+incompatible
github.com/spf13/cobra v1.2.1
@@ -258,6 +261,7 @@ replace (
github.com/coreos/pkg => github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f
github.com/cortexproject/cortex => github.com/cortexproject/cortex v1.3.1-0.20200901115931-255ff3306960
github.com/cpuguy83/go-md2man => github.com/cpuguy83/go-md2man v1.0.10
github.com/cpuguy83/go-md2man/v2 => github.com/cpuguy83/go-md2man/v2 v2.0.0
github.com/creack/pty => github.com/creack/pty v1.1.7
github.com/cyphar/filepath-securejoin => github.com/cyphar/filepath-securejoin v0.2.2
github.com/cznic/b => github.com/cznic/b v0.0.0-20180115125044-35e9bbe41f07
@@ -375,7 +379,6 @@ replace (
github.com/gobwas/pool => github.com/gobwas/pool v0.2.0
github.com/gobwas/ws => github.com/gobwas/ws v1.0.2
github.com/gocql/gocql => github.com/gocql/gocql v0.0.0-20200526081602-cd04bd7f22a7
github.com/gocraft/dbr => github.com/gocraft/dbr v0.0.0-20180507214907-a0fd650918f6
github.com/godbus/dbus => github.com/godbus/dbus v0.0.0-20190402143921-271e53dc4968
github.com/godror/godror => github.com/godror/godror v0.13.3
github.com/gofrs/flock => github.com/gofrs/flock v0.7.1
@@ -501,10 +504,10 @@ replace (
github.com/kr/pty => github.com/kr/pty v1.1.5
github.com/kr/text => github.com/kr/text v0.1.0
github.com/kshvakov/clickhouse => github.com/kshvakov/clickhouse v1.3.5
github.com/kubernetes-csi/external-snapshotter/client/v3 => github.com/kubernetes-csi/external-snapshotter/client/v3 v3.0.0
github.com/kubernetes-csi/external-snapshotter/client/v4 => github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
github.com/kubesphere/pvc-autoresizer => github.com/kubesphere/pvc-autoresizer v0.1.1
github.com/kubesphere/sonargo => github.com/kubesphere/sonargo v0.0.2
github.com/kubesphere/storageclass-accessor => github.com/kubesphere/storageclass-accessor v0.2.0
github.com/kylelemons/go-gypsy => github.com/kylelemons/go-gypsy v0.0.0-20160905020020-08cad365cd28
github.com/kylelemons/godebug => github.com/kylelemons/godebug v0.0.0-20160406211939-eadb3ce320cb
github.com/lann/builder => github.com/lann/builder v0.0.0-20180802200727-47ae307949d0
@@ -653,6 +656,7 @@ replace (
github.com/sergi/go-diff => github.com/sergi/go-diff v1.0.0
github.com/shopspring/decimal => github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24
github.com/shurcooL/httpfs => github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749
github.com/shurcooL/sanitized_anchor_name => github.com/shurcooL/sanitized_anchor_name v1.0.0
github.com/shurcooL/vfsgen => github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd
github.com/siebenmann/go-kstat => github.com/siebenmann/go-kstat v0.0.0-20160321171754-d34789b79745
github.com/sirupsen/logrus => github.com/sirupsen/logrus v1.4.2

12
go.sum

@@ -59,6 +59,7 @@ github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d h1:UrqY+r/O
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/StackExchange/wmi v1.2.1/go.mod h1:rcmrprowKIVzvc+NUiLncP2uuArMWLCbu9SBzvHz7e8=
github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
github.com/agnivade/levenshtein v1.0.1/go.mod h1:CURSv5d9Uaml+FovSIICkLbAUZ9S4RqaHDIsdSBg7lM=
@@ -287,6 +288,8 @@ github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc=
github.com/go-logr/logr v0.4.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/zapr v0.4.0 h1:uc1uML3hRYL9/ZZPdgHS/n8Nzo+eaYL/Efxkkamf7OM=
github.com/go-logr/zapr v0.4.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.2.6-0.20210915003542-8b1f7f90f6b1/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-openapi/analysis v0.19.10 h1:5BHISBAXOc/aJK25irLZnx2D3s6WyYaY9D4gmuz9fdE=
github.com/go-openapi/analysis v0.19.10/go.mod h1:qmhS3VNFxBlquFJ0RGoDtylO9y4pgTAUNE9AEEMdlJQ=
github.com/go-openapi/errors v0.19.4 h1:fSGwO1tSYHFu70NKaWJt5Qh0qoBRtCm/mXS1yhf+0W0=
@@ -388,6 +391,8 @@ github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASu
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gops v0.3.23 h1:OjsHRINl5FiIyTc8jivIg4UN0GY6Nh32SL8KRbl8GQo=
github.com/google/gops v0.3.23/go.mod h1:7diIdLsqpCihPSX3fQagksT/Ku/y4RL9LHTlKyEUDl8=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20200417002340-c6e0a841f49a/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
@@ -501,6 +506,7 @@ github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dv
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kevinburke/ssh_config v0.0.0-20180830205328-81db2a75821e h1:RgQk53JHp/Cjunrr1WlsXSZpqXn+uREuHvUVcK82CV8=
github.com/kevinburke/ssh_config v0.0.0-20180830205328-81db2a75821e/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
github.com/keybase/go-ps v0.0.0-20190827175125-91aafc93ba19/go.mod h1:hY+WOq6m2FpbvyrI93sMaypsttvaIL5nhVR92dTMUcQ=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kisielk/sqlstruct v0.0.0-20150923205031-648daed35d49/go.mod h1:yyMNCyc/Ib3bDTKd379tNMpB/7/H5TjM2Y9QJ5THLbE=
@@ -741,6 +747,9 @@ github.com/segmentio/kafka-go v0.2.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfP
github.com/sercand/kuberesolver v2.4.0+incompatible/go.mod h1:lWF3GL0xptCB/vCiJPl/ZshwPsX/n4Y7u0CW9E7aQIQ=
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shirou/gopsutil v0.0.0-20180427012116-c95755e4bcd7/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/shirou/gopsutil/v3 v3.21.9/go.mod h1:YWp/H8Qs5fVmf17v7JNZzA0mPJ+mS2e9JdiUF9LlKzQ=
github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4/go.mod h1:qsXQc7+bwAM3Q1u/4XEfrquwF8Lw7D7y5cD8CuHnfIc=
github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24/go.mod h1:M+9NzErvs504Cn4c5DxATwIqPbtswREoFCre64PpcG4=
github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
@@ -784,6 +793,8 @@ github.com/thanos-io/thanos v0.13.1-0.20200910143741-e0b7f7b32e9c/go.mod h1:1Ize
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tinylib/msgp v1.1.0/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tklauser/go-sysconf v0.3.9/go.mod h1:11DU/5sG7UexIrp/O6g35hrWzu0JxlwQ3LSFUzyeuhs=
github.com/tklauser/numcpus v0.3.0/go.mod h1:yFGUr7TUHQRAhyqBcEg0Ge34zDBAsIvJJcyE6boqnA8=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5 h1:LnC5Kc/wtumK+WB441p7ynQJzVuNRJiqddSIE3IlSEQ=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
@@ -991,6 +1002,7 @@ k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/
kubesphere.io/monitoring-dashboard v0.2.2 h1:aniATtXLgRAAvKOjd2UxWWHMh4/T7a0HoQ9bd+/bGcA=
kubesphere.io/monitoring-dashboard v0.2.2/go.mod h1:ksDjmOuoN0C0GuYp0s5X3186cPgk2asLUaO1WlEKISY=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/goversion v1.2.0/go.mod h1:Eih9y/uIBS3ulggl7KNJ09xGSLcuNaLgmvvqa07sgfo=
rsc.io/letsencrypt v0.0.1 h1:DV0d09Ne9E7UUa9ZqWktZ9L2VmybgTgfq7xlfFR/bbU=
rsc.io/letsencrypt v0.0.1/go.mod h1:buyQKZ6IXrRnB7TdkHP0RyEybLx18HHyOSoTyoOLqNY=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=


@@ -39,6 +39,7 @@ find_files() {
-o -wholename '*/third_party/*' \
-o -wholename '*/vendor/*' \
-o -wholename './staging/src/kubesphere.io/client-go/*vendor/*' \
-o -wholename './staging/src/kubesphere.io/api/*/zz_generated.deepcopy.go' \
\) -prune \
\) -name '*.go'
}

1
hack/verify-gofmt.sh Normal file → Executable file

@@ -44,6 +44,7 @@ find_files() {
-o -wholename '*/third_party/*' \
-o -wholename '*/vendor/*' \
-o -wholename './staging/src/kubesphere.io/client-go/*vendor/*' \
-o -wholename './staging/src/kubesphere.io/api/*/zz_generated.deepcopy.go' \
-o -wholename '*/bindata.go' \
\) -prune \
\) -name '*.go'


@@ -26,28 +26,25 @@ import (
"sync"
"time"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/client-go/discovery"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/util/retry"
"github.com/emicklei/go-restful"
extv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
"k8s.io/apimachinery/pkg/api/errors"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
urlruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/sets"
unionauth "k8s.io/apiserver/pkg/authentication/request/union"
"k8s.io/apiserver/pkg/endpoints/handlers/responsewriters"
"k8s.io/client-go/discovery"
"k8s.io/client-go/util/retry"
"k8s.io/klog"
runtimecache "sigs.k8s.io/controller-runtime/pkg/cache"
runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
clusterv1alpha1 "kubesphere.io/api/cluster/v1alpha1"
iamv1alpha2 "kubesphere.io/api/iam/v1alpha2"
notificationv2beta1 "kubesphere.io/api/notification/v2beta1"
tenantv1alpha1 "kubesphere.io/api/tenant/v1alpha1"
typesv1beta1 "kubesphere.io/api/types/v1beta1"
runtimecache "sigs.k8s.io/controller-runtime/pkg/cache"
runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
audit "kubesphere.io/kubesphere/pkg/apiserver/auditing"
"kubesphere.io/kubesphere/pkg/apiserver/authentication/authenticators/basic"
@@ -98,6 +95,7 @@ import (
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/group"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/loginrecord"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/user"
"kubesphere.io/kubesphere/pkg/simple/client/alerting"
@@ -110,6 +108,7 @@ import (
"kubesphere.io/kubesphere/pkg/simple/client/monitoring"
"kubesphere.io/kubesphere/pkg/simple/client/s3"
"kubesphere.io/kubesphere/pkg/simple/client/sonarqube"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
"kubesphere.io/kubesphere/pkg/utils/iputil"
"kubesphere.io/kubesphere/pkg/utils/metrics"
)
@@ -164,6 +163,10 @@ type APIServer struct {
// controller-runtime client
RuntimeClient runtimeclient.Client
ClusterClient clusterclient.ClusterClients
OpenpitrixClient openpitrix.Interface
}
func (s *APIServer) PrepareRun(stopCh <-chan struct{}) error {
@@ -223,17 +226,17 @@ func (s *APIServer) installKubeSphereAPIs(stopCh <-chan struct{}) {
urlruntime.Must(configv1alpha2.AddToContainer(s.container, s.Config))
urlruntime.Must(resourcev1alpha3.AddToContainer(s.container, s.InformerFactory, s.RuntimeCache))
urlruntime.Must(monitoringv1alpha3.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.MetricsClient, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions, s.RuntimeClient, stopCh))
urlruntime.Must(meteringv1alpha1.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.RuntimeCache, s.Config.MeteringOptions, nil, s.RuntimeClient, stopCh))
urlruntime.Must(openpitrixv1.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions, stopCh))
urlruntime.Must(monitoringv1alpha3.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.MetricsClient, s.InformerFactory, s.OpenpitrixClient, s.RuntimeClient))
urlruntime.Must(meteringv1alpha1.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.MonitoringClient, s.InformerFactory, s.RuntimeCache, s.Config.MeteringOptions, s.OpenpitrixClient, s.RuntimeClient))
urlruntime.Must(openpitrixv1.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions, s.OpenpitrixClient))
urlruntime.Must(openpitrixv2alpha1.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.KubeSphere(), s.Config.OpenPitrixOptions))
urlruntime.Must(operationsv1alpha2.AddToContainer(s.container, s.KubernetesClient.Kubernetes()))
urlruntime.Must(resourcesv1alpha2.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), s.InformerFactory,
s.KubernetesClient.Master()))
urlruntime.Must(tenantv1alpha2.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.Kubernetes(),
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, stopCh))
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, imOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, s.OpenpitrixClient))
urlruntime.Must(tenantv1alpha3.AddToContainer(s.container, s.InformerFactory, s.KubernetesClient.Kubernetes(),
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, stopCh))
s.KubernetesClient.KubeSphere(), s.EventsClient, s.LoggingClient, s.AuditingClient, amOperator, imOperator, rbacAuthorizer, s.MonitoringClient, s.RuntimeCache, s.Config.MeteringOptions, s.OpenpitrixClient))
urlruntime.Must(terminalv1alpha2.AddToContainer(s.container, s.KubernetesClient.Kubernetes(), rbacAuthorizer, s.KubernetesClient.Config(), s.Config.TerminalOptions))
urlruntime.Must(clusterkapisv1alpha1.AddToContainer(s.container,
s.KubernetesClient.KubeSphere(),
@@ -346,7 +349,7 @@ func (s *APIServer) buildHandlerChain(stopCh <-chan struct{}) {
handler = filters.WithAuthorization(handler, authorizers)
if s.Config.MultiClusterOptions.Enable {
clusterDispatcher := dispatch.NewClusterDispatch(s.InformerFactory.KubeSphereSharedInformerFactory().Cluster().V1alpha1().Clusters())
clusterDispatcher := dispatch.NewClusterDispatch(s.ClusterClient)
handler = filters.WithMultipleClusterDispatcher(handler, clusterDispatcher)
}
@@ -531,7 +534,6 @@ func (s *APIServer) waitForResourceSync(ctx context.Context) error {
typesv1beta1.ResourcePluralFederatedConfigmap,
typesv1beta1.ResourcePluralFederatedStatefulSet,
typesv1beta1.ResourcePluralFederatedIngress,
typesv1beta1.ResourcePluralFederatedResourceQuota,
typesv1beta1.ResourcePluralFederatedPersistentVolumeClaim,
typesv1beta1.ResourcePluralFederatedApplication,
}

@@ -33,8 +33,8 @@ import (
"k8s.io/apimachinery/pkg/types"
"k8s.io/apiserver/pkg/apis/audit"
"k8s.io/klog"
devopsv1alpha3 "kubesphere.io/api/devops/v1alpha3"
"kubesphere.io/api/iam/v1alpha2"
auditv1alpha1 "kubesphere.io/kubesphere/pkg/apiserver/auditing/v1alpha1"
"kubesphere.io/kubesphere/pkg/apiserver/query"
@@ -192,7 +192,7 @@ func (a *auditing) LogRequestObject(req *http.Request, info *request.RequestInfo
}
}
if (e.Level.GreaterOrEqual(audit.LevelRequest) || e.Verb == "create") && req.ContentLength > 0 {
if a.needAnalyzeRequestBody(e, req) {
body, err := ioutil.ReadAll(req.Body)
if err != nil {
klog.Error(err)
@@ -212,11 +212,45 @@ func (a *auditing) LogRequestObject(req *http.Request, info *request.RequestInfo
e.ObjectRef.Name = obj.Name
}
}
// for recording disable and enable user
if e.ObjectRef.Resource == "users" && e.Verb == "update" {
u := &v1alpha2.User{}
if err := json.Unmarshal(body, u); err == nil {
if u.Status.State == v1alpha2.UserActive {
e.Verb = "enable"
} else if u.Status.State == v1alpha2.UserDisabled {
e.Verb = "disable"
}
}
}
}
return e
}
func (a *auditing) needAnalyzeRequestBody(e *auditv1alpha1.Event, req *http.Request) bool {
if req.ContentLength <= 0 {
return false
}
if e.Level.GreaterOrEqual(audit.LevelRequest) {
return true
}
if e.Verb == "create" {
return true
}
// for recording disable and enable user
if e.ObjectRef.Resource == "users" && e.Verb == "update" {
return true
}
return false
}
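The new `needAnalyzeRequestBody` gate above means a `users` `update` request body is now parsed even below `audit.LevelRequest`, so the enable/disable verb rewrite can actually fire. A minimal sketch of that rewrite, with hypothetical stand-ins for the iam/v1alpha2 `User` type and state constants (not the real KubeSphere types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical stand-ins for iam/v1alpha2's user states.
const (
	UserActive   = "Active"
	UserDisabled = "Disabled"
)

// User is a reduced stand-in carrying only the field the audit code reads.
type User struct {
	Status struct {
		State string `json:"state"`
	} `json:"status"`
}

// auditVerb mirrors the patch: an "update" on the users resource is
// rewritten to "enable" or "disable" based on the submitted state.
func auditVerb(resource, verb string, body []byte) string {
	if resource != "users" || verb != "update" {
		return verb
	}
	u := &User{}
	if err := json.Unmarshal(body, u); err != nil {
		return verb
	}
	switch u.Status.State {
	case UserActive:
		return "enable"
	case UserDisabled:
		return "disable"
	}
	return verb
}

func main() {
	body := []byte(`{"status":{"state":"Disabled"}}`)
	fmt.Println(auditVerb("users", "update", body)) // disable
	fmt.Println(auditVerb("users", "update", []byte(`{}`)))
}
```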
func (a *auditing) LogResponseObject(e *auditv1alpha1.Event, resp *ResponseCapture) {
e.StageTimestamp = metav1.NowMicro()

@@ -45,7 +45,7 @@ func init() {
type ldapProvider struct {
// Host and optional port of the LDAP server in the form "host:port".
// If the port is not supplied, 389 for insecure or StartTLS connections, 636
Host string `json:"host,omitempty" yaml:"managerDN"`
Host string `json:"host,omitempty" yaml:"host"`
// Timeout duration when reading data from remote server. Default to 15s.
ReadTimeout int `json:"readTimeout" yaml:"readTimeout"`
// If specified, connections will use the ldaps:// protocol

@@ -30,7 +30,6 @@ import (
clusterv1alpha1 "kubesphere.io/api/cluster/v1alpha1"
"kubesphere.io/kubesphere/pkg/apiserver/request"
clusterinformer "kubesphere.io/kubesphere/pkg/client/informers/externalversions/cluster/v1alpha1"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
)
@@ -47,8 +46,8 @@ type clusterDispatch struct {
clusterclient.ClusterClients
}
func NewClusterDispatch(clusterInformer clusterinformer.ClusterInformer) Dispatcher {
return &clusterDispatch{clusterclient.NewClusterClient(clusterInformer)}
func NewClusterDispatch(cc clusterclient.ClusterClients) Dispatcher {
return &clusterDispatch{cc}
}
// Dispatch dispatches requests to the designated cluster
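Swapping the informer parameter for the `clusterclient.ClusterClients` interface is a dependency-injection cleanup: the dispatcher no longer builds its own client and can be handed a fake in tests. A generic sketch of the pattern, with an illustrative narrow interface rather than the real KubeSphere one:

```go
package main

import "fmt"

// ClusterClients is illustrative; the real interface in
// pkg/utils/clusterclient exposes cluster lookup methods.
type ClusterClients interface {
	GetClusterEndpoint(name string) (string, error)
}

// clusterDispatch embeds the interface, as the patched code does.
type clusterDispatch struct {
	ClusterClients
}

// NewClusterDispatch accepts any ClusterClients implementation.
func NewClusterDispatch(cc ClusterClients) *clusterDispatch {
	return &clusterDispatch{cc}
}

// fakeClients shows the payoff: tests need no informer machinery.
type fakeClients map[string]string

func (f fakeClients) GetClusterEndpoint(name string) (string, error) {
	ep, ok := f[name]
	if !ok {
		return "", fmt.Errorf("cluster %s not found", name)
	}
	return ep, nil
}

func main() {
	d := NewClusterDispatch(fakeClients{"host": "https://10.0.0.1:6443"})
	ep, _ := d.GetClusterEndpoint("host")
	fmt.Println(ep) // https://10.0.0.1:6443
}
```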

@@ -246,8 +246,6 @@ func (r *RequestInfoFactory) NewRequestInfo(req *http.Request) (*RequestInfo, er
// parsing successful, so we now know the proper value for .Parts
requestInfo.Parts = currentParts
requestInfo.ResourceScope = r.resolveResourceScope(requestInfo)
// parts look like: resource/resourceName/subresource/other/stuff/we/don't/interpret
switch {
case len(requestInfo.Parts) >= 3 && !specialVerbsNoSubresources.Has(requestInfo.Verb):
@@ -260,6 +258,8 @@ func (r *RequestInfoFactory) NewRequestInfo(req *http.Request) (*RequestInfo, er
requestInfo.Resource = requestInfo.Parts[0]
}
requestInfo.ResourceScope = r.resolveResourceScope(requestInfo)
// if there's no name on the request and we thought it was a get before, then the actual verb is a list or a watch
if len(requestInfo.Name) == 0 && requestInfo.Verb == "get" {
opts := metainternalversion.ListOptions{}
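The fix above (#5274) is purely about ordering: `resolveResourceScope` ran before `requestInfo.Parts` (and hence `Name`) was finalized, so cluster-scoped resources such as `clusters.cluster.kubesphere.io` resolved to the wrong scope. A hypothetical reduction of the bug, where the scope depends on whether a resource name was parsed (all names here are illustrative):

```go
package main

import "fmt"

// requestInfo is a reduced stand-in for the API server's RequestInfo.
type requestInfo struct {
	Resource string
	Name     string
}

// resolveScope stands in for resolveResourceScope: with a name the
// request targets one object, without one it targets the collection.
func resolveScope(r requestInfo) string {
	if r.Name != "" {
		return "Cluster"
	}
	return "Global"
}

func parse(parts []string) requestInfo {
	r := requestInfo{Resource: parts[0]}
	// Pre-fix position: resolving here, before Name is set, would
	// always yield "Global" — the bug the patch removes.
	if len(parts) > 1 {
		r.Name = parts[1]
	}
	return r
}

func main() {
	r := parse([]string{"clusters", "host"})
	// Post-fix position: resolve only after parsing completes.
	fmt.Println(resolveScope(r)) // Cluster
}
```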

@@ -1,142 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package fake
import (
"context"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
labels "k8s.io/apimachinery/pkg/labels"
schema "k8s.io/apimachinery/pkg/runtime/schema"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
testing "k8s.io/client-go/testing"
v1beta1 "kubesphere.io/api/types/v1beta1"
)
// FakeFederatedResourceQuotas implements FederatedResourceQuotaInterface
type FakeFederatedResourceQuotas struct {
Fake *FakeTypesV1beta1
ns string
}
var federatedresourcequotasResource = schema.GroupVersionResource{Group: "types.kubefed.io", Version: "v1beta1", Resource: "federatedresourcequotas"}
var federatedresourcequotasKind = schema.GroupVersionKind{Group: "types.kubefed.io", Version: "v1beta1", Kind: "FederatedResourceQuota"}
// Get takes name of the federatedResourceQuota, and returns the corresponding federatedResourceQuota object, and an error if there is any.
func (c *FakeFederatedResourceQuotas) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewGetAction(federatedresourcequotasResource, c.ns, name), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// List takes label and field selectors, and returns the list of FederatedResourceQuotas that match those selectors.
func (c *FakeFederatedResourceQuotas) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.FederatedResourceQuotaList, err error) {
obj, err := c.Fake.
Invokes(testing.NewListAction(federatedresourcequotasResource, federatedresourcequotasKind, c.ns, opts), &v1beta1.FederatedResourceQuotaList{})
if obj == nil {
return nil, err
}
label, _, _ := testing.ExtractFromListOptions(opts)
if label == nil {
label = labels.Everything()
}
list := &v1beta1.FederatedResourceQuotaList{ListMeta: obj.(*v1beta1.FederatedResourceQuotaList).ListMeta}
for _, item := range obj.(*v1beta1.FederatedResourceQuotaList).Items {
if label.Matches(labels.Set(item.Labels)) {
list.Items = append(list.Items, item)
}
}
return list, err
}
// Watch returns a watch.Interface that watches the requested federatedResourceQuotas.
func (c *FakeFederatedResourceQuotas) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
return c.Fake.
InvokesWatch(testing.NewWatchAction(federatedresourcequotasResource, c.ns, opts))
}
// Create takes the representation of a federatedResourceQuota and creates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *FakeFederatedResourceQuotas) Create(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.CreateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewCreateAction(federatedresourcequotasResource, c.ns, federatedResourceQuota), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// Update takes the representation of a federatedResourceQuota and updates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *FakeFederatedResourceQuotas) Update(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewUpdateAction(federatedresourcequotasResource, c.ns, federatedResourceQuota), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *FakeFederatedResourceQuotas) UpdateStatus(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (*v1beta1.FederatedResourceQuota, error) {
obj, err := c.Fake.
Invokes(testing.NewUpdateSubresourceAction(federatedresourcequotasResource, "status", c.ns, federatedResourceQuota), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}
// Delete takes name of the federatedResourceQuota and deletes it. Returns an error if one occurs.
func (c *FakeFederatedResourceQuotas) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
_, err := c.Fake.
Invokes(testing.NewDeleteAction(federatedresourcequotasResource, c.ns, name), &v1beta1.FederatedResourceQuota{})
return err
}
// DeleteCollection deletes a collection of objects.
func (c *FakeFederatedResourceQuotas) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
action := testing.NewDeleteCollectionAction(federatedresourcequotasResource, c.ns, listOpts)
_, err := c.Fake.Invokes(action, &v1beta1.FederatedResourceQuotaList{})
return err
}
// Patch applies the patch and returns the patched federatedResourceQuota.
func (c *FakeFederatedResourceQuotas) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.FederatedResourceQuota, err error) {
obj, err := c.Fake.
Invokes(testing.NewPatchSubresourceAction(federatedresourcequotasResource, c.ns, name, pt, data, subresources...), &v1beta1.FederatedResourceQuota{})
if obj == nil {
return nil, err
}
return obj.(*v1beta1.FederatedResourceQuota), err
}

@@ -76,10 +76,6 @@ func (c *FakeTypesV1beta1) FederatedPersistentVolumeClaims(namespace string) v1b
return &FakeFederatedPersistentVolumeClaims{c, namespace}
}
func (c *FakeTypesV1beta1) FederatedResourceQuotas(namespace string) v1beta1.FederatedResourceQuotaInterface {
return &FakeFederatedResourceQuotas{c, namespace}
}
func (c *FakeTypesV1beta1) FederatedSecrets(namespace string) v1beta1.FederatedSecretInterface {
return &FakeFederatedSecrets{c, namespace}
}

@@ -1,195 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by client-gen. DO NOT EDIT.
package v1beta1
import (
"context"
"time"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
types "k8s.io/apimachinery/pkg/types"
watch "k8s.io/apimachinery/pkg/watch"
rest "k8s.io/client-go/rest"
v1beta1 "kubesphere.io/api/types/v1beta1"
scheme "kubesphere.io/kubesphere/pkg/client/clientset/versioned/scheme"
)
// FederatedResourceQuotasGetter has a method to return a FederatedResourceQuotaInterface.
// A group's client should implement this interface.
type FederatedResourceQuotasGetter interface {
FederatedResourceQuotas(namespace string) FederatedResourceQuotaInterface
}
// FederatedResourceQuotaInterface has methods to work with FederatedResourceQuota resources.
type FederatedResourceQuotaInterface interface {
Create(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.CreateOptions) (*v1beta1.FederatedResourceQuota, error)
Update(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (*v1beta1.FederatedResourceQuota, error)
UpdateStatus(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (*v1beta1.FederatedResourceQuota, error)
Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error
Get(ctx context.Context, name string, opts v1.GetOptions) (*v1beta1.FederatedResourceQuota, error)
List(ctx context.Context, opts v1.ListOptions) (*v1beta1.FederatedResourceQuotaList, error)
Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error)
Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.FederatedResourceQuota, err error)
FederatedResourceQuotaExpansion
}
// federatedResourceQuotas implements FederatedResourceQuotaInterface
type federatedResourceQuotas struct {
client rest.Interface
ns string
}
// newFederatedResourceQuotas returns a FederatedResourceQuotas
func newFederatedResourceQuotas(c *TypesV1beta1Client, namespace string) *federatedResourceQuotas {
return &federatedResourceQuotas{
client: c.RESTClient(),
ns: namespace,
}
}
// Get takes name of the federatedResourceQuota, and returns the corresponding federatedResourceQuota object, and an error if there is any.
func (c *federatedResourceQuotas) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Get().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(name).
VersionedParams(&options, scheme.ParameterCodec).
Do(ctx).
Into(result)
return
}
// List takes label and field selectors, and returns the list of FederatedResourceQuotas that match those selectors.
func (c *federatedResourceQuotas) List(ctx context.Context, opts v1.ListOptions) (result *v1beta1.FederatedResourceQuotaList, err error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
result = &v1beta1.FederatedResourceQuotaList{}
err = c.client.Get().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Do(ctx).
Into(result)
return
}
// Watch returns a watch.Interface that watches the requested federatedResourceQuotas.
func (c *federatedResourceQuotas) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
var timeout time.Duration
if opts.TimeoutSeconds != nil {
timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
}
opts.Watch = true
return c.client.Get().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&opts, scheme.ParameterCodec).
Timeout(timeout).
Watch(ctx)
}
// Create takes the representation of a federatedResourceQuota and creates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *federatedResourceQuotas) Create(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.CreateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Post().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&opts, scheme.ParameterCodec).
Body(federatedResourceQuota).
Do(ctx).
Into(result)
return
}
// Update takes the representation of a federatedResourceQuota and updates it. Returns the server's representation of the federatedResourceQuota, and an error, if there is any.
func (c *federatedResourceQuotas) Update(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Put().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(federatedResourceQuota.Name).
VersionedParams(&opts, scheme.ParameterCodec).
Body(federatedResourceQuota).
Do(ctx).
Into(result)
return
}
// UpdateStatus was generated because the type contains a Status member.
// Add a +genclient:noStatus comment above the type to avoid generating UpdateStatus().
func (c *federatedResourceQuotas) UpdateStatus(ctx context.Context, federatedResourceQuota *v1beta1.FederatedResourceQuota, opts v1.UpdateOptions) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Put().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(federatedResourceQuota.Name).
SubResource("status").
VersionedParams(&opts, scheme.ParameterCodec).
Body(federatedResourceQuota).
Do(ctx).
Into(result)
return
}
// Delete takes name of the federatedResourceQuota and deletes it. Returns an error if one occurs.
func (c *federatedResourceQuotas) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
return c.client.Delete().
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(name).
Body(&opts).
Do(ctx).
Error()
}
// DeleteCollection deletes a collection of objects.
func (c *federatedResourceQuotas) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
var timeout time.Duration
if listOpts.TimeoutSeconds != nil {
timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second
}
return c.client.Delete().
Namespace(c.ns).
Resource("federatedresourcequotas").
VersionedParams(&listOpts, scheme.ParameterCodec).
Timeout(timeout).
Body(&opts).
Do(ctx).
Error()
}
// Patch applies the patch and returns the patched federatedResourceQuota.
func (c *federatedResourceQuotas) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1beta1.FederatedResourceQuota, err error) {
result = &v1beta1.FederatedResourceQuota{}
err = c.client.Patch(pt).
Namespace(c.ns).
Resource("federatedresourcequotas").
Name(name).
SubResource(subresources...).
VersionedParams(&opts, scheme.ParameterCodec).
Body(data).
Do(ctx).
Into(result)
return
}

@@ -42,8 +42,6 @@ type FederatedNamespaceExpansion interface{}
type FederatedPersistentVolumeClaimExpansion interface{}
type FederatedResourceQuotaExpansion interface{}
type FederatedSecretExpansion interface{}
type FederatedServiceExpansion interface{}

@@ -38,7 +38,6 @@ type TypesV1beta1Interface interface {
FederatedLimitRangesGetter
FederatedNamespacesGetter
FederatedPersistentVolumeClaimsGetter
FederatedResourceQuotasGetter
FederatedSecretsGetter
FederatedServicesGetter
FederatedStatefulSetsGetter
@@ -97,10 +96,6 @@ func (c *TypesV1beta1Client) FederatedPersistentVolumeClaims(namespace string) F
return newFederatedPersistentVolumeClaims(c, namespace)
}
func (c *TypesV1beta1Client) FederatedResourceQuotas(namespace string) FederatedResourceQuotaInterface {
return newFederatedResourceQuotas(c, namespace)
}
func (c *TypesV1beta1Client) FederatedSecrets(namespace string) FederatedSecretInterface {
return newFederatedSecrets(c, namespace)
}

@@ -188,8 +188,6 @@ func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedNamespaces().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedpersistentvolumeclaims"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedPersistentVolumeClaims().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedresourcequotas"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedResourceQuotas().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedsecrets"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Types().V1beta1().FederatedSecrets().Informer()}, nil
case v1beta1.SchemeGroupVersion.WithResource("federatedservices"):

@@ -1,90 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by informer-gen. DO NOT EDIT.
package v1beta1
import (
"context"
time "time"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
watch "k8s.io/apimachinery/pkg/watch"
cache "k8s.io/client-go/tools/cache"
typesv1beta1 "kubesphere.io/api/types/v1beta1"
versioned "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
internalinterfaces "kubesphere.io/kubesphere/pkg/client/informers/externalversions/internalinterfaces"
v1beta1 "kubesphere.io/kubesphere/pkg/client/listers/types/v1beta1"
)
// FederatedResourceQuotaInformer provides access to a shared informer and lister for
// FederatedResourceQuotas.
type FederatedResourceQuotaInformer interface {
Informer() cache.SharedIndexInformer
Lister() v1beta1.FederatedResourceQuotaLister
}
type federatedResourceQuotaInformer struct {
factory internalinterfaces.SharedInformerFactory
tweakListOptions internalinterfaces.TweakListOptionsFunc
namespace string
}
// NewFederatedResourceQuotaInformer constructs a new informer for FederatedResourceQuota type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewFederatedResourceQuotaInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer {
return NewFilteredFederatedResourceQuotaInformer(client, namespace, resyncPeriod, indexers, nil)
}
// NewFilteredFederatedResourceQuotaInformer constructs a new informer for FederatedResourceQuota type.
// Always prefer using an informer factory to get a shared informer instead of getting an independent
// one. This reduces memory footprint and number of connections to the server.
func NewFilteredFederatedResourceQuotaInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
return cache.NewSharedIndexInformer(
&cache.ListWatch{
ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.TypesV1beta1().FederatedResourceQuotas(namespace).List(context.TODO(), options)
},
WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
if tweakListOptions != nil {
tweakListOptions(&options)
}
return client.TypesV1beta1().FederatedResourceQuotas(namespace).Watch(context.TODO(), options)
},
},
&typesv1beta1.FederatedResourceQuota{},
resyncPeriod,
indexers,
)
}
func (f *federatedResourceQuotaInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
return NewFilteredFederatedResourceQuotaInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions)
}
func (f *federatedResourceQuotaInformer) Informer() cache.SharedIndexInformer {
return f.factory.InformerFor(&typesv1beta1.FederatedResourceQuota{}, f.defaultInformer)
}
func (f *federatedResourceQuotaInformer) Lister() v1beta1.FederatedResourceQuotaLister {
return v1beta1.NewFederatedResourceQuotaLister(f.Informer().GetIndexer())
}

@@ -48,8 +48,6 @@ type Interface interface {
FederatedNamespaces() FederatedNamespaceInformer
// FederatedPersistentVolumeClaims returns a FederatedPersistentVolumeClaimInformer.
FederatedPersistentVolumeClaims() FederatedPersistentVolumeClaimInformer
// FederatedResourceQuotas returns a FederatedResourceQuotaInformer.
FederatedResourceQuotas() FederatedResourceQuotaInformer
// FederatedSecrets returns a FederatedSecretInformer.
FederatedSecrets() FederatedSecretInformer
// FederatedServices returns a FederatedServiceInformer.
@@ -129,11 +127,6 @@ func (v *version) FederatedPersistentVolumeClaims() FederatedPersistentVolumeCla
return &federatedPersistentVolumeClaimInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
}
// FederatedResourceQuotas returns a FederatedResourceQuotaInformer.
func (v *version) FederatedResourceQuotas() FederatedResourceQuotaInformer {
return &federatedResourceQuotaInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
}
// FederatedSecrets returns a FederatedSecretInformer.
func (v *version) FederatedSecrets() FederatedSecretInformer {
return &federatedSecretInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}

@@ -106,14 +106,6 @@ type FederatedPersistentVolumeClaimListerExpansion interface{}
// FederatedPersistentVolumeClaimNamespaceLister.
type FederatedPersistentVolumeClaimNamespaceListerExpansion interface{}
// FederatedResourceQuotaListerExpansion allows custom methods to be added to
// FederatedResourceQuotaLister.
type FederatedResourceQuotaListerExpansion interface{}
// FederatedResourceQuotaNamespaceListerExpansion allows custom methods to be added to
// FederatedResourceQuotaNamespaceLister.
type FederatedResourceQuotaNamespaceListerExpansion interface{}
// FederatedSecretListerExpansion allows custom methods to be added to
// FederatedSecretLister.
type FederatedSecretListerExpansion interface{}

@@ -1,99 +0,0 @@
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1beta1
import (
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/tools/cache"
v1beta1 "kubesphere.io/api/types/v1beta1"
)
// FederatedResourceQuotaLister helps list FederatedResourceQuotas.
// All objects returned here must be treated as read-only.
type FederatedResourceQuotaLister interface {
// List lists all FederatedResourceQuotas in the indexer.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error)
// FederatedResourceQuotas returns an object that can list and get FederatedResourceQuotas.
FederatedResourceQuotas(namespace string) FederatedResourceQuotaNamespaceLister
FederatedResourceQuotaListerExpansion
}
// federatedResourceQuotaLister implements the FederatedResourceQuotaLister interface.
type federatedResourceQuotaLister struct {
indexer cache.Indexer
}
// NewFederatedResourceQuotaLister returns a new FederatedResourceQuotaLister.
func NewFederatedResourceQuotaLister(indexer cache.Indexer) FederatedResourceQuotaLister {
return &federatedResourceQuotaLister{indexer: indexer}
}
// List lists all FederatedResourceQuotas in the indexer.
func (s *federatedResourceQuotaLister) List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error) {
err = cache.ListAll(s.indexer, selector, func(m interface{}) {
ret = append(ret, m.(*v1beta1.FederatedResourceQuota))
})
return ret, err
}
// FederatedResourceQuotas returns an object that can list and get FederatedResourceQuotas.
func (s *federatedResourceQuotaLister) FederatedResourceQuotas(namespace string) FederatedResourceQuotaNamespaceLister {
return federatedResourceQuotaNamespaceLister{indexer: s.indexer, namespace: namespace}
}
// FederatedResourceQuotaNamespaceLister helps list and get FederatedResourceQuotas.
// All objects returned here must be treated as read-only.
type FederatedResourceQuotaNamespaceLister interface {
// List lists all FederatedResourceQuotas in the indexer for a given namespace.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error)
// Get retrieves the FederatedResourceQuota from the indexer for a given namespace and name.
// Objects returned here must be treated as read-only.
Get(name string) (*v1beta1.FederatedResourceQuota, error)
FederatedResourceQuotaNamespaceListerExpansion
}
// federatedResourceQuotaNamespaceLister implements the FederatedResourceQuotaNamespaceLister
// interface.
type federatedResourceQuotaNamespaceLister struct {
indexer cache.Indexer
namespace string
}
// List lists all FederatedResourceQuotas in the indexer for a given namespace.
func (s federatedResourceQuotaNamespaceLister) List(selector labels.Selector) (ret []*v1beta1.FederatedResourceQuota, err error) {
err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) {
ret = append(ret, m.(*v1beta1.FederatedResourceQuota))
})
return ret, err
}
// Get retrieves the FederatedResourceQuota from the indexer for a given namespace and name.
func (s federatedResourceQuotaNamespaceLister) Get(name string) (*v1beta1.FederatedResourceQuota, error) {
obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name)
if err != nil {
return nil, err
}
if !exists {
return nil, errors.NewNotFound(v1beta1.Resource("federatedresourcequota"), name)
}
return obj.(*v1beta1.FederatedResourceQuota), nil
}

@@ -196,13 +196,13 @@ func newDeployments(deploymentName, namespace string, labels map[string]string,
return deployment
}
func newService(serviceName, namesapce string, labels map[string]string) *corev1.Service {
func newService(serviceName, namespace string, labels map[string]string) *corev1.Service {
labels["app"] = serviceName
svc := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: serviceName,
Namespace: namesapce,
Namespace: namespace,
Labels: labels,
Annotations: map[string]string{
"servicemesh.kubesphere.io/enabled": "true",

@@ -25,6 +25,7 @@ import (
"fmt"
"net/http"
"reflect"
"strings"
"time"
"gopkg.in/yaml.v2"
@@ -48,11 +49,13 @@ import (
fedv1b1 "sigs.k8s.io/kubefed/pkg/apis/core/v1beta1"
clusterv1alpha1 "kubesphere.io/api/cluster/v1alpha1"
iamv1alpha2 "kubesphere.io/api/iam/v1alpha2"
"kubesphere.io/kubesphere/pkg/apiserver/config"
clusterclient "kubesphere.io/kubesphere/pkg/client/clientset/versioned/typed/cluster/v1alpha1"
kubesphere "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
clusterinformer "kubesphere.io/kubesphere/pkg/client/informers/externalversions/cluster/v1alpha1"
clusterlister "kubesphere.io/kubesphere/pkg/client/listers/cluster/v1alpha1"
iamv1alpha2listers "kubesphere.io/kubesphere/pkg/client/listers/iam/v1alpha2"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/simple/client/multicluster"
"kubesphere.io/kubesphere/pkg/utils/k8sutil"
@@ -126,12 +129,13 @@ type clusterController struct {
eventRecorder record.EventRecorder
// build this only for host cluster
client kubernetes.Interface
k8sClient kubernetes.Interface
hostConfig *rest.Config
clusterClient clusterclient.ClusterInterface
ksClient kubesphere.Interface
clusterLister clusterlister.ClusterLister
userLister iamv1alpha2listers.UserLister
clusterHasSynced cache.InformerSynced
queue workqueue.RateLimitingInterface
@@ -144,10 +148,11 @@ type clusterController struct {
}
func NewClusterController(
client kubernetes.Interface,
k8sClient kubernetes.Interface,
ksClient kubesphere.Interface,
config *rest.Config,
clusterInformer clusterinformer.ClusterInformer,
clusterClient clusterclient.ClusterInterface,
userLister iamv1alpha2listers.UserLister,
resyncPeriod time.Duration,
hostClusterName string,
) *clusterController {
@@ -156,19 +161,20 @@ func NewClusterController(
broadcaster.StartLogging(func(format string, args ...interface{}) {
klog.Info(fmt.Sprintf(format, args))
})
broadcaster.StartRecordingToSink(&corev1.EventSinkImpl{Interface: client.CoreV1().Events("")})
broadcaster.StartRecordingToSink(&corev1.EventSinkImpl{Interface: k8sClient.CoreV1().Events("")})
recorder := broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "cluster-controller"})
c := &clusterController{
eventBroadcaster: broadcaster,
eventRecorder: recorder,
client: client,
k8sClient: k8sClient,
ksClient: ksClient,
hostConfig: config,
clusterClient: clusterClient,
queue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "cluster"),
workerLoopPeriod: time.Second,
resyncPeriod: resyncPeriod,
hostClusterName: hostClusterName,
userLister: userLister,
}
c.clusterLister = clusterInformer.Lister()
c.clusterHasSynced = clusterInformer.Informer().HasSynced
@@ -178,7 +184,7 @@ func NewClusterController(
UpdateFunc: func(oldObj, newObj interface{}) {
oldCluster := oldObj.(*clusterv1alpha1.Cluster)
newCluster := newObj.(*clusterv1alpha1.Cluster)
if !reflect.DeepEqual(oldCluster.Spec, newCluster.Spec) {
if !reflect.DeepEqual(oldCluster.Spec, newCluster.Spec) || newCluster.DeletionTimestamp != nil {
c.enqueueCluster(newObj)
}
},
@@ -213,7 +219,7 @@ func (c *clusterController) Run(workers int, stopCh <-chan struct{}) error {
klog.Errorf("Error create host cluster, error %v", err)
}
if err := c.probeClusters(); err != nil {
if err := c.resyncClusters(); err != nil {
klog.Errorf("failed to reconcile cluster ready status, err: %v", err)
}
}, c.resyncPeriod, stopCh)
@@ -256,7 +262,7 @@ func (c *clusterController) reconcileHostCluster() error {
if len(clusters) == 0 {
hostCluster.Spec.Connection.KubeConfig = hostKubeConfig
hostCluster.Name = c.hostClusterName
_, err = c.clusterClient.Create(context.TODO(), hostCluster, metav1.CreateOptions{})
_, err = c.ksClient.ClusterV1alpha1().Clusters().Create(context.TODO(), hostCluster, metav1.CreateOptions{})
return err
} else if len(clusters) > 1 {
return fmt.Errorf("there MUST not be more than one host clusters, while there are %d", len(clusters))
@@ -280,19 +286,21 @@ func (c *clusterController) reconcileHostCluster() error {
}
// update host cluster config
_, err = c.clusterClient.Update(context.TODO(), cluster, metav1.UpdateOptions{})
_, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
return err
}
func (c *clusterController) probeClusters() error {
func (c *clusterController) resyncClusters() error {
clusters, err := c.clusterLister.List(labels.Everything())
if err != nil {
return err
}
for _, cluster := range clusters {
c.syncCluster(cluster.Name)
key, _ := cache.MetaNamespaceKeyFunc(cluster)
c.queue.Add(key)
}
return nil
}
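The switch from calling `syncCluster` inline to `queue.Add(key)` routes periodic resyncs through the same rate-limited workqueue as watch events. `cache.MetaNamespaceKeyFunc` produces `"namespace/name"`, or just the name for cluster-scoped objects such as `Cluster`; a stdlib sketch of that behaviour (the `meta` type is a stand-in for object metadata):

```go
package main

import "fmt"

// meta is a stand-in for an object's metadata (hypothetical type).
type meta struct{ Namespace, Name string }

// key mimics cache.MetaNamespaceKeyFunc: "namespace/name" for namespaced
// objects, or just "name" for cluster-scoped ones such as Clusters.
func key(m meta) string {
	if m.Namespace == "" {
		return m.Name
	}
	return m.Namespace + "/" + m.Name
}

func main() {
	queue := []string{}
	for _, c := range []meta{{Name: "host"}, {Name: "member-1"}} {
		queue = append(queue, key(c)) // enqueue instead of syncing inline
	}
	fmt.Println(queue) // [host member-1]
}
```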
@@ -328,7 +336,7 @@ func (c *clusterController) syncCluster(key string) error {
// registering our finalizer.
if !sets.NewString(cluster.ObjectMeta.Finalizers...).Has(clusterv1alpha1.Finalizer) {
cluster.ObjectMeta.Finalizers = append(cluster.ObjectMeta.Finalizers, clusterv1alpha1.Finalizer)
if cluster, err = c.clusterClient.Update(context.TODO(), cluster, metav1.UpdateOptions{}); err != nil {
if cluster, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{}); err != nil {
return err
}
}
@@ -338,17 +346,21 @@ func (c *clusterController) syncCluster(key string) error {
// need to unjoin the federation first, because some of the
// cleanup work in the member cluster depends on the agent
// to proxy traffic
err = c.unJoinFederation(nil, name)
if err != nil {
if err = c.unJoinFederation(nil, name); err != nil {
klog.Errorf("Failed to unjoin federation for cluster %s, error %v", name, err)
return err
}
// cleanup after cluster has been deleted
if err := c.syncClusterMembers(nil, cluster); err != nil {
klog.Errorf("Failed to sync cluster members for %s: %v", name, err)
return err
}
// remove our cluster finalizer
finalizers := sets.NewString(cluster.ObjectMeta.Finalizers...)
finalizers.Delete(clusterv1alpha1.Finalizer)
cluster.ObjectMeta.Finalizers = finalizers.List()
if _, err = c.clusterClient.Update(context.TODO(), cluster, metav1.UpdateOptions{}); err != nil {
if _, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{}); err != nil {
return err
}
}
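The deletion branch above follows the standard finalizer pattern: perform cleanup while the finalizer blocks deletion, then remove it so the API server can actually delete the object. A dependency-free sketch of the set-based removal, with plain slices standing in for `sets.String` and an assumed finalizer value:

```go
package main

import (
	"fmt"
	"sort"
)

// finalizer is an assumed value for illustration, not the real constant.
const finalizer = "cluster.example.io/finalizer"

func has(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

// remove returns the finalizer list without s, sorted the way
// sets.String.List() would return it.
func remove(list []string, s string) []string {
	out := []string{}
	for _, v := range list {
		if v != s {
			out = append(out, v)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	finalizers := []string{"other", finalizer}
	if has(finalizers, finalizer) {
		finalizers = remove(finalizers, finalizer) // cleanup done, allow deletion
	}
	fmt.Println(finalizers) // [other]
}
```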
@@ -406,8 +418,17 @@ func (c *clusterController) syncCluster(key string) error {
Message: "Cluster can not join federation control plane",
}
c.updateClusterCondition(cluster, federationNotReadyCondition)
notReadyCondition := clusterv1alpha1.ClusterCondition{
Type: clusterv1alpha1.ClusterReady,
Status: v1.ConditionFalse,
LastUpdateTime: metav1.Now(),
LastTransitionTime: metav1.Now(),
Reason: "Cluster join federation control plane failed",
Message: "Cluster is Not Ready now",
}
c.updateClusterCondition(cluster, notReadyCondition)
_, err = c.clusterClient.Update(context.TODO(), cluster, metav1.UpdateOptions{})
_, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
if err != nil {
klog.Errorf("Failed to update cluster status, %#v", err)
}
@@ -496,8 +517,8 @@ func (c *clusterController) syncCluster(key string) error {
return err
}
if !reflect.DeepEqual(oldCluster, cluster) {
_, err = c.clusterClient.Update(context.TODO(), cluster, metav1.UpdateOptions{})
if !reflect.DeepEqual(oldCluster.Status, cluster.Status) {
_, err = c.ksClient.ClusterV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
if err != nil {
klog.Errorf("Failed to update cluster status, %#v", err)
return err
@@ -508,6 +529,10 @@ func (c *clusterController) syncCluster(key string) error {
return err
}
if err = c.syncClusterMembers(clusterClient, cluster); err != nil {
return fmt.Errorf("failed to sync cluster membership for %s: %s", cluster.Name, err)
}
return nil
}
@@ -541,7 +566,7 @@ func (c *clusterController) setClusterNameInConfigMap(client kubernetes.Interfac
}
func (c *clusterController) checkIfClusterIsHostCluster(memberClusterNodes *v1.NodeList) bool {
hostNodes, err := c.client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
hostNodes, err := c.k8sClient.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
return false
}
@@ -786,3 +811,55 @@ func (c *clusterController) updateKubeConfigExpirationDateCondition(cluster *clu
})
return nil
}
// syncClusterMembers periodically syncs the granted clusters for users
func (c *clusterController) syncClusterMembers(clusterClient *kubernetes.Clientset, cluster *clusterv1alpha1.Cluster) error {
users, err := c.userLister.List(labels.Everything())
if err != nil {
return fmt.Errorf("failed to list users: %s", err)
}
grantedUsers := sets.NewString()
clusterName := cluster.Name
if cluster.DeletionTimestamp.IsZero() {
list, err := clusterClient.RbacV1().ClusterRoleBindings().List(context.Background(),
metav1.ListOptions{LabelSelector: iamv1alpha2.UserReferenceLabel})
if err != nil {
return fmt.Errorf("failed to list clusterrolebindings: %s", err)
}
for _, clusterRoleBinding := range list.Items {
for _, sub := range clusterRoleBinding.Subjects {
if sub.Kind == iamv1alpha2.ResourceKindUser {
grantedUsers.Insert(sub.Name)
}
}
}
}
for _, user := range users {
user = user.DeepCopy()
grantedClustersAnnotation := user.Annotations[iamv1alpha2.GrantedClustersAnnotation]
var grantedClusters sets.String
if len(grantedClustersAnnotation) > 0 {
grantedClusters = sets.NewString(strings.Split(grantedClustersAnnotation, ",")...)
} else {
grantedClusters = sets.NewString()
}
if grantedUsers.Has(user.Name) && !grantedClusters.Has(clusterName) {
grantedClusters.Insert(clusterName)
} else if !grantedUsers.Has(user.Name) && grantedClusters.Has(clusterName) {
grantedClusters.Delete(clusterName)
}
grantedClustersAnnotation = strings.Join(grantedClusters.List(), ",")
if user.Annotations[iamv1alpha2.GrantedClustersAnnotation] != grantedClustersAnnotation {
if user.Annotations == nil {
user.Annotations = make(map[string]string, 0)
}
user.Annotations[iamv1alpha2.GrantedClustersAnnotation] = grantedClustersAnnotation
if _, err := c.ksClient.IamV1alpha2().Users().Update(context.Background(), user, metav1.UpdateOptions{}); err != nil {
return fmt.Errorf("failed to update user %s: %s", user.Name, err)
}
}
}
return nil
}
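`syncClusterMembers` stores each user's granted clusters as a comma-separated annotation value and reconciles it against the ClusterRoleBinding subjects found in the member cluster. The core set arithmetic can be sketched with the standard library alone (a map stands in for `sets.String`):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// reconcile updates a comma-separated cluster list stored in an annotation:
// insert clusterName when the user is granted access, delete it when not.
func reconcile(annotation, clusterName string, granted bool) string {
	set := map[string]struct{}{}
	if annotation != "" {
		for _, c := range strings.Split(annotation, ",") {
			set[c] = struct{}{}
		}
	}
	if granted {
		set[clusterName] = struct{}{}
	} else {
		delete(set, clusterName)
	}
	out := make([]string, 0, len(set))
	for c := range set {
		out = append(out, c)
	}
	sort.Strings(out) // stable order, like sets.String.List()
	return strings.Join(out, ",")
}

func main() {
	fmt.Println(reconcile("host", "member-1", true))            // host,member-1
	fmt.Println(reconcile("host,member-1", "member-1", false)) // host
}
```

Sorting before joining keeps the annotation value deterministic, so the equality check before the user update avoids no-op writes.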

View File

@@ -18,6 +18,7 @@ package cluster
import (
"context"
"fmt"
"reflect"
"time"
@@ -268,7 +269,7 @@ func createAuthorizedServiceAccount(joiningClusterClientset kubeclient.Interface
klog.V(2).Infof("Creating service account in joining cluster: %s", joiningClusterName)
saName, err := createServiceAccount(joiningClusterClientset, namespace,
saName, err := createServiceAccountWithSecret(joiningClusterClientset, namespace,
joiningClusterName, hostClusterName, dryRun, errorOnExisting)
if err != nil {
klog.V(2).Infof("Error creating service account: %s in joining cluster: %s due to: %v",
@@ -320,31 +321,75 @@ func createAuthorizedServiceAccount(joiningClusterClientset kubeclient.Interface
return saName, nil
}
// createServiceAccount creates a service account in the cluster associated
// createServiceAccountWithSecret creates a service account and secret in the cluster associated
// with clusterClientset with credentials that will be used by the host cluster
// to access its API server.
func createServiceAccount(clusterClientset kubeclient.Interface, namespace,
func createServiceAccountWithSecret(clusterClientset kubeclient.Interface, namespace,
joiningClusterName, hostClusterName string, dryRun, errorOnExisting bool) (string, error) {
saName := util.ClusterServiceAccountName(joiningClusterName, hostClusterName)
sa := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: saName,
Namespace: namespace,
},
}
if dryRun {
return saName, nil
}
// Create a new service account.
_, err := clusterClientset.CoreV1().ServiceAccounts(namespace).Create(context.Background(), sa, metav1.CreateOptions{})
switch {
case apierrors.IsAlreadyExists(err) && errorOnExisting:
klog.V(2).Infof("Service account %s/%s already exists in target cluster %s", namespace, saName, joiningClusterName)
ctx := context.Background()
sa, err := clusterClientset.CoreV1().ServiceAccounts(namespace).Get(ctx, saName, metav1.GetOptions{})
if err != nil {
if apierrors.IsNotFound(err) {
sa = &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: saName,
Namespace: namespace,
},
}
// We must create the service account first, then create the associated
// secret, and update the service account last. Otherwise the
// kube-controller-manager will delete the secret.
sa, err = clusterClientset.CoreV1().ServiceAccounts(namespace).Create(ctx, sa, metav1.CreateOptions{})
switch {
case apierrors.IsAlreadyExists(err) && errorOnExisting:
klog.V(2).Infof("Service account %s/%s already exists in target cluster %s", namespace, saName, joiningClusterName)
return "", err
case err != nil && !apierrors.IsAlreadyExists(err):
klog.V(2).Infof("Could not create service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
return "", err
}
} else {
return "", err
}
}
if len(sa.Secrets) > 0 {
return saName, nil
}
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
GenerateName: fmt.Sprintf("%s-token-", saName),
Namespace: namespace,
Annotations: map[string]string{
corev1.ServiceAccountNameKey: saName,
},
},
Type: corev1.SecretTypeServiceAccountToken,
}
// Since Kubernetes v1.24, kube-controller-manager no longer creates the
// default secret for a service account. http://kep.k8s.io/2800
// Create a default secret.
secret, err = clusterClientset.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
klog.V(2).Infof("Could not create secret for service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
return "", err
case err != nil && !apierrors.IsAlreadyExists(err):
klog.V(2).Infof("Could not create service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
}
// At last, update the service account.
sa.Secrets = append(sa.Secrets, corev1.ObjectReference{Name: secret.Name})
_, err = clusterClientset.CoreV1().ServiceAccounts(namespace).Update(ctx, sa, metav1.UpdateOptions{})
switch {
case err != nil:
klog.Infof("Could not update service account %s/%s in target cluster %s due to: %v", namespace, saName, joiningClusterName, err)
return "", err
default:
return saName, nil
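The hunk above creates the token secret explicitly because, since Kubernetes v1.24 (KEP-2800), kube-controller-manager no longer auto-creates a token secret per service account. A sketch of the secret being built, with local stand-in types instead of the real `corev1` structs:

```go
package main

import "fmt"

// secret is a minimal stand-in for corev1.Secret (illustration only).
type secret struct {
	GenerateName string
	Namespace    string
	Annotations  map[string]string
	Type         string
}

// tokenSecretFor builds the service-account token Secret that must be
// created by hand on Kubernetes >= 1.24: the annotation links it to the
// service account, and the type asks the control plane to fill in a token.
func tokenSecretFor(saName, namespace string) secret {
	return secret{
		GenerateName: saName + "-token-",
		Namespace:    namespace,
		Annotations: map[string]string{
			// corev1.ServiceAccountNameKey
			"kubernetes.io/service-account.name": saName,
		},
		Type: "kubernetes.io/service-account-token",
	}
}

func main() {
	s := tokenSecretFor("member-cluster", "kube-federation-system")
	fmt.Println(s.GenerateName, s.Type)
}
```

The create-SA, create-secret, update-SA ordering in the diff matters: listing the secret in `sa.Secrets` before the secret exists would let the token controller garbage-collect it.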

View File

@@ -73,7 +73,7 @@ func (r *Reconciler) SetupWithManager(mgr ctrl.Manager) error {
if err := r.SetupWithManager(mgr); err != nil {
return err
}
klog.Info("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod)
klog.Infoln("configured watch", "gvk", w.GroupVersionKind, "chartPath", w.ChartPath, "maxConcurrentReconciles", maxConcurrentReconciles, "reconcilePeriod", reconcilePeriod)
}
return nil
}

View File

@@ -1,3 +1,17 @@
// Copyright 2022 The KubeSphere Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
package webhooks
import (

View File

@@ -17,7 +17,6 @@ limitations under the License.
package v1alpha1
import (
"context"
"fmt"
"time"
@@ -177,7 +176,7 @@ func (h *handler) PodLog(request *restful.Request, response *restful.Response) {
}
fw := flushwriter.Wrap(response.ResponseWriter)
err := h.gw.GetPodLogs(context.TODO(), podNamespace, podID, logOptions, fw)
err := h.gw.GetPodLogs(request.Request.Context(), podNamespace, podID, logOptions, fw)
if err != nil {
api.HandleError(response, request, err)
return
@@ -196,7 +195,7 @@ func (h *handler) PodLogSearch(request *restful.Request, response *restful.Respo
api.HandleError(response, request, err)
return
}
// ES log will be filted by pods and namespace by default.
// ES log will be filtered by pods and namespace by default.
pods, err := h.gw.GetPods(ns, &query.Query{})
if err != nil {
api.HandleError(response, request, err)

View File

@@ -380,6 +380,7 @@ func (h *iamHandler) ListWorkspaceRoles(request *restful.Request, response *rest
queryParam.Filters[iamv1alpha2.ScopeWorkspace] = query.Value(workspace)
// shared workspace role template
if string(queryParam.Filters[query.FieldLabel]) == fmt.Sprintf("%s=%s", iamv1alpha2.RoleTemplateLabel, "true") ||
strings.Contains(queryParam.LabelSelector, iamv1alpha2.RoleTemplateLabel) ||
queryParam.Filters[iamv1alpha2.AggregateTo] != "" {
delete(queryParam.Filters, iamv1alpha2.ScopeWorkspace)
}

View File

@@ -21,12 +21,10 @@ package v1alpha1
import (
"github.com/emicklei/go-restful"
"k8s.io/client-go/kubernetes"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"kubesphere.io/kubesphere/pkg/informers"
monitorhle "kubesphere.io/kubesphere/pkg/kapis/monitoring/v1alpha3"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
@@ -47,6 +45,6 @@ type meterHandler interface {
HandlePVCMeterQuery(req *restful.Request, resp *restful.Response)
}
func newHandler(k kubernetes.Interface, m monitoring.Interface, f informers.InformerFactory, ksClient versioned.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) meterHandler {
return monitorhle.NewHandler(k, m, nil, f, ksClient, resourceGetter, meteringOptions, opOptions, rtClient, stopCh)
func newHandler(k kubernetes.Interface, m monitoring.Interface, f informers.InformerFactory, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opClient openpitrix.Interface, rtClient runtimeclient.Client) meterHandler {
return monitorhle.NewHandler(k, m, nil, f, resourceGetter, meteringOptions, opClient, rtClient)
}

View File

@@ -20,9 +20,7 @@ package v1alpha1
import (
"net/http"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"github.com/emicklei/go-restful"
restfulspec "github.com/emicklei/go-restful-openapi"
@@ -49,10 +47,10 @@ const (
var GroupVersion = schema.GroupVersion{Group: groupName, Version: "v1alpha1"}
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, meteringClient monitoring.Interface, factory informers.InformerFactory, ksClient versioned.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) error {
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, meteringClient monitoring.Interface, factory informers.InformerFactory, cache cache.Cache, meteringOptions *meteringclient.Options, opClient openpitrix.Interface, rtClient runtimeclient.Client) error {
ws := runtime.NewWebService(GroupVersion)
h := newHandler(k8sClient, meteringClient, factory, ksClient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opOptions, rtClient, stopCh)
h := newHandler(k8sClient, meteringClient, factory, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient, rtClient)
ws.Route(ws.GET("/cluster").
To(h.HandleClusterMeterQuery).

View File

@@ -27,14 +27,8 @@ import (
"regexp"
"strings"
"k8s.io/klog"
converter "kubesphere.io/monitoring-dashboard/tools/converter"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
"kubesphere.io/kubesphere/pkg/simple/client/s3"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
"github.com/emicklei/go-restful"
@@ -60,27 +54,16 @@ type handler struct {
rtClient runtimeclient.Client
}
func NewHandler(k kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, f informers.InformerFactory, ksClient versioned.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) *handler {
var opRelease openpitrix.Interface
var s3Client s3.Interface
if opOptions != nil && opOptions.S3Options != nil && len(opOptions.S3Options.Endpoint) != 0 {
var err error
s3Client, err = s3.NewS3Client(opOptions.S3Options)
if err != nil {
klog.Errorf("failed to connect to storage, please check storage service status, error: %v", err)
}
}
if ksClient != nil {
opRelease = openpitrix.NewOpenpitrixOperator(f, ksClient, s3Client, stopCh)
}
func NewHandler(k kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, f informers.InformerFactory, resourceGetter *resourcev1alpha3.ResourceGetter, meteringOptions *meteringclient.Options, opClient openpitrix.Interface, rtClient runtimeclient.Client) *handler {
if meteringOptions == nil || meteringOptions.RetentionDay == "" {
meteringOptions = &meteringclient.DefaultMeteringOption
}
return &handler{
k: k,
mo: model.NewMonitoringOperator(monitoringClient, metricsClient, k, f, resourceGetter, opRelease),
opRelease: opRelease,
mo: model.NewMonitoringOperator(monitoringClient, metricsClient, k, f, resourceGetter, opClient),
opRelease: opClient,
meteringOptions: meteringOptions,
rtClient: rtClient,
}

View File

@@ -373,7 +373,7 @@ func TestParseRequestParams(t *testing.T) {
fakeInformerFactory.KubeSphereSharedInformerFactory()
handler := NewHandler(client, nil, nil, fakeInformerFactory, ksClient, nil, nil, nil, nil, nil)
handler := NewHandler(client, nil, nil, fakeInformerFactory, nil, nil, nil, nil)
result, err := handler.makeQueryOptions(tt.params, tt.lvl)
if err != nil {

View File

@@ -20,12 +20,10 @@ package v1alpha3
import (
"net/http"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
monitoringdashboardv1alpha2 "kubesphere.io/monitoring-dashboard/api/v1alpha2"
openpitrixoptions "kubesphere.io/kubesphere/pkg/simple/client/openpitrix"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"github.com/emicklei/go-restful"
restfulspec "github.com/emicklei/go-restful-openapi"
"k8s.io/apimachinery/pkg/runtime/schema"
@@ -47,10 +45,10 @@ const (
var GroupVersion = schema.GroupVersion{Group: groupName, Version: "v1alpha3"}
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, factory informers.InformerFactory, ksClient versioned.Interface, opOptions *openpitrixoptions.Options, rtClient runtimeclient.Client, stopCh <-chan struct{}) error {
func AddToContainer(c *restful.Container, k8sClient kubernetes.Interface, monitoringClient monitoring.Interface, metricsClient monitoring.Interface, factory informers.InformerFactory, opClient openpitrix.Interface, rtClient runtimeclient.Client) error {
ws := runtime.NewWebService(GroupVersion)
h := NewHandler(k8sClient, monitoringClient, metricsClient, factory, ksClient, nil, nil, opOptions, rtClient, stopCh)
h := NewHandler(k8sClient, monitoringClient, metricsClient, factory, nil, nil, opClient, rtClient)
ws.Route(ws.GET("/kubesphere").
To(h.handleKubeSphereMetricsQuery).

View File

@@ -23,6 +23,8 @@ import (
"strings"
"time"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
restful "github.com/emicklei/go-restful"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
@@ -50,7 +52,7 @@ type openpitrixHandler struct {
openpitrix openpitrix.Interface
}
func newOpenpitrixHandler(ksInformers informers.InformerFactory, ksClient versioned.Interface, option *openpitrixoptions.Options, stopCh <-chan struct{}) *openpitrixHandler {
func NewOpenpitrixClient(ksInformers informers.InformerFactory, ksClient versioned.Interface, option *openpitrixoptions.Options, cc clusterclient.ClusterClients, stopCh <-chan struct{}) openpitrix.Interface {
var s3Client s3.Interface
if option != nil && option.S3Options != nil && len(option.S3Options.Endpoint) != 0 {
var err error
@@ -60,9 +62,7 @@ func newOpenpitrixHandler(ksInformers informers.InformerFactory, ksClient versio
}
}
return &openpitrixHandler{
openpitrix.NewOpenpitrixOperator(ksInformers, ksClient, s3Client, stopCh),
}
return openpitrix.NewOpenpitrixOperator(ksInformers, ksClient, s3Client, cc, stopCh)
}
func (h *openpitrixHandler) CreateRepo(req *restful.Request, resp *restful.Response) {
@@ -753,7 +753,7 @@ func (h *openpitrixHandler) ListApplications(req *restful.Request, resp *restful
return
}
resp.WriteAsJson(result)
resp.WriteEntity(result)
}
func (h *openpitrixHandler) UpgradeApplication(req *restful.Request, resp *restful.Response) {

View File

@@ -38,11 +38,13 @@ const (
var GroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1"}
func AddToContainer(c *restful.Container, ksInfomrers informers.InformerFactory, ksClient versioned.Interface, options *openpitrixoptions.Options, stopCh <-chan struct{}) error {
func AddToContainer(c *restful.Container, ksInfomrers informers.InformerFactory, ksClient versioned.Interface, options *openpitrixoptions.Options, opClient openpitrix.Interface) error {
mimePatch := []string{restful.MIME_JSON, runtime.MimeJsonPatchJson, runtime.MimeMergePatchJson}
webservice := runtime.NewWebService(GroupVersion)
handler := newOpenpitrixHandler(ksInfomrers, ksClient, options, stopCh)
handler := &openpitrixHandler{
opClient,
}
webservice.Route(webservice.POST("/repos").
To(handler.CreateRepo).

View File

@@ -17,7 +17,16 @@ limitations under the License.
package v1alpha3
import (
"fmt"
"io"
"net/http"
"net/http/httptest"
"reflect"
"testing"
"unsafe"
"github.com/emicklei/go-restful"
"k8s.io/klog"
"github.com/google/go-cmp/cmp"
fakesnapshot "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned/fake"
@@ -87,13 +96,11 @@ func TestResourceV1alpha2Fallback(t *testing.T) {
},
}
factory, err := prepare()
handler, err := prepare()
if err != nil {
t.Fatal(err)
}
handler := New(resourcev1alpha3.NewResourceGetter(factory, nil), resourcev1alpha2.NewResourceGetter(factory), components.NewComponentsGetter(factory.KubernetesSharedInformerFactory()))
for _, test := range tests {
got, err := listResources(test.namespace, test.resource, test.query, handler)
@@ -175,12 +182,31 @@ var (
ReadyReplicas: 0,
},
}
apiServerService = &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "ks-apiserver",
Namespace: "istio-system",
},
Spec: corev1.ServiceSpec{
Selector: map[string]string{"app": "ks-apiserver-app"},
},
}
ksControllerService = &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "ks-controller",
Namespace: "kubesphere-system",
},
Spec: corev1.ServiceSpec{
Selector: map[string]string{"app": "ks-controller-app"},
},
}
deployments = []interface{}{redisDeployment, nginxDeployment}
namespaces = []interface{}{defaultNamespace, kubesphereNamespace}
secrets = []interface{}{secretFoo1, secretFoo2}
services = []interface{}{apiServerService, ksControllerService}
)
func prepare() (informers.InformerFactory, error) {
func prepare() (*Handler, error) {
ksClient := fakeks.NewSimpleClientset()
k8sClient := fakek8s.NewSimpleClientset()
@@ -210,6 +236,91 @@ func prepare() (informers.InformerFactory, error) {
return nil, err
}
}
for _, service := range services {
err := k8sInformerFactory.Core().V1().Services().Informer().GetIndexer().Add(service)
if err != nil {
return nil, err
}
}
return fakeInformerFactory, nil
handler := New(resourcev1alpha3.NewResourceGetter(fakeInformerFactory, nil), resourcev1alpha2.NewResourceGetter(fakeInformerFactory), components.NewComponentsGetter(fakeInformerFactory.KubernetesSharedInformerFactory()))
return handler, nil
}
func TestHandleGetComponentStatus(t *testing.T) {
param := map[string]string{
"component": "ks-controller",
}
request, response, err := buildReqAndRes("GET", "/kapis/resources.kubesphere.io/v1alpha3/components/{component}", param, nil)
if err != nil {
t.Fatal("build res or req failed ")
}
handler, err := prepare()
if err != nil {
t.Fatal("init handler failed")
}
handler.handleGetComponentStatus(request, response)
if status := response.StatusCode(); status != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v",
status, http.StatusOK)
}
}
func TestHandleGetComponents(t *testing.T) {
request, response, err := buildReqAndRes("GET", "/kapis/resources.kubesphere.io/v1alpha3/components", nil, nil)
if err != nil {
t.Fatal("build res or req failed ")
}
handler, err := prepare()
if err != nil {
t.Fatal("init handler failed")
}
handler.handleGetComponents(request, response)
if status := response.StatusCode(); status != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v",
status, http.StatusOK)
}
}
// buildReqAndRes builds a *restful.Request and *restful.Response for tests.
func buildReqAndRes(method, target string, param map[string]string, body io.Reader) (*restful.Request, *restful.Response, error) {
//build req
request := httptest.NewRequest(method, target, body)
newRequest := restful.NewRequest(request)
if param != nil {
err := setUnExportedFields(newRequest, "pathParameters", param)
if err != nil {
klog.Error("set pathParameters failed ")
return nil, nil, err
}
}
//build res
response := httptest.NewRecorder()
newResponse := restful.NewResponse(response)
// set routeProduces to "application/json" so the response can serialize
err := setUnExportedFields(newResponse, "routeProduces", []string{"application/json"})
if err != nil {
klog.Error("set routeProduces failed ")
return nil, nil, err
}
return newRequest, newResponse, nil
}
// setUnExportedFields sets an unexported struct field using reflect and unsafe.
func setUnExportedFields(ptr interface{}, fieldName string, newFieldValue interface{}) (err error) {
v := reflect.ValueOf(ptr).Elem().FieldByName(fieldName)
v = reflect.NewAt(v.Type(), unsafe.Pointer(v.UnsafeAddr())).Elem()
nv := reflect.ValueOf(newFieldValue)
if v.Kind() != nv.Kind() {
return fmt.Errorf("kind error")
}
v.Set(nv)
return nil
}
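The test helper above relies on `reflect.NewAt` over an `unsafe.Pointer` to obtain a writable view of fields that `restful.Request` does not export. A self-contained illustration of the same trick, with a local stand-in `request` type:

```go
package main

import (
	"fmt"
	"reflect"
	"unsafe"
)

// request is a local stand-in with an unexported field, like
// restful.Request's pathParameters.
type request struct {
	pathParameters map[string]string
}

// setUnexported writes a value into an unexported struct field by
// re-deriving an addressable, writable reflect.Value via unsafe.Pointer.
func setUnexported(ptr interface{}, field string, value interface{}) error {
	v := reflect.ValueOf(ptr).Elem().FieldByName(field)
	if !v.IsValid() {
		return fmt.Errorf("no such field %q", field)
	}
	v = reflect.NewAt(v.Type(), unsafe.Pointer(v.UnsafeAddr())).Elem()
	nv := reflect.ValueOf(value)
	if v.Kind() != nv.Kind() {
		return fmt.Errorf("kind mismatch: %s vs %s", v.Kind(), nv.Kind())
	}
	v.Set(nv)
	return nil
}

func main() {
	r := &request{}
	if err := setUnexported(r, "pathParameters", map[string]string{"component": "ks-controller"}); err != nil {
		panic(err)
	}
	fmt.Println(r.pathParameters["component"]) // ks-controller
}
```

This pattern is test-only plumbing: it bypasses the type system, so it belongs in test fixtures, never in production code.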

View File

@@ -0,0 +1,142 @@
package v1alpha2
import (
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"github.com/emicklei/go-restful"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
fakek8s "k8s.io/client-go/kubernetes/fake"
"k8s.io/klog"
"kubesphere.io/kubesphere/pkg/simple/client/kiali"
"kubesphere.io/kubesphere/pkg/simple/client/servicemesh"
"kubesphere.io/kubesphere/pkg/utils/reflectutils"
)
func prepare() (*Handler, error) {
var namespaceName = "kubesphere-system"
var serviceAccountName = "kubesphere"
var secretName = "kiali"
clientset := fakek8s.NewSimpleClientset()
ctx := context.Background()
namespacesClient := clientset.CoreV1().Namespaces()
ns := &corev1.Namespace{
ObjectMeta: metav1.ObjectMeta{
Name: namespaceName,
},
}
_, err := namespacesClient.Create(ctx, ns, metav1.CreateOptions{})
if err != nil {
klog.Errorf("create namespace failed ")
return nil, err
}
secret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespaceName,
},
}
object := &corev1.ObjectReference{
Name: secretName,
}
sa := &corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: serviceAccountName,
Namespace: namespaceName,
},
Secrets: []corev1.ObjectReference{*object},
}
serviceAccountClient := clientset.CoreV1().ServiceAccounts(namespaceName)
_, err = serviceAccountClient.Create(ctx, sa, metav1.CreateOptions{})
if err != nil {
klog.Errorf("create serviceAccount failed ")
return nil, err
}
secretClient := clientset.CoreV1().Secrets(namespaceName)
_, err = secretClient.Create(ctx, secret, metav1.CreateOptions{})
if err != nil {
klog.Errorf("create secret failed ")
return nil, err
}
// mock jaeger server
ts := httptest.NewServer(http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
writer.WriteHeader(http.StatusOK)
}))
options := &servicemesh.Options{
IstioPilotHost: "",
KialiQueryHost: "",
JaegerQueryHost: ts.URL,
ServicemeshPrometheusHost: "",
}
handler := NewHandler(options, clientset, nil)
token, _ := json.Marshal(
&kiali.TokenResponse{
Username: "test",
Token: "test",
},
)
mc := &kiali.MockClient{
TokenResult: token,
RequestResult: "fake",
}
client := kiali.NewClient("token", nil, mc, "token", options.KialiQueryHost)
err = reflectutils.SetUnExportedField(handler, "client", client)
if err != nil {
klog.Errorf("apply mock client failed")
return nil, err
}
return handler, nil
}
func TestGetServiceTracing(t *testing.T) {
handler, err := prepare()
if err != nil {
t.Fatalf("init handler failed: %v", err)
}
namespaceName := "namespace-test"
serviceName := "service-test"
url := fmt.Sprintf("/namespaces/%s/services/%s/traces", namespaceName, serviceName)
request, _ := http.NewRequest("GET", url, nil)
query := request.URL.Query()
query.Add("start", "1650167872000000")
query.Add("end", "1650211072000000")
query.Add("limit", "10")
request.URL.RawQuery = query.Encode()
restfulRequest := restful.NewRequest(request)
pathMap := make(map[string]string)
pathMap["namespace"] = namespaceName
pathMap["service"] = serviceName
if err := reflectutils.SetUnExportedField(restfulRequest, "pathParameters", pathMap); err != nil {
t.Fatalf("set pathParameters failed: %v", err)
}
recorder := httptest.NewRecorder()
restfulResponse := restful.NewResponse(recorder)
restfulResponse.SetRequestAccepts("application/json")
handler.GetServiceTracing(restfulRequest, restfulResponse)
if status := restfulResponse.StatusCode(); status != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", status, http.StatusOK)
}
}

View File

@@ -41,6 +41,8 @@ import (
kubesphere "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/informers"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/models/tenant"
servererr "kubesphere.io/kubesphere/pkg/server/errors"
@@ -58,16 +60,16 @@ type tenantHandler struct {
func NewTenantHandler(factory informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface,
evtsClient events.Client, loggingClient logging.Client, auditingclient auditing.Client,
am am.AccessManagementInterface, authorizer authorizer.Authorizer,
am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter,
meteringOptions *meteringclient.Options, stopCh <-chan struct{}) *tenantHandler {
meteringOptions *meteringclient.Options, opClient openpitrix.Interface) *tenantHandler {
if meteringOptions == nil || meteringOptions.RetentionDay == "" {
meteringOptions = &meteringclient.DefaultMeteringOption
}
return &tenantHandler{
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourceGetter, stopCh),
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourceGetter, opClient),
meteringOptions: meteringOptions,
}
}
@@ -200,30 +202,40 @@ func (h *tenantHandler) CreateNamespace(request *restful.Request, response *rest
response.WriteEntity(created)
}
func (h *tenantHandler) CreateWorkspaceTemplate(request *restful.Request, response *restful.Response) {
func (h *tenantHandler) CreateWorkspaceTemplate(req *restful.Request, resp *restful.Response) {
var workspace tenantv1alpha2.WorkspaceTemplate
err := request.ReadEntity(&workspace)
err := req.ReadEntity(&workspace)
if err != nil {
klog.Error(err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
requestUser, ok := request.UserFrom(req.Request.Context())
if !ok {
err := fmt.Errorf("cannot obtain user info")
klog.Errorln(err)
api.HandleForbidden(resp, req, err)
return
}
created, err := h.tenant.CreateWorkspaceTemplate(&workspace)
created, err := h.tenant.CreateWorkspaceTemplate(requestUser, &workspace)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
api.HandleNotFound(response, request, err)
api.HandleNotFound(resp, req, err)
return
}
api.HandleBadRequest(response, request, err)
if errors.IsForbidden(err) {
api.HandleForbidden(resp, req, err)
return
}
api.HandleBadRequest(resp, req, err)
return
}
response.WriteEntity(created)
resp.WriteEntity(created)
}
func (h *tenantHandler) DeleteWorkspaceTemplate(request *restful.Request, response *restful.Response) {
@@ -251,42 +263,53 @@ func (h *tenantHandler) DeleteWorkspaceTemplate(request *restful.Request, respon
response.WriteEntity(servererr.None)
}
func (h *tenantHandler) UpdateWorkspaceTemplate(request *restful.Request, response *restful.Response) {
workspaceName := request.PathParameter("workspace")
func (h *tenantHandler) UpdateWorkspaceTemplate(req *restful.Request, resp *restful.Response) {
workspaceName := req.PathParameter("workspace")
var workspace tenantv1alpha2.WorkspaceTemplate
err := request.ReadEntity(&workspace)
err := req.ReadEntity(&workspace)
if err != nil {
klog.Error(err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
if workspaceName != workspace.Name {
err := fmt.Errorf("the name of the object (%s) does not match the name on the URL (%s)", workspace.Name, workspaceName)
klog.Errorf("%+v", err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
updated, err := h.tenant.UpdateWorkspaceTemplate(&workspace)
requestUser, ok := request.UserFrom(req.Request.Context())
if !ok {
err := fmt.Errorf("cannot obtain user info")
klog.Errorln(err)
api.HandleForbidden(resp, req, err)
return
}
updated, err := h.tenant.UpdateWorkspaceTemplate(requestUser, &workspace)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
api.HandleNotFound(response, request, err)
api.HandleNotFound(resp, req, err)
return
}
if errors.IsBadRequest(err) {
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
api.HandleInternalError(response, request, err)
if errors.IsForbidden(err) {
api.HandleForbidden(resp, req, err)
return
}
api.HandleInternalError(resp, req, err)
return
}
response.WriteEntity(updated)
resp.WriteEntity(updated)
}
func (h *tenantHandler) DescribeWorkspaceTemplate(request *restful.Request, response *restful.Response) {
@@ -518,33 +541,44 @@ func (h *tenantHandler) PatchNamespace(request *restful.Request, response *restf
response.WriteEntity(patched)
}
func (h *tenantHandler) PatchWorkspaceTemplate(request *restful.Request, response *restful.Response) {
workspaceName := request.PathParameter("workspace")
func (h *tenantHandler) PatchWorkspaceTemplate(req *restful.Request, resp *restful.Response) {
workspaceName := req.PathParameter("workspace")
var data json.RawMessage
err := request.ReadEntity(&data)
err := req.ReadEntity(&data)
if err != nil {
klog.Error(err)
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
patched, err := h.tenant.PatchWorkspaceTemplate(workspaceName, data)
requestUser, ok := request.UserFrom(req.Request.Context())
if !ok {
err := fmt.Errorf("cannot obtain user info")
klog.Errorln(err)
api.HandleForbidden(resp, req, err)
return
}
patched, err := h.tenant.PatchWorkspaceTemplate(requestUser, workspaceName, data)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
api.HandleNotFound(response, request, err)
api.HandleNotFound(resp, req, err)
return
}
if errors.IsBadRequest(err) {
api.HandleBadRequest(response, request, err)
api.HandleBadRequest(resp, req, err)
return
}
api.HandleInternalError(response, request, err)
if errors.IsForbidden(err) {
api.HandleForbidden(resp, req, err)
return
}
api.HandleInternalError(resp, req, err)
return
}
response.WriteEntity(patched)
resp.WriteEntity(patched)
}
func (h *tenantHandler) ListClusters(r *restful.Request, response *restful.Response) {
@@ -555,8 +589,8 @@ func (h *tenantHandler) ListClusters(r *restful.Request, response *restful.Respo
return
}
result, err := h.tenant.ListClusters(user)
queryParam := query.ParseQueryParameter(r)
result, err := h.tenant.ListClusters(user, queryParam)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {

View File

@@ -40,8 +40,10 @@ import (
"kubesphere.io/kubesphere/pkg/informers"
"kubesphere.io/kubesphere/pkg/models"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/metering"
"kubesphere.io/kubesphere/pkg/models/monitoring"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/server/errors"
"kubesphere.io/kubesphere/pkg/simple/client/auditing"
@@ -63,12 +65,12 @@ func Resource(resource string) schema.GroupResource {
func AddToContainer(c *restful.Container, factory informers.InformerFactory, k8sclient kubernetes.Interface,
ksclient kubesphere.Interface, evtsClient events.Client, loggingClient logging.Client,
auditingclient auditing.Client, am am.AccessManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, stopCh <-chan struct{}) error {
auditingclient auditing.Client, am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, opClient openpitrix.Interface) error {
mimePatch := []string{restful.MIME_JSON, runtime.MimeMergePatchJson, runtime.MimeJsonPatchJson}
ws := runtime.NewWebService(GroupVersion)
handler := NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, stopCh)
handler := NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient)
ws.Route(ws.GET("/clusters").
To(handler.ListClusters).

View File

@@ -31,6 +31,8 @@ import (
kubesphere "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
"kubesphere.io/kubesphere/pkg/informers"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/models/tenant"
"kubesphere.io/kubesphere/pkg/simple/client/auditing"
@@ -47,16 +49,16 @@ type tenantHandler struct {
func newTenantHandler(factory informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface,
evtsClient events.Client, loggingClient logging.Client, auditingclient auditing.Client,
am am.AccessManagementInterface, authorizer authorizer.Authorizer,
am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter,
meteringOptions *meteringclient.Options, stopCh <-chan struct{}) *tenantHandler {
meteringOptions *meteringclient.Options, opClient openpitrix.Interface) *tenantHandler {
if meteringOptions == nil || meteringOptions.RetentionDay == "" {
meteringOptions = &meteringclient.DefaultMeteringOption
}
return &tenantHandler{
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourceGetter, stopCh),
tenant: tenant.New(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourceGetter, opClient),
meteringOptions: meteringOptions,
}
}

View File

@@ -37,6 +37,8 @@ import (
"kubesphere.io/kubesphere/pkg/kapis/tenant/v1alpha2"
"kubesphere.io/kubesphere/pkg/models"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/openpitrix"
resourcev1alpha3 "kubesphere.io/kubesphere/pkg/models/resources/v1alpha3/resource"
"kubesphere.io/kubesphere/pkg/server/errors"
"kubesphere.io/kubesphere/pkg/simple/client/auditing"
@@ -58,13 +60,13 @@ func Resource(resource string) schema.GroupResource {
func AddToContainer(c *restful.Container, factory informers.InformerFactory, k8sclient kubernetes.Interface,
ksclient kubesphere.Interface, evtsClient events.Client, loggingClient logging.Client,
auditingclient auditing.Client, am am.AccessManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, stopCh <-chan struct{}) error {
auditingclient auditing.Client, am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer,
monitoringclient monitoringclient.Interface, cache cache.Cache, meteringOptions *meteringclient.Options, opClient openpitrix.Interface) error {
mimePatch := []string{restful.MIME_JSON, runtime.MimeMergePatchJson, runtime.MimeJsonPatchJson}
ws := runtime.NewWebService(GroupVersion)
v1alpha2Handler := v1alpha2.NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, stopCh)
handler := newTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, stopCh)
v1alpha2Handler := v1alpha2.NewTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient)
handler := newTenantHandler(factory, k8sclient, ksclient, evtsClient, loggingClient, auditingclient, am, im, authorizer, monitoringclient, resourcev1alpha3.NewResourceGetter(factory, cache), meteringOptions, opClient)
ws.Route(ws.POST("/workspacetemplates").
To(v1alpha2Handler.CreateWorkspaceTemplate).

View File

@@ -47,12 +47,13 @@ import (
)
const (
MasterLabel = "node-role.kubernetes.io/master"
SidecarInject = "sidecar.istio.io/inject"
gatewayPrefix = "kubesphere-router-"
workingNamespace = "kubesphere-controls-system"
globalGatewayname = gatewayPrefix + "kubesphere-system"
helmPatch = `{"metadata":{"annotations":{"meta.helm.sh/release-name":"%s-ingress","meta.helm.sh/release-namespace":"%s"},"labels":{"helm.sh/chart":"ingress-nginx-3.35.0","app.kubernetes.io/managed-by":"Helm","app":null,"component":null,"tier":null}},"spec":{"selector":null}}`
MasterLabel = "node-role.kubernetes.io/master"
SidecarInject = "sidecar.istio.io/inject"
gatewayPrefix = "kubesphere-router-"
workingNamespace = "kubesphere-controls-system"
globalGatewayNameSuffix = "kubesphere-system"
globalGatewayName = gatewayPrefix + globalGatewayNameSuffix
helmPatch = `{"metadata":{"annotations":{"meta.helm.sh/release-name":"%s-ingress","meta.helm.sh/release-namespace":"%s"},"labels":{"helm.sh/chart":"ingress-nginx-3.35.0","app.kubernetes.io/managed-by":"Helm","app":null,"component":null,"tier":null}},"spec":{"selector":null}}`
)
type GatewayOperator interface {
@@ -62,7 +63,7 @@ type GatewayOperator interface {
UpdateGateway(namespace string, obj *v1alpha1.Gateway) (*v1alpha1.Gateway, error)
UpgradeGateway(namespace string) (*v1alpha1.Gateway, error)
ListGateways(query *query.Query) (*api.ListResult, error)
GetPods(namesapce string, query *query.Query) (*api.ListResult, error)
GetPods(namespace string, query *query.Query) (*api.ListResult, error)
GetPodLogs(ctx context.Context, namespace string, podName string, logOptions *corev1.PodLogOptions, responseWriter io.Writer) error
}
@@ -86,10 +87,14 @@ func NewGatewayOperator(client client.Client, cache cache.Cache, options *gatewa
func (c *gatewayOperator) getWorkingNamespace(namespace string) string {
ns := c.options.Namespace
// Set the working namespace to watching namespace when the Gatway's Namsapce Option is empty
// Set the working namespace to watching namespace when the Gateway's Namespace Option is empty
if ns == "" {
ns = namespace
}
// Convert the global gateway query parameter
if namespace == globalGatewayNameSuffix {
ns = workingNamespace
}
return ns
}
@@ -97,7 +102,7 @@ func (c *gatewayOperator) getWorkingNamespace(namespace string) string {
func (c *gatewayOperator) overrideDefaultValue(gateway *v1alpha1.Gateway, namespace string) *v1alpha1.Gateway {
// override default name
gateway.Name = fmt.Sprint(gatewayPrefix, namespace)
if gateway.Name != globalGatewayname {
if gateway.Name != globalGatewayName {
gateway.Spec.Controller.Scope = v1alpha1.Scope{Enabled: true, Namespace: namespace}
}
gateway.Namespace = c.getWorkingNamespace(namespace)
@@ -108,7 +113,7 @@ func (c *gatewayOperator) overrideDefaultValue(gateway *v1alpha1.Gateway, namesp
func (c *gatewayOperator) getGlobalGateway() *v1alpha1.Gateway {
globalkey := types.NamespacedName{
Namespace: workingNamespace,
Name: globalGatewayname,
Name: globalGatewayName,
}
global := &v1alpha1.Gateway{}
@@ -317,7 +322,7 @@ func (c *gatewayOperator) DeleteGateway(namespace string) error {
// Update Gateway
func (c *gatewayOperator) UpdateGateway(namespace string, obj *v1alpha1.Gateway) (*v1alpha1.Gateway, error) {
if c.options.Namespace == "" && obj.Namespace != namespace || c.options.Namespace != "" && c.options.Namespace != obj.Namespace {
return nil, fmt.Errorf("namepsace doesn't match with origin namesapce")
return nil, fmt.Errorf("namespace doesn't match with origin namespace")
}
c.overrideDefaultValue(obj, namespace)
err := c.client.Update(context.TODO(), obj)
@@ -331,7 +336,7 @@ func (c *gatewayOperator) UpgradeGateway(namespace string) (*v1alpha1.Gateway, e
if l == nil {
return nil, fmt.Errorf("invalid operation, no legacy gateway was found")
}
if l.Namespace != c.options.Namespace {
if l.Namespace != c.getWorkingNamespace(namespace) {
return nil, fmt.Errorf("invalid operation, can't upgrade legacy gateway when working namespace changed")
}
@@ -345,7 +350,7 @@ func (c *gatewayOperator) UpgradeGateway(namespace string) (*v1alpha1.Gateway, e
}()
}
// Delete old deployment, because it's not compatile with the deployment in the helm chart.
// Delete old deployment, because it's not compatible with the deployment in the helm chart.
// We can't defer here: a potential race condition could cause the gateway operator to fail.
d := &appsv1.Deployment{
ObjectMeta: v1.ObjectMeta{
@@ -451,7 +456,7 @@ func (c *gatewayOperator) compare(left runtime.Object, right runtime.Object, fie
func (c *gatewayOperator) filter(object runtime.Object, filter query.Filter) bool {
var objMeta v1.ObjectMeta
var namesapce string
var namespace string
gateway, ok := object.(*v1alpha1.Gateway)
if !ok {
@@ -459,31 +464,31 @@ func (c *gatewayOperator) filter(object runtime.Object, filter query.Filter) boo
if !ok {
return false
}
namesapce = svc.Labels["project"]
namespace = svc.Labels["project"]
objMeta = svc.ObjectMeta
} else {
namesapce = gateway.Spec.Controller.Scope.Namespace
namespace = gateway.Spec.Controller.Scope.Namespace
objMeta = gateway.ObjectMeta
}
switch filter.Field {
case query.FieldNamespace:
return strings.Compare(namesapce, string(filter.Value)) == 0
return strings.Compare(namespace, string(filter.Value)) == 0
default:
return v1alpha3.DefaultObjectMetaFilter(objMeta, filter)
}
}
func (c *gatewayOperator) GetPods(namesapce string, query *query.Query) (*api.ListResult, error) {
func (c *gatewayOperator) GetPods(namespace string, query *query.Query) (*api.ListResult, error) {
podGetter := pod.New(c.factory.KubernetesSharedInformerFactory())
//TODO: move the selector string to options
selector, err := labels.Parse(fmt.Sprintf("app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/instance=kubesphere-router-%s-ingress", namesapce))
selector, err := labels.Parse(fmt.Sprintf("app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/instance=kubesphere-router-%s-ingress", namespace))
if err != nil {
return nil, fmt.Errorf("invild selector config")
return nil, fmt.Errorf("invalid selector config")
}
query.LabelSelector = selector.String()
return podGetter.List(c.getWorkingNamespace(namesapce), query)
return podGetter.List(c.getWorkingNamespace(namespace), query)
}
func (c *gatewayOperator) GetPodLogs(ctx context.Context, namespace string, podName string, logOptions *corev1.PodLogOptions, responseWriter io.Writer) error {

View File

@@ -92,7 +92,7 @@ func Test_gatewayOperator_GetGateways(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
},
},
{
@@ -105,7 +105,7 @@ func Test_gatewayOperator_GetGateways(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
},
},
{
@@ -336,7 +336,7 @@ func Test_gatewayOperator_CreateGateway(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
obj: &v1alpha1.Gateway{
TypeMeta: v1.TypeMeta{
Kind: "Gateway",
@@ -346,7 +346,7 @@ func Test_gatewayOperator_CreateGateway(t *testing.T) {
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "projct1",
Namespace: "project1",
},
},
},
@@ -367,7 +367,7 @@ func Test_gatewayOperator_CreateGateway(t *testing.T) {
},
},
args: args{
namespace: "projct2",
namespace: "project2",
obj: &v1alpha1.Gateway{
TypeMeta: v1.TypeMeta{
Kind: "Gateway",
@@ -377,7 +377,7 @@ func Test_gatewayOperator_CreateGateway(t *testing.T) {
Controller: v1alpha1.ControllerSpec{
Scope: v1alpha1.Scope{
Enabled: true,
Namespace: "projct2",
Namespace: "project2",
},
},
},
@@ -593,7 +593,7 @@ func Test_gatewayOperator_UpgradeGateway(t *testing.T) {
},
},
args: args{
namespace: "projct1",
namespace: "project1",
},
wantErr: true,
},

View File

@@ -220,7 +220,7 @@ func (o *operator) createCSR(username string) error {
}
var csrBuffer, keyBuffer bytes.Buffer
if err = pem.Encode(&keyBuffer, &pem.Block{Type: "PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(x509key)}); err != nil {
if err = pem.Encode(&keyBuffer, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(x509key)}); err != nil {
klog.Errorln(err)
return err
}

View File

@@ -21,6 +21,8 @@ import (
"encoding/base64"
"testing"
"kubesphere.io/kubesphere/pkg/utils/reposcache"
"github.com/go-openapi/strfmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
@@ -172,5 +174,5 @@ func prepareAppOperator() ApplicationInterface {
k8sClient = fakek8s.NewSimpleClientset()
fakeInformerFactory = informers.NewInformerFactories(k8sClient, ksClient, nil, nil, nil, nil)
return newApplicationOperator(cachedReposData, fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient, fake.NewFakeS3())
return newApplicationOperator(reposcache.NewReposCache(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient, fake.NewFakeS3())
}

View File

@@ -17,11 +17,11 @@ limitations under the License.
package openpitrix
import (
"sync"
"k8s.io/client-go/tools/cache"
"k8s.io/klog"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
"kubesphere.io/api/application/v1alpha1"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned"
@@ -46,51 +46,42 @@ type openpitrixOperator struct {
CategoryInterface
}
var cachedReposData reposcache.ReposCache
var helmReposInformer cache.SharedIndexInformer
var once sync.Once
func init() {
cachedReposData = reposcache.NewReposCache()
}
func NewOpenpitrixOperator(ksInformers ks_informers.InformerFactory, ksClient versioned.Interface, s3Client s3.Interface, stopCh <-chan struct{}) Interface {
once.Do(func() {
klog.Infof("start helm repo informer")
helmReposInformer = ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmRepos().Informer()
helmReposInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.AddRepo(r)
},
UpdateFunc: func(oldObj, newObj interface{}) {
oldRepo := oldObj.(*v1alpha1.HelmRepo)
newRepo := newObj.(*v1alpha1.HelmRepo)
cachedReposData.UpdateRepo(oldRepo, newRepo)
},
DeleteFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.DeleteRepo(r)
},
})
ctgInformer := ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmCategories().Informer()
ctgInformer.AddIndexers(map[string]cache.IndexFunc{
reposcache.CategoryIndexer: func(obj interface{}) ([]string, error) {
ctg, _ := obj.(*v1alpha1.HelmCategory)
return []string{ctg.Spec.Name}, nil
},
})
indexer := ctgInformer.GetIndexer()
cachedReposData.SetCategoryIndexer(indexer)
func NewOpenpitrixOperator(ksInformers ks_informers.InformerFactory, ksClient versioned.Interface, s3Client s3.Interface, cc clusterclient.ClusterClients, stopCh <-chan struct{}) Interface {
klog.Infof("start helm repo informer")
cachedReposData := reposcache.NewReposCache()
helmReposInformer := ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmRepos().Informer()
helmReposInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.AddRepo(r)
},
UpdateFunc: func(oldObj, newObj interface{}) {
oldRepo := oldObj.(*v1alpha1.HelmRepo)
newRepo := newObj.(*v1alpha1.HelmRepo)
cachedReposData.UpdateRepo(oldRepo, newRepo)
},
DeleteFunc: func(obj interface{}) {
r := obj.(*v1alpha1.HelmRepo)
cachedReposData.DeleteRepo(r)
},
})
ctgInformer := ksInformers.KubeSphereSharedInformerFactory().Application().V1alpha1().HelmCategories().Informer()
ctgInformer.AddIndexers(map[string]cache.IndexFunc{
reposcache.CategoryIndexer: func(obj interface{}) ([]string, error) {
ctg, _ := obj.(*v1alpha1.HelmCategory)
return []string{ctg.Spec.Name}, nil
},
})
indexer := ctgInformer.GetIndexer()
cachedReposData.SetCategoryIndexer(indexer)
return &openpitrixOperator{
AttachmentInterface: newAttachmentOperator(s3Client),
ApplicationInterface: newApplicationOperator(cachedReposData, ksInformers.KubeSphereSharedInformerFactory(), ksClient, s3Client),
RepoInterface: newRepoOperator(cachedReposData, ksInformers.KubeSphereSharedInformerFactory(), ksClient),
ReleaseInterface: newReleaseOperator(cachedReposData, ksInformers.KubernetesSharedInformerFactory(), ksInformers.KubeSphereSharedInformerFactory(), ksClient),
ReleaseInterface: newReleaseOperator(cachedReposData, ksInformers.KubernetesSharedInformerFactory(), ksInformers.KubeSphereSharedInformerFactory(), ksClient, cc),
CategoryInterface: newCategoryOperator(cachedReposData, ksInformers.KubeSphereSharedInformerFactory(), ksClient),
}
}

View File

@@ -70,13 +70,13 @@ type releaseOperator struct {
clusterClients clusterclient.ClusterClients
}
func newReleaseOperator(cached reposcache.ReposCache, k8sFactory informers.SharedInformerFactory, ksFactory externalversions.SharedInformerFactory, ksClient versioned.Interface) ReleaseInterface {
func newReleaseOperator(cached reposcache.ReposCache, k8sFactory informers.SharedInformerFactory, ksFactory externalversions.SharedInformerFactory, ksClient versioned.Interface, cc clusterclient.ClusterClients) ReleaseInterface {
c := &releaseOperator{
informers: k8sFactory,
rlsClient: ksClient.ApplicationV1alpha1().HelmReleases(),
rlsLister: ksFactory.Application().V1alpha1().HelmReleases().Lister(),
cachedRepos: cached,
clusterClients: clusterclient.NewClusterClient(ksFactory.Cluster().V1alpha1().Clusters()),
clusterClients: cc,
appVersionLister: ksFactory.Application().V1alpha1().HelmApplicationVersions().Lister(),
}

View File

@@ -21,6 +21,8 @@ import (
"encoding/base64"
"testing"
"kubesphere.io/kubesphere/pkg/utils/reposcache"
"github.com/go-openapi/strfmt"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/klog"
@@ -69,7 +71,7 @@ func TestOpenPitrixRelease(t *testing.T) {
}
}
rlsOperator := newReleaseOperator(cachedReposData, fakeInformerFactory.KubernetesSharedInformerFactory(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient)
rlsOperator := newReleaseOperator(reposcache.NewReposCache(), fakeInformerFactory.KubernetesSharedInformerFactory(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient, nil)
req := CreateClusterRequest{
Name: "test-rls",

View File

@@ -20,6 +20,8 @@ import (
"context"
"testing"
"kubesphere.io/kubesphere/pkg/utils/reposcache"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
fakek8s "k8s.io/client-go/kubernetes/fake"
"k8s.io/klog"
@@ -111,5 +113,5 @@ func prepareRepoOperator() RepoInterface {
k8sClient = fakek8s.NewSimpleClientset()
fakeInformerFactory = informers.NewInformerFactories(k8sClient, ksClient, nil, nil, nil, nil)
return newRepoOperator(cachedReposData, fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient)
return newRepoOperator(reposcache.NewReposCache(), fakeInformerFactory.KubeSphereSharedInformerFactory(), ksClient)
}

View File

@@ -302,7 +302,7 @@ func (c *repoOperator) ListRepos(conditions *params.Conditions, orderBy string,
start, end := (&query.Pagination{Limit: limit, Offset: offset}).GetValidPagination(totalCount)
repos = repos[start:end]
items := make([]interface{}, 0, len(repos))
for i, j := offset, 0; i < len(repos) && j < limit; i, j = i+1, j+1 {
for i := range repos {
items = append(items, convertRepo(repos[i]))
}
return &models.PageableResponse{Items: items, TotalCount: totalCount}, nil

View File

@@ -280,6 +280,9 @@ type AppVersionReview struct {
// version type
VersionType string `json:"version_type,omitempty"`
// Workspace of the app version
Workspace string `json:"workspace,omitempty"`
}
type CreateAppRequest struct {
@@ -710,7 +713,7 @@ type Repo struct {
// selectors
Selectors RepoSelectors `json:"selectors"`
// status eg.[active|deleted]
// status eg.[successful|failed|syncing]
Status string `json:"status,omitempty"`
// record status changed time

View File

@@ -431,6 +431,10 @@ func convertRepo(in *v1alpha1.HelmRepo) *Repo {
out.Name = in.GetTrueName()
out.Status = in.Status.State
// set default status `syncing` when helmrepo not reconcile yet
if out.Status == "" {
out.Status = v1alpha1.RepoStateSyncing
}
date := strfmt.DateTime(time.Unix(in.CreationTimestamp.Unix(), 0))
out.CreateTime = &date
@@ -817,6 +821,7 @@ func convertAppVersionReview(app *v1alpha1.HelmApplication, appVersion *v1alpha1
review.VersionID = appVersion.GetHelmApplicationVersionId()
review.Phase = AppVersionReviewPhaseOAIGen{}
review.VersionName = appVersion.GetVersionName()
review.Workspace = appVersion.GetWorkspace()
review.StatusTime = strfmt.DateTime(status.Audit[0].Time.Time)
review.AppName = app.GetTrueName()

View File

@@ -0,0 +1,240 @@
package persistentvolumeclaim
import (
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes/fake"
"github.com/google/go-cmp/cmp"
snapshot "github.com/kubernetes-csi/external-snapshotter/client/v4/apis/volumesnapshot/v1"
snapshotefakeclient "github.com/kubernetes-csi/external-snapshotter/client/v4/clientset/versioned/fake"
snapshotinformers "github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha2"
"kubesphere.io/kubesphere/pkg/server/params"
)
var (
testStorageClassName = "sc1"
)
var (
pvc1 = &corev1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-1",
Namespace: "default",
Annotations: map[string]string{
"kubesphere.io/in-use": "false",
"kubesphere.io/allow-snapshot": "false",
},
},
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
Status: corev1.PersistentVolumeClaimStatus{
Phase: corev1.ClaimPending,
},
}
pvc2 = &corev1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-2",
Namespace: "default",
Annotations: map[string]string{
"kubesphere.io/in-use": "false",
"kubesphere.io/allow-snapshot": "false",
},
},
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
Status: corev1.PersistentVolumeClaimStatus{
Phase: corev1.ClaimLost,
},
}
pvc3 = &corev1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-3",
Namespace: "default",
Annotations: map[string]string{
"kubesphere.io/in-use": "true",
"kubesphere.io/allow-snapshot": "false",
},
},
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
Status: corev1.PersistentVolumeClaimStatus{
Phase: corev1.ClaimBound,
},
}
pod1 = &corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: "pod-1",
Namespace: "default",
},
Spec: corev1.PodSpec{
Volumes: []corev1.Volume{
{
Name: "data",
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: pvc3.Name,
},
},
},
},
},
}
vsc1 = &snapshot.VolumeSnapshotClass{
ObjectMeta: metav1.ObjectMeta{
Name: "VolumeSnapshotClass-1",
Namespace: "default",
},
Driver: testStorageClassName,
}
persistentVolumeClaims = []interface{}{pvc1, pvc2, pvc3}
pods = []interface{}{pod1}
volumeSnapshotClasses = []interface{}{vsc1}
)
func prepare() (v1alpha2.Interface, error) {
client := fake.NewSimpleClientset()
informer := informers.NewSharedInformerFactory(client, 0)
snapshotClient := snapshotefakeclient.NewSimpleClientset()
snapshotInformers := snapshotinformers.NewSharedInformerFactory(snapshotClient, 0)
for _, persistentVolumeClaim := range persistentVolumeClaims {
err := informer.Core().V1().PersistentVolumeClaims().Informer().GetIndexer().Add(persistentVolumeClaim)
if err != nil {
return nil, err
}
}
for _, pod := range pods {
err := informer.Core().V1().Pods().Informer().GetIndexer().Add(pod)
if err != nil {
return nil, err
}
}
for _, volumeSnapshotClass := range volumeSnapshotClasses {
err := snapshotInformers.Snapshot().V1().VolumeSnapshotClasses().Informer().GetIndexer().Add(volumeSnapshotClass)
if err != nil {
return nil, err
}
}
return NewPersistentVolumeClaimSearcher(informer, snapshotInformers), nil
}
func TestGet(t *testing.T) {
tests := []struct {
Namespace string
Name string
Expected interface{}
ExpectedErr error
}{
{
"default",
"pvc-1",
pvc1,
nil,
},
}
getter, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
got, err := getter.Get(test.Namespace, test.Name)
if test.ExpectedErr != nil && err != test.ExpectedErr {
t.Errorf("expected error %v, got %v", test.ExpectedErr, err)
} else if err != nil {
t.Fatal(err)
}
diff := cmp.Diff(got, test.Expected)
if diff != "" {
t.Errorf("%T differ (-got, +want): %s", test.Expected, diff)
}
}
}
func TestSearch(t *testing.T) {
tests := []struct {
Namespace string
Conditions *params.Conditions
OrderBy string
Reverse bool
Expected []interface{}
ExpectedErr error
}{
{
Namespace: "default",
Conditions: &params.Conditions{
Match: map[string]string{
v1alpha2.Status: v1alpha2.StatusPending,
},
Fuzzy: nil,
},
OrderBy: "name",
Reverse: false,
Expected: []interface{}{pvc1},
ExpectedErr: nil,
},
{
Namespace: "default",
Conditions: &params.Conditions{
Match: map[string]string{
v1alpha2.Status: v1alpha2.StatusLost,
},
Fuzzy: nil,
},
OrderBy: "name",
Reverse: false,
Expected: []interface{}{pvc2},
ExpectedErr: nil,
},
{
Namespace: "default",
Conditions: &params.Conditions{
Match: map[string]string{
v1alpha2.Status: v1alpha2.StatusBound,
},
Fuzzy: nil,
},
OrderBy: "name",
Reverse: false,
Expected: []interface{}{pvc3},
ExpectedErr: nil,
},
}
searcher, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
got, err := searcher.Search(test.Namespace, test.Conditions, test.OrderBy, test.Reverse)
if test.ExpectedErr != nil && err != test.ExpectedErr {
t.Errorf("expected error %v, got %v", test.ExpectedErr, err)
} else if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(got, test.Expected); diff != "" {
t.Errorf("%T differ (-got, +want): %s", test.Expected, diff)
}
}
}

View File

@@ -0,0 +1,217 @@
package federatedpersistentvolumeclaim
import (
"testing"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"github.com/google/go-cmp/cmp"
fedv1beta1 "kubesphere.io/api/types/v1beta1"
"kubesphere.io/kubesphere/pkg/api"
"kubesphere.io/kubesphere/pkg/apiserver/query"
"kubesphere.io/kubesphere/pkg/client/clientset/versioned/fake"
"kubesphere.io/kubesphere/pkg/client/informers/externalversions"
"kubesphere.io/kubesphere/pkg/models/resources/v1alpha3"
)
var (
testStorageClassName = "sc1"
)
var (
pvc1 = &fedv1beta1.FederatedPersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-1",
Namespace: "default",
},
}
pvc2 = &fedv1beta1.FederatedPersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-2",
Namespace: "default",
},
Spec: fedv1beta1.FederatedPersistentVolumeClaimSpec{
Template: fedv1beta1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
},
},
}
pvc3 = &fedv1beta1.FederatedPersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: "pvc-3",
Namespace: "default",
Labels: map[string]string{
"kubesphere.io/in-use": "false",
},
},
Spec: fedv1beta1.FederatedPersistentVolumeClaimSpec{
Template: fedv1beta1.PersistentVolumeClaimTemplate{
Spec: corev1.PersistentVolumeClaimSpec{
StorageClassName: &testStorageClassName,
},
},
},
}
federatedPersistentVolumeClaims = []*fedv1beta1.FederatedPersistentVolumeClaim{pvc1, pvc2, pvc3}
)
func fedPVCsToInterface(federatedPersistentVolumeClaims ...*fedv1beta1.FederatedPersistentVolumeClaim) []interface{} {
items := make([]interface{}, 0)
for _, fedPVC := range federatedPersistentVolumeClaims {
items = append(items, fedPVC)
}
return items
}
func fedPVCsToRuntimeObject(federatedPersistentVolumeClaims ...*fedv1beta1.FederatedPersistentVolumeClaim) []runtime.Object {
items := make([]runtime.Object, 0)
for _, fedPVC := range federatedPersistentVolumeClaims {
items = append(items, fedPVC)
}
return items
}
func prepare() (v1alpha3.Interface, error) {
client := fake.NewSimpleClientset(fedPVCsToRuntimeObject(federatedPersistentVolumeClaims...)...)
informer := externalversions.NewSharedInformerFactory(client, 0)
for _, fedPVC := range federatedPersistentVolumeClaims {
err := informer.Types().V1beta1().FederatedPersistentVolumeClaims().Informer().GetIndexer().Add(fedPVC)
if err != nil {
return nil, err
}
}
return New(informer), nil
}
func TestGet(t *testing.T) {
tests := []struct {
namespace string
name string
expected runtime.Object
expectedErr error
}{
{
namespace: "default",
name: "pvc-1",
expected: fedPVCsToRuntimeObject(pvc1)[0],
expectedErr: nil,
},
}
getter, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
pvc, err := getter.Get(test.namespace, test.name)
if test.expectedErr != nil && err != test.expectedErr {
t.Errorf("expected error %v, got %v", test.expectedErr, err)
} else if err != nil {
t.Fatal(err)
}
diff := cmp.Diff(pvc, test.expected)
if diff != "" {
t.Errorf("%T differ (-got, +want): %s", test.expected, diff)
}
}
}
func TestList(t *testing.T) {
tests := []struct {
description string
namespace string
query *query.Query
expected *api.ListResult
expectedErr error
}{
{
description: "test name filter",
namespace: "default",
query: &query.Query{
Pagination: &query.Pagination{
Limit: 10,
Offset: 0,
},
SortBy: query.FieldName,
Ascending: false,
Filters: map[query.Field]query.Value{query.FieldName: query.Value(pvc1.Name)},
},
expected: &api.ListResult{
Items: fedPVCsToInterface(federatedPersistentVolumeClaims[0]),
TotalItems: 1,
},
expectedErr: nil,
},
{
description: "test storageClass filter",
namespace: "default",
query: &query.Query{
Pagination: &query.Pagination{
Limit: 10,
Offset: 0,
},
SortBy: query.FieldName,
Ascending: false,
Filters: map[query.Field]query.Value{query.Field(storageClassName): query.Value(*pvc2.Spec.Template.Spec.StorageClassName)},
},
expected: &api.ListResult{
Items: fedPVCsToInterface(federatedPersistentVolumeClaims[2], federatedPersistentVolumeClaims[1]),
TotalItems: 2,
},
expectedErr: nil,
},
{
description: "test label filter",
namespace: "default",
query: &query.Query{
Pagination: &query.Pagination{
Limit: 10,
Offset: 0,
},
SortBy: query.FieldName,
Ascending: false,
LabelSelector: "kubesphere.io/in-use=false",
Filters: map[query.Field]query.Value{query.Field(storageClassName): query.Value(*pvc2.Spec.Template.Spec.StorageClassName)},
},
expected: &api.ListResult{
Items: fedPVCsToInterface(federatedPersistentVolumeClaims[2]),
TotalItems: 1,
},
expectedErr: nil,
},
}
lister, err := prepare()
if err != nil {
t.Fatal(err)
}
for _, test := range tests {
got, err := lister.List(test.namespace, test.query)
if test.expectedErr != nil && err != test.expectedErr {
t.Errorf("expected error %v, got %v", test.expectedErr, err)
} else if err != nil {
t.Fatal(err)
}
if diff := cmp.Diff(got, test.expected); diff != "" {
t.Errorf("[%s] %T differ (-got, +want): %s", test.description, test.expected, diff)
}
}
}

View File

@@ -20,6 +20,9 @@ import (
"sort"
"strings"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/klog"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
@@ -150,33 +153,13 @@ func DefaultObjectMetaFilter(item metav1.ObjectMeta, filter query.Filter) bool {
}
}
func labelMatch(labels map[string]string, filter string) bool {
fields := strings.SplitN(filter, "=", 2)
var key, value string
var opposite bool
if len(fields) == 2 {
key = fields[0]
if strings.HasSuffix(key, "!") {
key = strings.TrimSuffix(key, "!")
opposite = true
}
value = fields[1]
} else {
key = fields[0]
value = "*"
func labelMatch(m map[string]string, filter string) bool {
labelSelector, err := labels.Parse(filter)
if err != nil {
klog.Warningf("invalid labelSelector %s: %s", filter, err)
return false
}
for k, v := range labels {
if opposite {
if (k == key) && v != value {
return true
}
} else {
if (k == key) && (value == "*" || v == value) {
return true
}
}
}
return false
return labelSelector.Matches(labels.Set(m))
}
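The replaced hand-rolled matcher only understood `key=value`, `key!=value`, and bare-key filters, while the new code delegates the full Kubernetes selector grammar to `labels.Parse`. A stdlib sketch of the subset the old code handled (`matchSimpleSelector` is a hypothetical helper for illustration, not the real API):

```go
package main

import (
	"fmt"
	"strings"
)

// matchSimpleSelector mirrors the old labelMatch logic: "key=value" matches
// exactly, "key!=value" negates, and a bare "key" matches any value.
func matchSimpleSelector(labels map[string]string, filter string) bool {
	fields := strings.SplitN(filter, "=", 2)
	key := fields[0]
	value := "*"
	negate := false
	if len(fields) == 2 {
		if strings.HasSuffix(key, "!") {
			key = strings.TrimSuffix(key, "!")
			negate = true
		}
		value = fields[1]
	}
	v, ok := labels[key]
	if !ok {
		return false
	}
	if negate {
		return v != value
	}
	return value == "*" || v == value
}

func main() {
	pvcLabels := map[string]string{"kubesphere.io/in-use": "false"}
	fmt.Println(matchSimpleSelector(pvcLabels, "kubesphere.io/in-use=false")) // true
	fmt.Println(matchSimpleSelector(pvcLabels, "kubesphere.io/in-use!=false")) // false
}
```

`labels.Parse` additionally accepts set-based expressions such as `key in (a,b)` and `!key`, which is why delegating to it (and logging a warning on parse failure) is the more robust choice.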
func objectsToInterfaces(objs []runtime.Object) []interface{} {

View File

@@ -24,12 +24,15 @@ import (
"strings"
"time"
"github.com/mitchellh/mapstructure"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apiserver/pkg/authentication/user"
"k8s.io/client-go/kubernetes"
"k8s.io/klog"
@@ -40,6 +43,8 @@ import (
tenantv1alpha2 "kubesphere.io/api/tenant/v1alpha2"
typesv1beta1 "kubesphere.io/api/types/v1beta1"
iamv1alpha2 "kubesphere.io/api/iam/v1alpha2"
"kubesphere.io/kubesphere/pkg/api"
auditingv1alpha1 "kubesphere.io/kubesphere/pkg/api/auditing/v1alpha1"
eventsv1alpha1 "kubesphere.io/kubesphere/pkg/api/events/v1alpha1"
@@ -53,6 +58,7 @@ import (
"kubesphere.io/kubesphere/pkg/models/auditing"
"kubesphere.io/kubesphere/pkg/models/events"
"kubesphere.io/kubesphere/pkg/models/iam/am"
"kubesphere.io/kubesphere/pkg/models/iam/im"
"kubesphere.io/kubesphere/pkg/models/logging"
"kubesphere.io/kubesphere/pkg/models/metering"
"kubesphere.io/kubesphere/pkg/models/monitoring"
@@ -65,6 +71,8 @@ import (
loggingclient "kubesphere.io/kubesphere/pkg/simple/client/logging"
meteringclient "kubesphere.io/kubesphere/pkg/simple/client/metering"
monitoringclient "kubesphere.io/kubesphere/pkg/simple/client/monitoring"
"kubesphere.io/kubesphere/pkg/utils/clusterclient"
jsonpatchutil "kubesphere.io/kubesphere/pkg/utils/josnpatchutil"
"kubesphere.io/kubesphere/pkg/utils/stringutils"
)
@@ -74,10 +82,10 @@ type Interface interface {
ListWorkspaces(user user.Info, queryParam *query.Query) (*api.ListResult, error)
GetWorkspace(workspace string) (*tenantv1alpha1.Workspace, error)
ListWorkspaceTemplates(user user.Info, query *query.Query) (*api.ListResult, error)
CreateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
CreateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
DeleteWorkspaceTemplate(workspace string, opts metav1.DeleteOptions) error
UpdateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
PatchWorkspaceTemplate(workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error)
UpdateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error)
PatchWorkspaceTemplate(user user.Info, workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error)
DescribeWorkspaceTemplate(workspace string) (*tenantv1alpha2.WorkspaceTemplate, error)
ListNamespaces(user user.Info, workspace string, query *query.Query) (*api.ListResult, error)
ListDevOpsProjects(user user.Info, workspace string, query *query.Query) (*api.ListResult, error)
@@ -92,7 +100,7 @@ type Interface interface {
DeleteNamespace(workspace, namespace string) error
UpdateNamespace(workspace string, namespace *corev1.Namespace) (*corev1.Namespace, error)
PatchNamespace(workspace string, namespace *corev1.Namespace) (*corev1.Namespace, error)
ListClusters(info user.Info) (*api.ListResult, error)
ListClusters(info user.Info, queryParam *query.Query) (*api.ListResult, error)
Metering(user user.Info, queryParam *meteringv1alpha1.Query, priceInfo meteringclient.PriceInfo) (monitoring.Metrics, error)
MeteringHierarchy(user user.Info, queryParam *meteringv1alpha1.Query, priceInfo meteringclient.PriceInfo) (metering.ResourceStatistic, error)
CreateWorkspaceResourceQuota(workspace string, resourceQuota *quotav1alpha2.ResourceQuota) (*quotav1alpha2.ResourceQuota, error)
@@ -103,6 +111,7 @@ type Interface interface {
type tenantOperator struct {
am am.AccessManagementInterface
im im.IdentityManagementInterface
authorizer authorizer.Authorizer
k8sclient kubernetes.Interface
ksclient kubesphere.Interface
@@ -112,16 +121,13 @@ type tenantOperator struct {
auditing auditing.Interface
mo monitoring.MonitoringOperator
opRelease openpitrix.ReleaseInterface
clusterClient clusterclient.ClusterClients
}
func New(informers informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface, evtsClient eventsclient.Client, loggingClient loggingclient.Client, auditingclient auditingclient.Client, am am.AccessManagementInterface, authorizer authorizer.Authorizer, monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, stopCh <-chan struct{}) Interface {
var openpitrixRelease openpitrix.ReleaseInterface
if ksclient != nil {
openpitrixRelease = openpitrix.NewOpenpitrixOperator(informers, ksclient, nil, stopCh)
}
func New(informers informers.InformerFactory, k8sclient kubernetes.Interface, ksclient kubesphere.Interface, evtsClient eventsclient.Client, loggingClient loggingclient.Client, auditingclient auditingclient.Client, am am.AccessManagementInterface, im im.IdentityManagementInterface, authorizer authorizer.Authorizer, monitoringclient monitoringclient.Interface, resourceGetter *resourcev1alpha3.ResourceGetter, opClient openpitrix.Interface) Interface {
return &tenantOperator{
am: am,
im: im,
authorizer: authorizer,
resourceGetter: resourcesv1alpha3.NewResourceGetter(informers, nil),
k8sclient: k8sclient,
@@ -130,7 +136,8 @@ func New(informers informers.InformerFactory, k8sclient kubernetes.Interface, ks
lo: logging.NewLoggingOperator(loggingClient),
auditing: auditing.NewEventsOperator(auditingclient),
mo: monitoring.NewMonitoringOperator(monitoringclient, nil, k8sclient, informers, resourceGetter, nil),
opRelease: openpitrixRelease,
opRelease: opClient,
clusterClient: clusterclient.NewClusterClient(informers.KubeSphereSharedInformerFactory().Cluster().V1alpha1().Clusters()),
}
}
@@ -469,15 +476,111 @@ func (t *tenantOperator) PatchNamespace(workspace string, namespace *corev1.Name
return t.k8sclient.CoreV1().Namespaces().Patch(context.Background(), namespace.Name, types.MergePatchType, data, metav1.PatchOptions{})
}
func (t *tenantOperator) PatchWorkspaceTemplate(workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error) {
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Patch(context.Background(), workspace, types.MergePatchType, data, metav1.PatchOptions{})
func (t *tenantOperator) PatchWorkspaceTemplate(user user.Info, workspace string, data json.RawMessage) (*tenantv1alpha2.WorkspaceTemplate, error) {
var manageWorkspaceTemplateRequest bool
clusterNames := sets.NewString()
patches, err := jsonpatchutil.Parse(data)
if err != nil {
klog.Error(err)
return nil, err
}
if len(patches) > 0 {
for _, patch := range patches {
path, err := patch.Path()
if err != nil {
klog.Error(err)
return nil, err
}
// If the patch path targets the placement (clusters), just collect the cluster names and check cluster permissions later.
// Otherwise the request manages the workspace template itself, so check whether the user has permission to manage workspace templates.
if strings.HasPrefix(path, "/spec/placement") {
if patch.Kind() != "add" && patch.Kind() != "remove" {
err := errors.NewBadRequest("unsupported operation type")
klog.Error(err)
return nil, err
}
clusterValue := make(map[string]interface{})
err := jsonpatchutil.GetValue(patch, &clusterValue)
if err != nil {
klog.Error(err)
return nil, err
}
// if the placement is empty, the first patch must fill in the "clusters" field.
if cName := clusterValue["name"]; cName != nil {
cn, ok := cName.(string)
if ok {
clusterNames.Insert(cn)
}
} else if cluster := clusterValue["clusters"]; cluster != nil {
clusterReferences := []typesv1beta1.GenericClusterReference{}
err := mapstructure.Decode(cluster, &clusterReferences)
if err != nil {
klog.Error(err)
return nil, err
}
for _, v := range clusterReferences {
clusterNames.Insert(v.Name)
}
}
} else {
manageWorkspaceTemplateRequest = true
}
}
}
if manageWorkspaceTemplateRequest {
err := t.checkWorkspaceTemplatePermission(user, workspace)
if err != nil {
klog.Error(err)
return nil, err
}
}
if clusterNames.Len() > 0 {
err := t.checkClusterPermission(user, clusterNames.List())
if err != nil {
klog.Error(err)
return nil, err
}
}
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Patch(context.Background(), workspace, types.JSONPatchType, data, metav1.PatchOptions{})
}
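Because the handler now submits the body as `types.JSONPatchType`, the payload is an RFC 6902 operation array rather than a merge patch. A stdlib sketch of the classification the code above performs (the real code uses the internal `jsonpatchutil` package; `patchOp` and `classifyPatch` here are illustrative approximations):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// patchOp is a minimal RFC 6902 JSON Patch operation.
type patchOp struct {
	Op    string          `json:"op"`
	Path  string          `json:"path"`
	Value json.RawMessage `json:"value"`
}

// classifyPatch splits a JSON Patch document the way PatchWorkspaceTemplate
// does: placement ops yield cluster names to authorize later, and any other
// path means the request manages the workspace template itself.
func classifyPatch(data []byte) (clusters []string, managesTemplate bool, err error) {
	var ops []patchOp
	if err = json.Unmarshal(data, &ops); err != nil {
		return nil, false, err
	}
	for _, op := range ops {
		if !strings.HasPrefix(op.Path, "/spec/placement") {
			managesTemplate = true
			continue
		}
		var v struct {
			Name string `json:"name"`
		}
		if json.Unmarshal(op.Value, &v) == nil && v.Name != "" {
			clusters = append(clusters, v.Name)
		}
	}
	return clusters, managesTemplate, nil
}

func main() {
	patch := []byte(`[
		{"op": "add", "path": "/spec/placement/clusters/-", "value": {"name": "member-1"}},
		{"op": "replace", "path": "/spec/template/spec/manager", "value": "admin"}
	]`)
	clusters, manages, err := classifyPatch(patch)
	fmt.Println(clusters, manages, err) // [member-1] true <nil>
}
```

The split matters for authorization: adding or removing a placement cluster only requires permission on that cluster, while any other field requires the global workspace-template permission.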
func (t *tenantOperator) CreateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
func (t *tenantOperator) CreateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
if len(workspace.Spec.Placement.Clusters) != 0 {
clusters := make([]string, 0)
for _, v := range workspace.Spec.Placement.Clusters {
clusters = append(clusters, v.Name)
}
err := t.checkClusterPermission(user, clusters)
if err != nil {
klog.Error(err)
return nil, err
}
}
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Create(context.Background(), workspace, metav1.CreateOptions{})
}
func (t *tenantOperator) UpdateWorkspaceTemplate(workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
func (t *tenantOperator) UpdateWorkspaceTemplate(user user.Info, workspace *tenantv1alpha2.WorkspaceTemplate) (*tenantv1alpha2.WorkspaceTemplate, error) {
if len(workspace.Spec.Placement.Clusters) != 0 {
clusters := make([]string, 0)
for _, v := range workspace.Spec.Placement.Clusters {
clusters = append(clusters, v.Name)
}
err := t.checkClusterPermission(user, clusters)
if err != nil {
klog.Error(err)
return nil, err
}
}
return t.ksclient.TenantV1alpha2().WorkspaceTemplates().Update(context.Background(), workspace, metav1.UpdateOptions{})
}
@@ -503,7 +606,7 @@ func (t *tenantOperator) ListWorkspaceClusters(workspaceName string) (*api.ListR
for _, cluster := range workspace.Spec.Placement.Clusters {
obj, err := t.resourceGetter.Get(clusterv1alpha1.ResourcesPluralCluster, "", cluster.Name)
if err != nil {
klog.Error(err)
klog.Warning(err)
if errors.IsNotFound(err) {
continue
}
@@ -532,89 +635,69 @@ func (t *tenantOperator) ListWorkspaceClusters(workspaceName string) (*api.ListR
return &api.ListResult{Items: []interface{}{}, TotalItems: 0}, nil
}
func (t *tenantOperator) ListClusters(user user.Info) (*api.ListResult, error) {
func (t *tenantOperator) ListClusters(user user.Info, queryParam *query.Query) (*api.ListResult, error) {
listClustersInGlobalScope := authorizer.AttributesRecord{
User: user,
Verb: "list",
APIGroup: "cluster.kubesphere.io",
Resource: "clusters",
ResourceScope: request.GlobalScope,
ResourceRequest: true,
}
allowedListClustersInGlobalScope, _, err := t.authorizer.Authorize(listClustersInGlobalScope)
if err != nil {
klog.Error(err)
return nil, err
return nil, fmt.Errorf("failed to authorize: %s", err)
}
listWorkspacesInGlobalScope := authorizer.AttributesRecord{
User: user,
Verb: "list",
Resource: "workspaces",
ResourceScope: request.GlobalScope,
ResourceRequest: true,
if allowedListClustersInGlobalScope == authorizer.DecisionAllow {
return t.resourceGetter.List(clusterv1alpha1.ResourcesPluralCluster, "", queryParam)
}
allowedListWorkspacesInGlobalScope, _, err := t.authorizer.Authorize(listWorkspacesInGlobalScope)
userDetail, err := t.im.DescribeUser(user.GetName())
if err != nil {
klog.Error(err)
return nil, err
return nil, fmt.Errorf("failed to describe user: %s", err)
}
if allowedListClustersInGlobalScope == authorizer.DecisionAllow ||
allowedListWorkspacesInGlobalScope == authorizer.DecisionAllow {
result, err := t.resourceGetter.List(clusterv1alpha1.ResourcesPluralCluster, "", query.New())
grantedClustersAnnotation := userDetail.Annotations[iamv1alpha2.GrantedClustersAnnotation]
var grantedClusters sets.String
if len(grantedClustersAnnotation) > 0 {
grantedClusters = sets.NewString(strings.Split(grantedClustersAnnotation, ",")...)
} else {
grantedClusters = sets.NewString()
}
var clusters []*clusterv1alpha1.Cluster
for _, grantedCluster := range grantedClusters.List() {
obj, err := t.resourceGetter.Get(clusterv1alpha1.ResourcesPluralCluster, "", grantedCluster)
if err != nil {
klog.Error(err)
return nil, err
}
return result, nil
}
workspaceRoleBindings, err := t.am.ListWorkspaceRoleBindings(user.GetName(), user.GetGroups(), "")
if err != nil {
klog.Error(err)
return nil, err
}
clusters := map[string]*clusterv1alpha1.Cluster{}
for _, roleBinding := range workspaceRoleBindings {
workspaceName := roleBinding.Labels[tenantv1alpha1.WorkspaceLabel]
workspace, err := t.DescribeWorkspaceTemplate(workspaceName)
if err != nil {
klog.Error(err)
return nil, err
}
for _, grantedCluster := range workspace.Spec.Placement.Clusters {
// skip if the cluster already exists
if clusters[grantedCluster.Name] != nil {
if errors.IsNotFound(err) {
continue
}
obj, err := t.resourceGetter.Get(clusterv1alpha1.ResourcesPluralCluster, "", grantedCluster.Name)
if err != nil {
klog.Error(err)
if errors.IsNotFound(err) {
continue
}
return nil, err
}
cluster := obj.(*clusterv1alpha1.Cluster)
clusters[cluster.Name] = cluster
return nil, fmt.Errorf("failed to fetch cluster: %s", err)
}
cluster := obj.(*clusterv1alpha1.Cluster)
clusters = append(clusters, cluster)
}
items := make([]interface{}, 0)
items := make([]runtime.Object, 0)
for _, cluster := range clusters {
items = append(items, cluster)
}
return &api.ListResult{Items: items, TotalItems: len(items)}, nil
// apply additional labelSelector
if queryParam.LabelSelector != "" {
queryParam.Filters[query.FieldLabel] = query.Value(queryParam.LabelSelector)
}
// use default pagination search logic
result := resources.DefaultList(items, queryParam, func(left runtime.Object, right runtime.Object, field query.Field) bool {
return resources.DefaultObjectMetaCompare(left.(*clusterv1alpha1.Cluster).ObjectMeta, right.(*clusterv1alpha1.Cluster).ObjectMeta, field)
}, func(workspace runtime.Object, filter query.Filter) bool {
return resources.DefaultObjectMetaFilter(workspace.(*clusterv1alpha1.Cluster).ObjectMeta, filter)
})
return result, nil
}
func (t *tenantOperator) DeleteWorkspaceTemplate(workspace string, opts metav1.DeleteOptions) error {
@@ -1100,6 +1183,16 @@ func (t *tenantOperator) MeteringHierarchy(user user.Info, queryParam *meteringv
return resourceStats, nil
}
func (t *tenantOperator) getClusterRoleBindingsByUser(clusterName, user string) (*rbacv1.ClusterRoleBindingList, error) {
kubernetesClientSet, err := t.clusterClient.GetKubernetesClientSet(clusterName)
if err != nil {
return nil, err
}
return kubernetesClientSet.RbacV1().ClusterRoleBindings().
List(context.Background(),
metav1.ListOptions{LabelSelector: labels.FormatLabels(map[string]string{"iam.kubesphere.io/user-ref": user})})
}
func contains(objects []runtime.Object, object runtime.Object) bool {
for _, item := range objects {
if item == object {
@@ -1125,3 +1218,78 @@ func stringContains(str string, subStrs []string) bool {
}
return false
}
func (t *tenantOperator) checkWorkspaceTemplatePermission(user user.Info, workspace string) error {
deleteWST := authorizer.AttributesRecord{
User: user,
Verb: authorizer.VerbDelete,
APIGroup: tenantv1alpha2.SchemeGroupVersion.Group,
APIVersion: tenantv1alpha2.SchemeGroupVersion.Version,
Resource: tenantv1alpha2.ResourcePluralWorkspaceTemplate,
ResourceRequest: true,
ResourceScope: request.GlobalScope,
}
authorize, reason, err := t.authorizer.Authorize(deleteWST)
if err != nil {
return err
}
if authorize != authorizer.DecisionAllow {
return errors.NewForbidden(tenantv1alpha2.Resource(tenantv1alpha2.ResourcePluralWorkspaceTemplate), workspace, fmt.Errorf("%s", reason))
}
return nil
}
func (t *tenantOperator) checkClusterPermission(user user.Info, clusters []string) error {
// Checking whether the user can manage a cluster involves two authorization checks:
// first check whether the user has the relevant global permissions,
// then check whether the user has the relevant cluster permissions inside the target cluster.
for _, clusterName := range clusters {
cluster, err := t.ksclient.ClusterV1alpha1().Clusters().Get(context.Background(), clusterName, metav1.GetOptions{})
if err != nil {
return err
}
if cluster.Labels["cluster.kubesphere.io/visibility"] == "public" {
continue
}
deleteCluster := authorizer.AttributesRecord{
User: user,
Verb: authorizer.VerbDelete,
APIGroup: clusterv1alpha1.SchemeGroupVersion.Group,
APIVersion: clusterv1alpha1.SchemeGroupVersion.Version,
Resource: clusterv1alpha1.ResourcesPluralCluster,
Cluster: clusterName,
ResourceRequest: true,
ResourceScope: request.GlobalScope,
}
authorize, _, err := t.authorizer.Authorize(deleteCluster)
if err != nil {
return err
}
if authorize == authorizer.DecisionAllow {
continue
}
list, err := t.getClusterRoleBindingsByUser(clusterName, user.GetName())
if err != nil {
return err
}
allowed := false
for _, clusterRolebinding := range list.Items {
if clusterRolebinding.RoleRef.Name == iamv1alpha2.ClusterAdmin {
allowed = true
break
}
}
if !allowed {
return errors.NewForbidden(clusterv1alpha1.Resource(clusterv1alpha1.ResourcesPluralCluster), clusterName, fmt.Errorf("user is not allowed to use the cluster %s", clusterName))
}
}
return nil
}
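The decision order in checkClusterPermission can be summarized as: public clusters are always usable, a global permission short-circuits the per-cluster check, and otherwise a cluster-admin rolebinding inside the target cluster is required. A stdlib sketch of that fall-through (the struct and function names are illustrative, not the real types):

```go
package main

import "fmt"

// clusterAccess captures the three signals checkClusterPermission consults.
type clusterAccess struct {
	Public        bool // cluster.kubesphere.io/visibility=public label
	GlobalAllowed bool // global permission granted by the authorizer
	ClusterAdmin  bool // cluster-admin rolebinding inside the target cluster
}

// checkCluster allows use of the cluster if any one signal grants access.
func checkCluster(name string, a clusterAccess) error {
	if a.Public || a.GlobalAllowed || a.ClusterAdmin {
		return nil
	}
	return fmt.Errorf("user is not allowed to use the cluster %s", name)
}

func main() {
	fmt.Println(checkCluster("member-1", clusterAccess{Public: true}))  // <nil>
	fmt.Println(checkCluster("member-2", clusterAccess{}) != nil)       // true
}
```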

View File

@@ -544,5 +544,5 @@ func prepare() Interface {
amOperator := am.NewOperator(ksClient, k8sClient, fakeInformerFactory, nil)
authorizer := rbac.NewRBACAuthorizer(amOperator)
return New(fakeInformerFactory, k8sClient, ksClient, nil, nil, nil, amOperator, authorizer, nil, nil, nil)
return New(fakeInformerFactory, k8sClient, ksClient, nil, nil, nil, amOperator, nil, authorizer, nil, nil, nil)
}

View File

@@ -44,6 +44,8 @@ import (
const (
// Time allowed to write a message to the peer.
writeWait = 10 * time.Second
// ctrl+d to close terminal.
endOfTransmission = "\u0004"
)
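The new `endOfTransmission` constant explains the Read changes below: on a websocket error the session now returns the EOT byte (ctrl+d) instead of zero bytes, so the remote shell sees an end-of-input and exits rather than remotecommand spinning on empty reads. A minimal stdlib sketch of that error path (`readErr` is a hypothetical helper, not the real method):

```go
package main

import "fmt"

const endOfTransmission = "\u0004" // ctrl+d, a single 0x04 byte

// readErr mimics the changed TerminalSession.Read error path: hand the EOT
// byte to the caller along with the error so the remote process terminates.
func readErr(p []byte, cause error) (int, error) {
	return copy(p, endOfTransmission), cause
}

func main() {
	buf := make([]byte, 16)
	n, err := readErr(buf, fmt.Errorf("websocket closed"))
	fmt.Println(n, buf[0] == 0x04, err) // 1 true websocket closed
}
```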
// PtyHandler is what remotecommand expects from a pty
@@ -76,11 +78,14 @@ type TerminalMessage struct {
Rows, Cols uint16
}
// TerminalSize handles pty->process resize events
// Next handles pty->process resize events
// Called in a loop from remotecommand as long as the process is running
func (t TerminalSession) Next() *remotecommand.TerminalSize {
select {
case size := <-t.sizeChan:
if size.Height == 0 && size.Width == 0 {
return nil
}
return &size
}
}
@@ -92,7 +97,7 @@ func (t TerminalSession) Read(p []byte) (int, error) {
var msg TerminalMessage
err := t.conn.ReadJSON(&msg)
if err != nil {
return 0, err
return copy(p, endOfTransmission), err
}
switch msg.Op {
@@ -102,7 +107,7 @@ func (t TerminalSession) Read(p []byte) (int, error) {
t.sizeChan <- remotecommand.TerminalSize{Width: msg.Cols, Height: msg.Rows}
return 0, nil
default:
return 0, fmt.Errorf("unknown message type '%s'", msg.Op)
return copy(p, endOfTransmission), fmt.Errorf("unknown message type '%s'", msg.Op)
}
}
@@ -145,6 +150,7 @@ func (t TerminalSession) Toast(p string) error {
// For now the status code is unused and reason is shown to the user (unless "")
func (t TerminalSession) Close(status uint32, reason string) {
klog.Warning(status, reason)
close(t.sizeChan)
t.conn.Close()
}
@@ -211,7 +217,7 @@ func (n *NodeTerminaler) getNSEnterPod() (*v1.Pod, error) {
pod, err := n.client.CoreV1().Pods(n.Namespace).Get(context.Background(), n.PodName, metav1.GetOptions{})
if err != nil || (pod.Status.Phase != v1.PodRunning && pod.Status.Phase != v1.PodPending) {
//pod has timed out, but has not been cleaned up
// pod has timed out, but has not been cleaned up
if pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed {
err := n.client.CoreV1().Pods(n.Namespace).Delete(context.Background(), n.PodName, metav1.DeleteOptions{})
if err != nil {
@@ -324,7 +330,7 @@ func isValidShell(validShells []string, shell string) bool {
func (t *terminaler) HandleSession(shell, namespace, podName, containerName string, conn *websocket.Conn) {
var err error
validShells := []string{"sh", "bash"}
validShells := []string{"bash", "sh"}
session := &TerminalSession{conn: conn, sizeChan: make(chan remotecommand.TerminalSize)}

View File

@@ -21,7 +21,6 @@ import (
"encoding/json"
"io/ioutil"
"net/http"
"net/url"
"reflect"
"testing"
@@ -58,7 +57,7 @@ func TestClient_Get(t *testing.T) {
Strategy: AuthStrategyAnonymous,
cache: nil,
client: &MockClient{
requestResult: "fake",
RequestResult: "fake",
},
ServiceToken: "token",
Host: "http://kiali.istio-system.svc",
@@ -76,8 +75,8 @@ func TestClient_Get(t *testing.T) {
Strategy: AuthStrategyToken,
cache: nil,
client: &MockClient{
tokenResult: token,
requestResult: "fake",
TokenResult: token,
RequestResult: "fake",
},
ServiceToken: "token",
Host: "http://kiali.istio-system.svc",
@@ -95,8 +94,8 @@ func TestClient_Get(t *testing.T) {
Strategy: AuthStrategyToken,
cache: cache.NewSimpleCache(),
client: &MockClient{
tokenResult: token,
requestResult: "fake",
TokenResult: token,
RequestResult: "fake",
},
ServiceToken: "token",
Host: "http://kiali.istio-system.svc",
@@ -129,22 +128,3 @@ func TestClient_Get(t *testing.T) {
})
}
}
type MockClient struct {
tokenResult []byte
requestResult string
}
func (c *MockClient) Do(req *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader([]byte(c.requestResult))),
}, nil
}
func (c *MockClient) PostForm(url string, data url.Values) (resp *http.Response, err error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader(c.tokenResult)),
}, nil
}

View File

@@ -0,0 +1,27 @@
package kiali
import (
"bytes"
"io/ioutil"
"net/http"
"net/url"
)
type MockClient struct {
TokenResult []byte
RequestResult string
}
func (c *MockClient) Do(req *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader([]byte(c.RequestResult))),
}, nil
}
func (c *MockClient) PostForm(url string, data url.Values) (resp *http.Response, err error) {
return &http.Response{
StatusCode: 200,
Body: ioutil.NopCloser(bytes.NewReader(c.TokenResult)),
}, nil
}

View File

@@ -177,7 +177,7 @@ var promQLTemplates = map[string]string{
"ingress_success_rate": `sum(rate(nginx_ingress_controller_requests{$1,$2,status!~"[4-5].*"}[$3])) / sum(rate(nginx_ingress_controller_requests{$1,$2}[$3]))`,
"ingress_request_duration_average": `sum_over_time(nginx_ingress_controller_request_duration_seconds_sum{$1,$2}[$3])/sum_over_time(nginx_ingress_controller_request_duration_seconds_count{$1,$2}[$3])`,
"ingress_request_duration_50percentage": `histogram_quantile(0.50, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_duration_95percentage": `histogram_quantile(0.90, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_duration_95percentage": `histogram_quantile(0.95, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_duration_99percentage": `histogram_quantile(0.99, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{$1,$2}[$3])))`,
"ingress_request_volume": `round(sum(irate(nginx_ingress_controller_requests{$1,$2}[$3])), 0.001)`,
"ingress_request_volume_by_ingress": `round(sum(irate(nginx_ingress_controller_requests{$1,$2}[$3])) by (ingress), 0.001)`,
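With the quantile corrected from 0.90 to 0.95, the template now computes a true 95th percentile. Expanded with illustrative label matchers and a 5m window (the label names here are examples, not the template's actual substitutions), the query reads:

```
histogram_quantile(0.95, sum by (le) (rate(nginx_ingress_controller_request_duration_seconds_bucket{exported_namespace="demo", ingress="web"}[5m])))
```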


@@ -99,6 +99,9 @@ func MergeRepoIndex(repo *v1alpha1.HelmRepo, index *helmrepo.IndexFile, existsSa
allAppNames := make(map[string]struct{}, len(index.Entries))
for name, versions := range index.Entries {
if len(versions) == 0 {
continue
}
// add new applications
if application, exists := saved.Applications[name]; !exists {
application = &Application{


@@ -50,5 +50,102 @@ func TestLoadRepo(t *testing.T) {
_ = chartData
break
}
}
var indexData1 = `
apiVersion: v1
entries:
apisix: []
apisix-dashboard:
- apiVersion: v2
appVersion: 2.9.0
created: "2021-11-15T08:23:00.343784368Z"
description: A Helm chart for Apache APISIX Dashboard
digest: 76f794b1300f7bfb756ede352fe71eb863b89f1995b495e8b683990709e310ad
icon: https://apache.org/logos/res/apisix/apisix.png
maintainers:
- email: zhangjintao@apache.org
name: tao12345666333
name: apisix-dashboard
type: application
urls:
- https://charts.kubesphere.io/main/apisix-dashboard-0.3.0.tgz
version: 0.3.0
`
var indexData2 = `
apiVersion: v1
entries:
apisix:
- apiVersion: v2
appVersion: 2.10.0
created: "2021-11-15T08:23:00.343234584Z"
dependencies:
- condition: etcd.enabled
name: etcd
repository: https://charts.bitnami.com/bitnami
version: 6.2.6
- alias: dashboard
condition: dashboard.enabled
name: apisix-dashboard
repository: https://charts.apiseven.com
version: 0.3.0
- alias: ingress-controller
condition: ingress-controller.enabled
name: apisix-ingress-controller
repository: https://charts.apiseven.com
version: 0.8.0
description: A Helm chart for Apache APISIX
digest: fed38a11c0fb54d385144767227e43cb2961d1b50d36ea207fdd122bddd3de28
icon: https://apache.org/logos/res/apisix/apisix.png
maintainers:
- email: zhangjintao@apache.org
name: tao12345666333
name: apisix
type: application
urls:
- https://charts.kubesphere.io/main/apisix-0.7.2.tgz
version: 0.7.2
apisix-dashboard:
- apiVersion: v2
appVersion: 2.9.0
created: "2021-11-15T08:23:00.343784368Z"
description: A Helm chart for Apache APISIX Dashboard
digest: 76f794b1300f7bfb756ede352fe71eb863b89f1995b495e8b683990709e310ad
icon: https://apache.org/logos/res/apisix/apisix.png
maintainers:
- email: zhangjintao@apache.org
name: tao12345666333
name: apisix-dashboard
type: application
urls:
- https://charts.kubesphere.io/main/apisix-dashboard-0.3.0.tgz
version: 0.3.0
`
func TestMergeRepo(t *testing.T) {
repoIndex1, err := loadIndex([]byte(indexData1))
if err != nil {
t.Errorf("failed to load repo index")
t.Failed()
}
existsSavedIndex := &SavedIndex{}
repoCR := &v1alpha1.HelmRepo{}
savedIndex1 := MergeRepoIndex(repoCR, repoIndex1, existsSavedIndex)
if len(savedIndex1.Applications) != 1 {
t.Errorf("failed to merge repo index with empty repo")
t.Failed()
}
repoIndex2, err := loadIndex([]byte(indexData2))
if err != nil {
t.Errorf("failed to load repo index")
t.Failed()
}
savedIndex2 := MergeRepoIndex(repoCR, repoIndex2, savedIndex1)
if len(savedIndex2.Applications) != 2 {
t.Errorf("failed to merge two repo indexes")
t.Failed()
}
}


@@ -23,6 +23,7 @@ import (
"sync"
corev1 "k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
@@ -30,6 +31,7 @@ import (
clusterv1alpha1 "kubesphere.io/api/cluster/v1alpha1"
kubesphere "kubesphere.io/kubesphere/pkg/client/clientset/versioned"
clusterinformer "kubesphere.io/kubesphere/pkg/client/informers/externalversions/cluster/v1alpha1"
clusterlister "kubesphere.io/kubesphere/pkg/client/listers/cluster/v1alpha1"
)
@@ -54,35 +56,30 @@ type ClusterClients interface {
GetClusterKubeconfig(string) (string, error)
Get(string) (*clusterv1alpha1.Cluster, error)
GetInnerCluster(string) *innerCluster
GetKubernetesClientSet(string) (*kubernetes.Clientset, error)
GetKubeSphereClientSet(string) (*kubesphere.Clientset, error)
}
var (
once sync.Once
c *clusterClients
)
func NewClusterClient(clusterInformer clusterinformer.ClusterInformer) ClusterClients {
once.Do(func() {
c = &clusterClients{
innerClusters: make(map[string]*innerCluster),
clusterLister: clusterInformer.Lister(),
}
c := &clusterClients{
innerClusters: make(map[string]*innerCluster),
clusterLister: clusterInformer.Lister(),
}
clusterInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
c.addCluster(obj)
},
UpdateFunc: func(oldObj, newObj interface{}) {
oldCluster := oldObj.(*clusterv1alpha1.Cluster)
newCluster := newObj.(*clusterv1alpha1.Cluster)
if !reflect.DeepEqual(oldCluster.Spec, newCluster.Spec) {
c.addCluster(newObj)
}
},
DeleteFunc: func(obj interface{}) {
c.removeCluster(obj)
},
})
clusterInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
c.addCluster(obj)
},
UpdateFunc: func(oldObj, newObj interface{}) {
oldCluster := oldObj.(*clusterv1alpha1.Cluster)
newCluster := newObj.(*clusterv1alpha1.Cluster)
if !reflect.DeepEqual(oldCluster.Spec, newCluster.Spec) {
c.addCluster(newObj)
}
},
DeleteFunc: func(obj interface{}) {
c.removeCluster(obj)
},
})
return c
}
@@ -189,3 +186,45 @@ func (c *clusterClients) IsHostCluster(cluster *clusterv1alpha1.Cluster) bool {
}
return false
}
func (c *clusterClients) GetKubeSphereClientSet(name string) (*kubesphere.Clientset, error) {
kubeconfig, err := c.GetClusterKubeconfig(name)
if err != nil {
return nil, err
}
restConfig, err := newRestConfigFromString(kubeconfig)
if err != nil {
return nil, err
}
clientSet, err := kubesphere.NewForConfig(restConfig)
if err != nil {
return nil, err
}
return clientSet, nil
}
func (c *clusterClients) GetKubernetesClientSet(name string) (*kubernetes.Clientset, error) {
kubeconfig, err := c.GetClusterKubeconfig(name)
if err != nil {
return nil, err
}
restConfig, err := newRestConfigFromString(kubeconfig)
if err != nil {
return nil, err
}
clientSet, err := kubernetes.NewForConfig(restConfig)
if err != nil {
return nil, err
}
return clientSet, nil
}
func newRestConfigFromString(kubeconfig string) (*rest.Config, error) {
clientConfig, err := clientcmd.NewClientConfigFromBytes([]byte(kubeconfig))
if err != nil {
return nil, err
}
return clientConfig.ClientConfig()
}


@@ -0,0 +1,22 @@
package jsonpatchutil
import (
jsonpatch "github.com/evanphx/json-patch"
"github.com/mitchellh/mapstructure"
)
func Parse(raw []byte) (jsonpatch.Patch, error) {
return jsonpatch.DecodePatch(raw)
}
func GetValue(patch jsonpatch.Operation, value interface{}) error {
valueInterface, err := patch.ValueInterface()
if err != nil {
return err
}
if err := mapstructure.Decode(valueInterface, value); err != nil {
return err
}
return nil
}


@@ -17,7 +17,9 @@ limitations under the License.
package reflectutils
import (
"fmt"
"reflect"
"unsafe"
)
func In(value interface{}, container interface{}) bool {
@@ -60,3 +62,15 @@ func Override(left interface{}, right interface{}) {
}
}
}
func SetUnExportedField(ptr interface{}, fieldName string, newFieldValue interface{}) (err error) {
v := reflect.ValueOf(ptr).Elem().FieldByName(fieldName)
v = reflect.NewAt(v.Type(), unsafe.Pointer(v.UnsafeAddr())).Elem()
nv := reflect.ValueOf(newFieldValue)
if v.Kind() != nv.Kind() {
return fmt.Errorf("cannot set field %s: kind mismatch (%s != %s)", fieldName, nv.Kind(), v.Kind())
}
v.Set(nv)
return nil
}


@@ -1,4 +1,3 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*


@@ -58,6 +58,7 @@ const (
GlobalRoleAnnotation = "iam.kubesphere.io/globalrole"
WorkspaceRoleAnnotation = "iam.kubesphere.io/workspacerole"
ClusterRoleAnnotation = "iam.kubesphere.io/clusterrole"
GrantedClustersAnnotation = "iam.kubesphere.io/granted-clusters"
UninitializedAnnotation = "iam.kubesphere.io/uninitialized"
LastPasswordChangeTimeAnnotation = "iam.kubesphere.io/last-password-change-time"
RoleAnnotation = "iam.kubesphere.io/role"


@@ -1,58 +0,0 @@
/*
Copyright 2020 KubeSphere Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1beta1
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
ResourcePluralFederatedResourceQuota = "federatedresourcequotas"
ResourceSingularFederatedResourceQuota = "federatedresourcequota"
FederatedResourceQuotaKind = "FederatedResourceQuota"
)
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +k8s:openapi-gen=true
type FederatedResourceQuota struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec FederatedResourceQuotaSpec `json:"spec"`
Status *GenericFederatedStatus `json:"status,omitempty"`
}
type FederatedResourceQuotaSpec struct {
Template ResourceQuotaTemplate `json:"template"`
Placement GenericPlacementFields `json:"placement"`
Overrides []GenericOverrideItem `json:"overrides,omitempty"`
}
type ResourceQuotaTemplate struct {
Spec corev1.ResourceQuotaSpec `json:"spec,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// FederatedResourceQuotaList contains a list of federatedresourcequotalists
type FederatedResourceQuotaList struct {
metav1.TypeMeta `json:",inline"`
metav1.ListMeta `json:"metadata,omitempty"`
Items []FederatedResourceQuota `json:"items"`
}


@@ -67,8 +67,6 @@ func init() {
&FederatedNotificationReceiverList{},
&FederatedPersistentVolumeClaim{},
&FederatedPersistentVolumeClaimList{},
&FederatedResourceQuota{},
&FederatedResourceQuotaList{},
&FederatedSecret{},
&FederatedSecretList{},
&FederatedService{},


@@ -1425,96 +1425,6 @@ func (in *FederatedPersistentVolumeClaimSpec) DeepCopy() *FederatedPersistentVol
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FederatedResourceQuota) DeepCopyInto(out *FederatedResourceQuota) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
if in.Status != nil {
in, out := &in.Status, &out.Status
*out = new(GenericFederatedStatus)
(*in).DeepCopyInto(*out)
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FederatedResourceQuota.
func (in *FederatedResourceQuota) DeepCopy() *FederatedResourceQuota {
if in == nil {
return nil
}
out := new(FederatedResourceQuota)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *FederatedResourceQuota) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FederatedResourceQuotaList) DeepCopyInto(out *FederatedResourceQuotaList) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ListMeta.DeepCopyInto(&out.ListMeta)
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]FederatedResourceQuota, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FederatedResourceQuotaList.
func (in *FederatedResourceQuotaList) DeepCopy() *FederatedResourceQuotaList {
if in == nil {
return nil
}
out := new(FederatedResourceQuotaList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *FederatedResourceQuotaList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FederatedResourceQuotaSpec) DeepCopyInto(out *FederatedResourceQuotaSpec) {
*out = *in
in.Template.DeepCopyInto(&out.Template)
in.Placement.DeepCopyInto(&out.Placement)
if in.Overrides != nil {
in, out := &in.Overrides, &out.Overrides
*out = make([]GenericOverrideItem, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FederatedResourceQuotaSpec.
func (in *FederatedResourceQuotaSpec) DeepCopy() *FederatedResourceQuotaSpec {
if in == nil {
return nil
}
out := new(FederatedResourceQuotaSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *FederatedSecret) DeepCopyInto(out *FederatedSecret) {
*out = *in
@@ -2543,23 +2453,6 @@ func (in *PersistentVolumeClaimTemplate) DeepCopy() *PersistentVolumeClaimTempla
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ResourceQuotaTemplate) DeepCopyInto(out *ResourceQuotaTemplate) {
*out = *in
in.Spec.DeepCopyInto(&out.Spec)
return
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ResourceQuotaTemplate.
func (in *ResourceQuotaTemplate) DeepCopy() *ResourceQuotaTemplate {
if in == nil {
return nil
}
out := new(ResourceQuotaTemplate)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *SecretTemplate) DeepCopyInto(out *SecretTemplate) {
*out = *in


@@ -121,7 +121,7 @@ func generateSwaggerJson() []byte {
informerFactory.KubeSphereSharedInformerFactory(), "", "", ""))
urlruntime.Must(kapisdevops.AddToContainer(container, ""))
urlruntime.Must(iamv1alpha2.AddToContainer(container, nil, nil, group.New(informerFactory, clientsets.KubeSphere(), clientsets.Kubernetes()), nil))
urlruntime.Must(monitoringv1alpha3.AddToContainer(container, clientsets.Kubernetes(), nil, nil, informerFactory, nil, nil, nil, nil))
urlruntime.Must(monitoringv1alpha3.AddToContainer(container, clientsets.Kubernetes(), nil, nil, informerFactory, nil, nil))
urlruntime.Must(openpitrixv1.AddToContainer(container, informerFactory, fake.NewSimpleClientset(), nil, nil))
urlruntime.Must(openpitrixv2.AddToContainer(container, informerFactory, fake.NewSimpleClientset(), nil))
urlruntime.Must(operationsv1alpha2.AddToContainer(container, clientsets.Kubernetes()))

vendor/github.com/google/gops/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2016 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/google/gops/agent/agent.go generated vendored Normal file

@@ -0,0 +1,285 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package agent provides hooks programs can register to retrieve
// diagnostics data by using gops.
package agent
import (
"bufio"
"context"
"encoding/binary"
"fmt"
"io"
"io/ioutil"
"net"
"os"
gosignal "os/signal"
"path/filepath"
"runtime"
"runtime/debug"
"runtime/pprof"
"runtime/trace"
"strconv"
"strings"
"sync"
"syscall"
"time"
"github.com/google/gops/internal"
"github.com/google/gops/signal"
)
const defaultAddr = "127.0.0.1:0"
var (
mu sync.Mutex
portfile string
listener net.Listener
units = []string{" bytes", "KB", "MB", "GB", "TB", "PB"}
)
// Options allows configuring the started agent.
type Options struct {
// Addr is the host:port the agent will be listening at.
// Optional.
Addr string
// ConfigDir is the directory to store the configuration file,
// PID of the gops process, filename, port as well as content.
// Optional.
ConfigDir string
// ShutdownCleanup automatically cleans up resources if the
// running process receives an interrupt. Otherwise, users
// can call Close before shutting down.
// Optional.
ShutdownCleanup bool
// ReuseSocketAddrAndPort determines whether the SO_REUSEADDR and
// SO_REUSEPORT socket options should be set on the listening socket of
// the agent. This option is only effective on unix-like OSes and if
// Addr is set to a fixed host:port.
// Optional.
ReuseSocketAddrAndPort bool
}
// Listen starts the gops agent on a host process. Once agent started, users
// can use the advanced gops features. The agent will listen to Interrupt
// signals and exit the process, if you need to perform further work on the
// Interrupt signal use the options parameter to configure the agent
// accordingly.
//
// Note: The agent exposes an endpoint via a TCP connection that can be used by
// any program on the system. Review your security requirements before starting
// the agent.
func Listen(opts Options) error {
mu.Lock()
defer mu.Unlock()
if portfile != "" {
return fmt.Errorf("gops: agent already listening at: %v", listener.Addr())
}
// new
gopsdir := opts.ConfigDir
if gopsdir == "" {
cfgDir, err := internal.ConfigDir()
if err != nil {
return err
}
gopsdir = cfgDir
}
err := os.MkdirAll(gopsdir, os.ModePerm)
if err != nil {
return err
}
if opts.ShutdownCleanup {
gracefulShutdown()
}
addr := opts.Addr
if addr == "" {
addr = defaultAddr
}
var lc net.ListenConfig
if opts.ReuseSocketAddrAndPort {
lc.Control = setReuseAddrAndPortSockopts
}
listener, err = lc.Listen(context.Background(), "tcp", addr)
if err != nil {
return err
}
port := listener.Addr().(*net.TCPAddr).Port
portfile = filepath.Join(gopsdir, strconv.Itoa(os.Getpid()))
err = ioutil.WriteFile(portfile, []byte(strconv.Itoa(port)), os.ModePerm)
if err != nil {
return err
}
go listen()
return nil
}
func listen() {
buf := make([]byte, 1)
for {
fd, err := listener.Accept()
if err != nil {
// No great way to check for this, see https://golang.org/issues/4373.
if !strings.Contains(err.Error(), "use of closed network connection") {
fmt.Fprintf(os.Stderr, "gops: %v\n", err)
}
if netErr, ok := err.(net.Error); ok && !netErr.Temporary() {
break
}
continue
}
if _, err := fd.Read(buf); err != nil {
fmt.Fprintf(os.Stderr, "gops: %v\n", err)
continue
}
if err := handle(fd, buf); err != nil {
fmt.Fprintf(os.Stderr, "gops: %v\n", err)
continue
}
fd.Close()
}
}
func gracefulShutdown() {
c := make(chan os.Signal, 1)
gosignal.Notify(c, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
go func() {
// cleanup the socket on shutdown.
sig := <-c
Close()
ret := 1
if sig == syscall.SIGTERM {
ret = 0
}
os.Exit(ret)
}()
}
// Close closes the agent, removing temporary files and closing the TCP listener.
// If no agent is listening, Close does nothing.
func Close() {
mu.Lock()
defer mu.Unlock()
if portfile != "" {
os.Remove(portfile)
portfile = ""
}
if listener != nil {
listener.Close()
}
}
func formatBytes(val uint64) string {
var i int
var target uint64
for i = range units {
target = 1 << uint(10*(i+1))
if val < target {
break
}
}
if i > 0 {
return fmt.Sprintf("%0.2f%s (%d bytes)", float64(val)/(float64(target)/1024), units[i], val)
}
return fmt.Sprintf("%d bytes", val)
}
func handle(conn io.ReadWriter, msg []byte) error {
switch msg[0] {
case signal.StackTrace:
return pprof.Lookup("goroutine").WriteTo(conn, 2)
case signal.GC:
runtime.GC()
_, err := conn.Write([]byte("ok"))
return err
case signal.MemStats:
var s runtime.MemStats
runtime.ReadMemStats(&s)
fmt.Fprintf(conn, "alloc: %v\n", formatBytes(s.Alloc))
fmt.Fprintf(conn, "total-alloc: %v\n", formatBytes(s.TotalAlloc))
fmt.Fprintf(conn, "sys: %v\n", formatBytes(s.Sys))
fmt.Fprintf(conn, "lookups: %v\n", s.Lookups)
fmt.Fprintf(conn, "mallocs: %v\n", s.Mallocs)
fmt.Fprintf(conn, "frees: %v\n", s.Frees)
fmt.Fprintf(conn, "heap-alloc: %v\n", formatBytes(s.HeapAlloc))
fmt.Fprintf(conn, "heap-sys: %v\n", formatBytes(s.HeapSys))
fmt.Fprintf(conn, "heap-idle: %v\n", formatBytes(s.HeapIdle))
fmt.Fprintf(conn, "heap-in-use: %v\n", formatBytes(s.HeapInuse))
fmt.Fprintf(conn, "heap-released: %v\n", formatBytes(s.HeapReleased))
fmt.Fprintf(conn, "heap-objects: %v\n", s.HeapObjects)
fmt.Fprintf(conn, "stack-in-use: %v\n", formatBytes(s.StackInuse))
fmt.Fprintf(conn, "stack-sys: %v\n", formatBytes(s.StackSys))
fmt.Fprintf(conn, "stack-mspan-inuse: %v\n", formatBytes(s.MSpanInuse))
fmt.Fprintf(conn, "stack-mspan-sys: %v\n", formatBytes(s.MSpanSys))
fmt.Fprintf(conn, "stack-mcache-inuse: %v\n", formatBytes(s.MCacheInuse))
fmt.Fprintf(conn, "stack-mcache-sys: %v\n", formatBytes(s.MCacheSys))
fmt.Fprintf(conn, "other-sys: %v\n", formatBytes(s.OtherSys))
fmt.Fprintf(conn, "gc-sys: %v\n", formatBytes(s.GCSys))
fmt.Fprintf(conn, "next-gc: when heap-alloc >= %v\n", formatBytes(s.NextGC))
lastGC := "-"
if s.LastGC != 0 {
lastGC = fmt.Sprint(time.Unix(0, int64(s.LastGC)))
}
fmt.Fprintf(conn, "last-gc: %v\n", lastGC)
fmt.Fprintf(conn, "gc-pause-total: %v\n", time.Duration(s.PauseTotalNs))
fmt.Fprintf(conn, "gc-pause: %v\n", s.PauseNs[(s.NumGC+255)%256])
fmt.Fprintf(conn, "gc-pause-end: %v\n", s.PauseEnd[(s.NumGC+255)%256])
fmt.Fprintf(conn, "num-gc: %v\n", s.NumGC)
fmt.Fprintf(conn, "num-forced-gc: %v\n", s.NumForcedGC)
fmt.Fprintf(conn, "gc-cpu-fraction: %v\n", s.GCCPUFraction)
fmt.Fprintf(conn, "enable-gc: %v\n", s.EnableGC)
fmt.Fprintf(conn, "debug-gc: %v\n", s.DebugGC)
case signal.Version:
fmt.Fprintf(conn, "%v\n", runtime.Version())
case signal.HeapProfile:
return pprof.WriteHeapProfile(conn)
case signal.CPUProfile:
if err := pprof.StartCPUProfile(conn); err != nil {
return err
}
time.Sleep(30 * time.Second)
pprof.StopCPUProfile()
case signal.Stats:
fmt.Fprintf(conn, "goroutines: %v\n", runtime.NumGoroutine())
fmt.Fprintf(conn, "OS threads: %v\n", pprof.Lookup("threadcreate").Count())
fmt.Fprintf(conn, "GOMAXPROCS: %v\n", runtime.GOMAXPROCS(0))
fmt.Fprintf(conn, "num CPU: %v\n", runtime.NumCPU())
case signal.BinaryDump:
path, err := os.Executable()
if err != nil {
return err
}
f, err := os.Open(path)
if err != nil {
return err
}
defer f.Close()
_, err = bufio.NewReader(f).WriteTo(conn)
return err
case signal.Trace:
if err := trace.Start(conn); err != nil {
return err
}
time.Sleep(5 * time.Second)
trace.Stop()
case signal.SetGCPercent:
perc, err := binary.ReadVarint(bufio.NewReader(conn))
if err != nil {
return err
}
fmt.Fprintf(conn, "New GC percent set to %v. Previous value was %v.\n", perc, debug.SetGCPercent(int(perc)))
}
return nil
}


@@ -0,0 +1,37 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !js && !plan9 && !solaris && !windows
// +build !js,!plan9,!solaris,!windows
package agent
import (
"syscall"
"golang.org/x/sys/unix"
)
// setReuseAddrAndPortSockopts sets the SO_REUSEADDR and SO_REUSEPORT socket
// options on c's underlying socket in order to increase the chance to re-bind()
// to the same address and port upon agent restart.
func setReuseAddrAndPortSockopts(network, address string, c syscall.RawConn) error {
var soerr error
if err := c.Control(func(su uintptr) {
sock := int(su)
// Allow reuse of recently-used addresses. This socket option is
// set by default on listeners in Go's net package, see
// net.setDefaultSockopts.
soerr = unix.SetsockoptInt(sock, unix.SOL_SOCKET, unix.SO_REUSEADDR, 1)
if soerr != nil {
return
}
// Allow reuse of recently-used ports. This gives the agent a
// better chance to re-bind upon restarts.
soerr = unix.SetsockoptInt(sock, unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
}); err != nil {
return err
}
return soerr
}


@@ -0,0 +1,14 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build (js && wasm) || plan9 || solaris || windows
// +build js,wasm plan9 solaris windows
package agent
import "syscall"
func setReuseAddrAndPortSockopts(network, address string, c syscall.RawConn) error {
return nil
}

vendor/github.com/google/gops/internal/internal.go generated vendored Normal file

@@ -0,0 +1,71 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package internal
import (
"errors"
"io/ioutil"
"os"
"os/user"
"path/filepath"
"runtime"
"strconv"
"strings"
)
const gopsConfigDirEnvKey = "GOPS_CONFIG_DIR"
func ConfigDir() (string, error) {
if configDir := os.Getenv(gopsConfigDirEnvKey); configDir != "" {
return configDir, nil
}
if osUserConfigDir := getOSUserConfigDir(); osUserConfigDir != "" {
return filepath.Join(osUserConfigDir, "gops"), nil
}
if runtime.GOOS == "windows" {
return filepath.Join(os.Getenv("APPDATA"), "gops"), nil
}
if xdgConfigDir := os.Getenv("XDG_CONFIG_HOME"); xdgConfigDir != "" {
return filepath.Join(xdgConfigDir, "gops"), nil
}
homeDir := guessUnixHomeDir()
if homeDir == "" {
return "", errors.New("unable to get current user home directory: os/user lookup failed; $HOME is empty")
}
return filepath.Join(homeDir, ".config", "gops"), nil
}
func guessUnixHomeDir() string {
usr, err := user.Current()
if err == nil {
return usr.HomeDir
}
return os.Getenv("HOME")
}
func PIDFile(pid int) (string, error) {
gopsdir, err := ConfigDir()
if err != nil {
return "", err
}
return filepath.Join(gopsdir, strconv.Itoa(pid)), nil
}
func GetPort(pid int) (string, error) {
portfile, err := PIDFile(pid)
if err != nil {
return "", err
}
b, err := ioutil.ReadFile(portfile)
if err != nil {
return "", err
}
port := strings.TrimSpace(string(b))
return port, nil
}


@@ -0,0 +1,20 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build go1.13
// +build go1.13
package internal
import (
"os"
)
func getOSUserConfigDir() string {
configDir, err := os.UserConfigDir()
if err != nil {
return ""
}
return configDir
}


@@ -0,0 +1,12 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build !go1.13
// +build !go1.13
package internal
func getOSUserConfigDir() string {
return ""
}

vendor/github.com/google/gops/signal/signal.go generated vendored Normal file

@@ -0,0 +1,38 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package signal contains signals used to communicate to the gops agents.
package signal
const (
// StackTrace represents a command to print stack trace.
StackTrace = byte(0x1)
// GC runs the garbage collector.
GC = byte(0x2)
// MemStats reports memory stats.
MemStats = byte(0x3)
// Version prints the Go version.
Version = byte(0x4)
// HeapProfile starts `go tool pprof` with the current memory profile.
HeapProfile = byte(0x5)
// CPUProfile starts `go tool pprof` with the current CPU profile
CPUProfile = byte(0x6)
// Stats returns Go runtime statistics such as number of goroutines, GOMAXPROCS, and NumCPU.
Stats = byte(0x7)
// Trace starts the Go execution tracer, waits 5 seconds and launches the trace tool.
Trace = byte(0x8)
// BinaryDump returns running binary file.
BinaryDump = byte(0x9)
// SetGCPercent sets the garbage collection target percentage.
SetGCPercent = byte(0x10)
)

vendor/modules.txt vendored

@@ -416,6 +416,11 @@ github.com/google/go-containerregistry/pkg/v1/types
github.com/google/go-querystring/query
# github.com/google/gofuzz v1.1.0 => github.com/google/gofuzz v1.1.0
github.com/google/gofuzz
# github.com/google/gops v0.3.23
## explicit
github.com/google/gops/agent
github.com/google/gops/internal
github.com/google/gops/signal
# github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 => github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
github.com/google/shlex
# github.com/google/uuid v1.1.2 => github.com/google/uuid v1.1.1
@@ -511,7 +516,7 @@ github.com/kubesphere/pvc-autoresizer/runners
# github.com/kubesphere/sonargo v0.0.2 => github.com/kubesphere/sonargo v0.0.2
## explicit
github.com/kubesphere/sonargo/sonar
# github.com/kubesphere/storageclass-accessor v0.2.0
# github.com/kubesphere/storageclass-accessor v0.2.0 => github.com/kubesphere/storageclass-accessor v0.2.0
## explicit
github.com/kubesphere/storageclass-accessor/client/apis/accessor/v1alpha1
github.com/kubesphere/storageclass-accessor/webhook
@@ -813,6 +818,10 @@ github.com/rubenv/sql-migrate/sqlparse
github.com/russross/blackfriday
# github.com/sergi/go-diff v1.1.0 => github.com/sergi/go-diff v1.0.0
github.com/sergi/go-diff/diffmatchpatch
# github.com/shirou/gopsutil v0.0.0-20180427012116-c95755e4bcd7
## explicit
# github.com/shirou/w32 v0.0.0-20160930032740-bb4de0191aa4
## explicit
# github.com/sirupsen/logrus v1.8.1 => github.com/sirupsen/logrus v1.4.2
github.com/sirupsen/logrus
# github.com/sony/sonyflake v0.0.0-20181109022403-6d5bd6181009 => github.com/sony/sonyflake v0.0.0-20181109022403-6d5bd6181009
@@ -871,7 +880,7 @@ github.com/xeipuuv/gojsonreference
github.com/xeipuuv/gojsonschema
# github.com/xenolf/lego v0.3.2-0.20160613233155-a9d8cec0e656 => github.com/xenolf/lego v0.3.2-0.20160613233155-a9d8cec0e656
## explicit
# github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca => github.com/xlab/treeprint v0.0.0-20180616005107-d6fb6747feb6
# github.com/xlab/treeprint v1.1.0 => github.com/xlab/treeprint v0.0.0-20180616005107-d6fb6747feb6
github.com/xlab/treeprint
# github.com/yashtewari/glob-intersection v0.0.0-20180916065949-5c77d914dd0b => github.com/yashtewari/glob-intersection v0.0.0-20180916065949-5c77d914dd0b
github.com/yashtewari/glob-intersection
@@ -988,7 +997,7 @@ golang.org/x/oauth2/internal
 golang.org/x/sync/errgroup
 golang.org/x/sync/semaphore
 golang.org/x/sync/singleflight
-# golang.org/x/sys v0.0.0-20210817190340-bfb29a6856f2 => golang.org/x/sys v0.0.0-20190228124157-a34e9553db1e
+# golang.org/x/sys v0.0.0-20210902050250-f475640dd07b => golang.org/x/sys v0.0.0-20190228124157-a34e9553db1e
 golang.org/x/sys/cpu
 golang.org/x/sys/plan9
 golang.org/x/sys/unix
@@ -2304,6 +2313,7 @@ sigs.k8s.io/yaml
 # github.com/coreos/pkg => github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f
 # github.com/cortexproject/cortex => github.com/cortexproject/cortex v1.3.1-0.20200901115931-255ff3306960
 # github.com/cpuguy83/go-md2man => github.com/cpuguy83/go-md2man v1.0.10
+# github.com/cpuguy83/go-md2man/v2 => github.com/cpuguy83/go-md2man/v2 v2.0.0
 # github.com/creack/pty => github.com/creack/pty v1.1.7
 # github.com/cyphar/filepath-securejoin => github.com/cyphar/filepath-securejoin v0.2.2
 # github.com/cznic/b => github.com/cznic/b v0.0.0-20180115125044-35e9bbe41f07
@@ -2421,7 +2431,6 @@ sigs.k8s.io/yaml
 # github.com/gobwas/pool => github.com/gobwas/pool v0.2.0
 # github.com/gobwas/ws => github.com/gobwas/ws v1.0.2
 # github.com/gocql/gocql => github.com/gocql/gocql v0.0.0-20200526081602-cd04bd7f22a7
-# github.com/gocraft/dbr => github.com/gocraft/dbr v0.0.0-20180507214907-a0fd650918f6
 # github.com/godbus/dbus => github.com/godbus/dbus v0.0.0-20190402143921-271e53dc4968
 # github.com/godror/godror => github.com/godror/godror v0.13.3
 # github.com/gofrs/flock => github.com/gofrs/flock v0.7.1
@@ -2547,10 +2556,10 @@ sigs.k8s.io/yaml
 # github.com/kr/pty => github.com/kr/pty v1.1.5
 # github.com/kr/text => github.com/kr/text v0.1.0
 # github.com/kshvakov/clickhouse => github.com/kshvakov/clickhouse v1.3.5
-# github.com/kubernetes-csi/external-snapshotter/client/v3 => github.com/kubernetes-csi/external-snapshotter/client/v3 v3.0.0
+# github.com/kubernetes-csi/external-snapshotter/client/v4 => github.com/kubernetes-csi/external-snapshotter/client/v4 v4.2.0
 # github.com/kubesphere/pvc-autoresizer => github.com/kubesphere/pvc-autoresizer v0.1.1
 # github.com/kubesphere/sonargo => github.com/kubesphere/sonargo v0.0.2
 # github.com/kubesphere/storageclass-accessor => github.com/kubesphere/storageclass-accessor v0.2.0
 # github.com/kylelemons/go-gypsy => github.com/kylelemons/go-gypsy v0.0.0-20160905020020-08cad365cd28
 # github.com/kylelemons/godebug => github.com/kylelemons/godebug v0.0.0-20160406211939-eadb3ce320cb
 # github.com/lann/builder => github.com/lann/builder v0.0.0-20180802200727-47ae307949d0
# github.com/lann/builder => github.com/lann/builder v0.0.0-20180802200727-47ae307949d0
@@ -2699,6 +2708,7 @@ sigs.k8s.io/yaml
 # github.com/sergi/go-diff => github.com/sergi/go-diff v1.0.0
 # github.com/shopspring/decimal => github.com/shopspring/decimal v0.0.0-20180709203117-cd690d0c9e24
 # github.com/shurcooL/httpfs => github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749
+# github.com/shurcooL/sanitized_anchor_name => github.com/shurcooL/sanitized_anchor_name v1.0.0
 # github.com/shurcooL/vfsgen => github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd
 # github.com/siebenmann/go-kstat => github.com/siebenmann/go-kstat v0.0.0-20160321171754-d34789b79745
 # github.com/sirupsen/logrus => github.com/sirupsen/logrus v1.4.2
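
Context for the `=>` annotations throughout this diff: in a Go module's `vendor/modules.txt`, a line of the form `# module requiredVersion => module pinnedVersion` is generated by `go mod vendor` from a `replace` directive in the repository's `go.mod`. A minimal sketch using the logrus pin shown above (the `module` path and `require` line are illustrative assumptions, not taken from this diff):

```
// go.mod (fragment; illustrative sketch, not the repository's actual file)
module example.com/myproject

require github.com/sirupsen/logrus v1.8.1

// Forces builds to use v1.4.2 even though v1.8.1 is the required version.
// After `go mod vendor`, vendor/modules.txt records this as:
//   # github.com/sirupsen/logrus v1.8.1 => github.com/sirupsen/logrus v1.4.2
replace github.com/sirupsen/logrus => github.com/sirupsen/logrus v1.4.2
```

This is why the diff hunks above change `vendor/modules.txt` in pairs: bumping or adding a `replace` directive in `go.mod` rewrites both the per-package annotation near the top of the file and the consolidated `# old => new` list at the bottom.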