Compare commits

128 Commits

Author SHA1 Message Date
zryfish
870fc29309 Merge pull request #238 from wansir/master
fix: data is cleared
2018-12-10 19:49:21 +08:00
hongming
e0dfc10010 fix: data is cleared
Signed-off-by: hongming <talonwan@yunify.com>
2018-12-10 18:50:30 +08:00
zryfish
492bf08de0 Merge pull request #237 from carmanzhang/monitor
Bug fix: cluster_pod_count metrics
2018-12-10 15:14:20 +08:00
Carman Zhang
86c7ca0362 Bug fix: cluster_pod_count metrics 2018-12-10 14:58:43 +08:00
zryfish
5d7d6755c0 Merge pull request #234 from carmanzhang/monitor
Bug fix: pod count metrics
2018-12-09 16:32:02 +08:00
Carman Zhang
e775421672 Bug fix: pod count metrics 2018-12-08 19:36:37 +08:00
zryfish
17727188f4 Merge pull request #232 from zryfish/fix_service_type_bug
fix service type
2018-12-06 23:29:45 +08:00
zryfish
e06c241555 Merge pull request #230 from wansir/master
fix: data error
2018-12-06 23:24:25 +08:00
zryfish
75ba702e6e Merge pull request #231 from carmanzhang/monitor
changed default prometheus query timeout to 10 seconds
2018-12-06 23:20:34 +08:00
jeff
3a4c4c10c2 fix service type
Signed-off-by: jeff <jeffzhang@yunify.com>
2018-12-06 23:19:37 +08:00
Carman Zhang
bf022a19fe changed default prometheus query timeout to 10 seconds 2018-12-06 21:54:37 +08:00
hongming
7b1cbabb4f fix: data error
Signed-off-by: hongming <talonwan@yunify.com>
2018-12-06 21:35:48 +08:00
zryfish
e40833c2a7 Merge pull request #229 from wansir/master
fix: miscount
2018-12-06 00:28:44 +08:00
hongming
1055111b5e fix: miscount
Signed-off-by: hongming <talonwan@yunify.com>
2018-12-05 21:30:44 +08:00
zryfish
341795f406 Merge pull request #228 from carmanzhang/monitor
changed nic related rules
2018-12-05 21:24:48 +08:00
Carman Zhang
3e13811ab6 changed nic related rules 2018-12-05 21:03:35 +08:00
zryfish
cde7415b7a Merge pull request #226 from wansir/master
support workspace member search
2018-12-05 19:36:37 +08:00
hongming
f25ae2d571 support workspace member search
Signed-off-by: hongming <talonwan@yunify.com>
2018-12-05 17:44:03 +08:00
zryfish
742ab06e75 Merge pull request #227 from zryfish/add_node_status
refactor component status
2018-12-04 11:25:50 +08:00
jeff
9122ed7b25 refactor component status
Signed-off-by: jeff <jeffzhang@yunify.com>
2018-12-04 01:21:38 +08:00
zryfish
9e7db66780 Merge pull request #225 from carmanzhang/monitor
refactor monitoring apis for high performance testing
2018-12-03 15:35:46 +08:00
Carman Zhang
c503efaa3b refactor monitoring apis for high performance testing 2018-12-03 15:15:53 +08:00
zryfish
fa30f68102 Merge pull request #222 from wansir/master
load data from lister
2018-12-03 14:02:16 +08:00
hongming
1df7185112 load data from lister
Signed-off-by: hongming <talonwan@yunify.com>
2018-12-03 12:01:47 +08:00
zryfish
3041d90e22 Merge pull request #224 from zryfish/refactor_component_status
refactor component status
2018-12-03 02:17:42 +08:00
jeff
cc5e0ec301 refactor component status
Signed-off-by: jeff <jeffzhang@yunify.com>
2018-12-03 02:07:05 +08:00
zryfish
030e4aaed2 Merge pull request #223 from zryfish/refactor_component_status
refactor component status
2018-12-03 01:42:53 +08:00
jeff
7f688317f3 refactor component status
Signed-off-by: jeff <jeffzhang@yunify.com>
2018-12-03 01:34:50 +08:00
zryfish
744516d944 Merge pull request #221 from zryfish/refactor_component_status
remove kubesphere-monitoring-system
2018-12-02 19:57:32 +08:00
jeff
b78bff7138 remove kubesphere-monitoring-system 2018-12-02 19:56:50 +08:00
zryfish
0dd78409e8 Merge pull request #220 from zryfish/refactor_resource_interface
refactor resource interface
2018-11-30 19:36:12 +08:00
jeff
f54dd2cf61 refactor resource interface
Signed-off-by: jeff <jeffzhang@yunify.com>
2018-11-30 19:03:14 +08:00
zryfish
92427183ff Merge pull request #219 from wansir/master
fix: sync devops rolebindings
2018-11-30 17:39:40 +08:00
hongming
b90a84b99b fix: sync devops rolebindings
Signed-off-by: hongming <talonwan@yunify.com>
2018-11-30 17:28:30 +08:00
zryfish
8857212695 Merge pull request #217 from zryfish/add_app_description
add app description
2018-11-29 15:31:28 +08:00
jeff
6ba1e0a468 add app description 2018-11-29 15:05:41 +08:00
zryfish
bb8a793e26 Merge pull request #216 from zryfish/pods_paging
query workload pods in a graceful way, which supports paging
2018-11-29 14:59:38 +08:00
jeff
58a58a2984 query workload pods in a graceful way, which supports paging 2018-11-29 14:46:09 +08:00
zryfish
dadea738a9 Merge pull request #214 from carmanzhang/monitor
change workload metrics, add node-pod-container metrics api
2018-11-28 21:10:00 +08:00
Carman Zhang
cb1c03961b change workload metrics, add node-pod-container metrics api 2018-11-28 20:28:17 +08:00
zryfish
0a3dfcc536 Merge pull request #215 from zryfish/add_monitoring_to_system_components
add kubesphere-monitoring-system to system monitoring component
2018-11-28 14:39:55 +08:00
jeff
106c2aad0b add kubesphere-monitoring-system to system monitoring component 2018-11-28 14:31:45 +08:00
zryfish
347bfef1b5 Merge pull request #213 from carmanzhang/monitor
support node-pod/namespace-workload sorting and paging
2018-11-23 16:16:16 +08:00
zryfish
56f7dabb67 Merge pull request #211 from wansir/master
fix bug: rolebinding cannot delete
2018-11-23 14:37:54 +08:00
hongming
a908757cfb fix bug: rolebinding cannot delete 2018-11-23 14:03:15 +08:00
Carman Zhang
9a6880bc9c support node-pod/namespace-workload sorting and paging 2018-11-23 11:41:56 +08:00
zryfish
c7a1011e0c Merge pull request #212 from zryfish/node_roles
fix node role bug
2018-11-22 22:51:57 +08:00
jeff
c07611cbed fix node role bug 2018-11-22 21:39:47 +08:00
zryfish
ba92091561 Merge pull request #209 from wansir/master
fix bug:db init failed,clusterrolebinding sync error
2018-11-21 14:27:10 +08:00
zryfish
f79ae414bc Merge pull request #210 from carmanzhang/monitor
changed promqls and fixed several monitoring bugs
2018-11-21 14:25:33 +08:00
hongming
49d40f48f7 fix bug:db init failed,clusterrolebinding sync error 2018-11-21 10:01:01 +08:00
Carman Zhang
982c4ac30e changed promqls and fixed several monitoring bugs 2018-11-20 19:14:21 +08:00
zryfish
7f780bd3fb Merge pull request #206 from wansir/master
refactor workspace api
2018-11-16 20:59:17 +08:00
zryfish
6156b6dac7 Merge pull request #208 from carmanzhang/monitor
fixed monitor bugs
2018-11-16 20:58:41 +08:00
hongming
bce25036a2 Merge remote-tracking branch 'upstream/master'
# Conflicts:
#	pkg/models/workspaces/workspaces.go
2018-11-16 18:37:56 +08:00
jeff
e9038e94d7 refactor workspace api 2018-11-16 18:35:21 +08:00
Carman Zhang
df31cab343 add load average metrics, change prometheus apiserver svc 2018-11-16 17:41:17 +08:00
Carman Zhang
16eac8ce3c fixed rank 2018-11-16 13:51:09 +08:00
Carman Zhang
c5d9da99a1 fixed component and cluster metrics 2018-11-16 11:04:42 +08:00
zryfish
066f36b81f Merge pull request #204 from carmanzhang/monitor
fixed deployment-pods metrics
2018-11-15 11:24:56 +08:00
Carman Zhang
beb7efdac0 fixed deployment-pods metrics 2018-11-14 16:04:24 +08:00
zryfish
fbf053306b Merge pull request #201 from carmanzhang/monitor
Refactor monitor module
2018-11-11 17:58:53 +08:00
Carman Zhang
f9057a0705 reformat monitoring apis 2018-11-11 13:13:39 +08:00
zryfish
02124c2a12 Merge pull request #202 from zryfish/fix_component_status_bug
component status
2018-11-08 11:38:31 +08:00
jeff
3a38a83dd2 component status 2018-11-08 11:29:42 +08:00
zryfish
1b15e5e774 Merge pull request #200 from zryfish/component_status
component status
2018-11-07 17:13:19 +08:00
jeff
a38bb3784d component status 2018-11-06 17:31:40 +08:00
不羁
c5b11300a1 Merge pull request #199 from wansir/master
refactor workspace api
2018-11-06 10:49:57 +08:00
hongming
e8a4a8685c Merge remote-tracking branch 'upstream/master'
# Conflicts:
#	pkg/apis/v1alpha/monitoring/monitor_handler.go
#	pkg/models/metrics/metricscollector.go
#	pkg/models/metrics/metricsconst.go
2018-11-01 12:26:14 +08:00
hongming
70065d430d refactor workspace api 2018-11-01 12:20:26 +08:00
zryfish
313ebea12c Merge pull request #194 from wansir/master
add workspace api
2018-10-29 11:28:48 +08:00
richardxz
a9cd961236 Merge pull request #197 from richardxz/master
add "Terminating" status in pvc's lifecycle
2018-10-26 18:36:45 +08:00
richardxz
03a37e70a1 add "Terminating" status in pvc's lifecycle 2018-10-25 03:00:56 -04:00
richardxz
920d09042d Merge pull request #196 from richardxz/master
ignore the role/clusterrole which don't have "creator" annotation
2018-10-24 15:29:38 +08:00
richardxz
33dd9fb2dd ignore the role/clusterrole which don't have "creator" annotation 2018-10-24 00:03:43 -04:00
zryfish
59d26c2809 Merge pull request #195 from richardxz/master
fix err in service type's judgment
2018-10-23 14:10:51 +08:00
richardxz
75d8787f64 fix err in service type 2018-10-23 00:20:22 -04:00
hongming
a8d5f552a0 add workspace api 2018-10-22 17:18:20 +08:00
zryfish
5fb551e8d4 Merge pull request #192 from carmanzhang/master
add cluster/workspace level multiple metrics in dashboard
2018-10-17 16:39:59 +08:00
Carman Zhang
c65ecddbef add cluster level multiple metrics in dashboard 2018-10-17 16:31:43 +08:00
richardxz
d368a791e0 Merge pull request #193 from richardxz/master
update job's "rerun" function
2018-10-17 10:21:29 +08:00
richardxz
b982f133aa update job's "rerun" function 2018-10-16 21:55:21 -04:00
zryfish
5a51bb68af Merge pull request #190 from wansir/master
add openpitrix proxy token
2018-10-12 09:58:22 +08:00
richardxz
5a71eaf75c Merge pull request #191 from richardxz/master
avoid incorrect result when list resource with search conditions
2018-10-11 14:58:40 +08:00
richardxz
0b6480328d avoid incorrect result when list resource with search conditions 2018-10-11 02:34:44 -04:00
richardxz
f7f59c0264 Merge pull request #189 from richardxz/master
support image search
2018-10-11 14:31:42 +08:00
hongming
85b3da3dcd add openpitrix proxy token 2018-10-09 19:55:45 +08:00
richardxz
48966ce7d9 support image search 2018-10-09 04:32:34 -04:00
richardxz
adc77fcd58 Merge pull request #188 from richardxz/master
support configmaps and secrets' paging
2018-10-09 14:51:08 +08:00
richardxz
6658177967 support configmaps and secrets' paging 2018-10-07 22:44:21 -04:00
Wiley Wang
8ada5d2b45 Merge pull request #186 from wnxn/master
update Gopkg.lock
2018-09-28 11:03:24 +08:00
wileywang
4dc68a2e41 update Gopkg.lock 2018-09-28 10:29:57 +08:00
carmanzhang
b360c0abd6 Merge pull request #183 from carmanzhang/master
add monitoring apis
2018-09-27 11:53:16 -05:00
Carman Zhang
53dd54b163 add monitoring apis 2018-09-27 17:50:26 +08:00
richardxz
df21cabbdd Merge pull request #180 from richardxz/master
add hpa api
2018-09-27 11:06:58 +08:00
richardxz
f1c1c9e6e4 add hpa api 2018-09-26 22:22:38 -04:00
Wiley Wang
639b94385e Merge pull request #175 from wnxn/master
get k8s version through k8s client at master branch
2018-09-26 13:46:10 +08:00
Wiley Wang
5afe55092b Merge branch 'ceph-secret' 2018-09-25 17:50:38 +08:00
wileywang
2fb4d3a3b8 get k8s version through k8sclient 2018-09-25 17:38:47 +08:00
richardxz
e82ad2d73c Merge pull request #172 from richardxz/master
ensure db connections are successfully closed when process exit
2018-09-25 16:14:27 +08:00
richardxz
dc93d00aed ensure db connections are successfully closed when process exit 2018-09-20 05:40:50 -04:00
richardxz
e1716f254d Merge pull request #171 from richardxz/master
support return storageclass's provisioner
2018-09-20 09:05:26 +08:00
Wiley Wang
975a1555ad Merge pull request #170 from wnxn/master
Add controller to create Ceph secret in master branch
2018-09-19 19:13:37 +08:00
wileywang
13b4b0eb04 Add controller to create Ceph secret 2018-09-19 16:26:41 +08:00
richardxz
1389332205 support return storageclass's provisioner 2018-09-19 14:00:35 +08:00
richardxz
1c27a36e06 Merge pull request #168 from richardxz/master
modify db client's initialization function
2018-09-19 11:26:53 +08:00
richardxz
7db56c8b5f modify db client's initialization function 2018-09-17 15:16:58 +08:00
richardxz
5b52580b37 Merge pull request #167 from richardxz/master
register new apis
2018-09-17 15:08:50 +08:00
richardxz
cef1732595 register new apis 2018-09-17 13:57:43 +08:00
richardxz
48300b2bf6 Merge pull request #166 from richardxz/master
support get workload's revision by revision number
2018-09-17 13:57:03 +08:00
richardxz
6ad87a296f support get workload's revision by revision number 2018-09-17 13:44:45 +08:00
zryfish
4a3067c2ab Merge pull request #165 from wnxn/master
update dependency at master branch
2018-09-17 13:37:20 +08:00
richardxz
5eee8b3d53 Merge pull request #163 from richardxz/master
support job re-run
2018-09-17 13:02:25 +08:00
Wiley Wang
db08431ac1 update dependency 2018-09-17 03:20:28 +00:00
richardxz
18f8f13ffb support job re-run 2018-09-17 11:20:05 +08:00
zryfish
2e6bf0f566 Merge pull request #160 from richardxz/master
refactor the code of resource list function
2018-09-17 11:02:02 +08:00
richardxz
4bd18b072c refactor the code of resource list function 2018-09-17 10:24:01 +08:00
Calvin Yu
f7e607a14c refactor docs 2018-08-08 15:29:28 +08:00
richardxz
0d24ea922d Merge pull request #153 from richardxz/master
add desciption field in application response
2018-08-06 11:11:58 +08:00
richardxz
d24ee41c23 add desciption field in application response 2018-08-01 22:18:29 -04:00
richardxz
1987900430 Merge pull request #151 from richardxz/master
add resync function and support to view deployed applications
2018-07-31 13:42:30 +08:00
richardxz
49e297d663 add resync function and support to view deployed applications 2018-07-31 01:14:33 -04:00
richardxz
c76c82a635 Merge pull request #146 from richardxz/master
add swagger ui
2018-07-31 10:41:37 +08:00
zryfish
b21b33046b Merge pull request #150 from zryfish/fix_router_config_bug
fix router config bug
2018-07-23 11:06:30 +08:00
jeff
4265c3e9f1 fix router config bug 2018-07-23 10:50:09 +08:00
zryfish
6cac9b7f6d Merge pull request #148 from littlebeer2100/express
alter components function to filter non system component service
2018-07-11 19:30:57 +08:00
yanmingfan
f60610e39d alter components function to filter uncomponents svc 2018-07-11 18:21:21 +08:00
richardxz
f8a057abc8 add swagger ui 2018-07-11 17:28:19 +08:00
208 changed files with 38562 additions and 2259 deletions
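A compare view like the one above can be reproduced locally with plain git. The sketch below is illustrative only (the repository, tag name, and commit messages are made up for the demo): it builds a throwaway repository, then lists the commits and the change summary between two refs, the same information the compare page shows.

```shell
#!/bin/sh
# Sketch: reproducing a GitHub "Compare" view with plain git.
# The range syntax BASE..HEAD selects commits reachable from HEAD
# but not from BASE. Repo, tag, and messages here are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial release"
git tag v0.1.0
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: data is cleared"
# One-line commit list, like the commit table above:
git log --oneline v0.1.0..HEAD
# Changed-file summary, like "208 changed files with ... additions":
git diff --stat v0.1.0..HEAD
```

Run against the two refs actually being compared, the same two commands would be expected to print the 128 commits and the changed-file totals shown on this page.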

13
AUTHORS

@@ -1,13 +0,0 @@
# This is the official list of KubeSphere authors for copyright purposes.
# This file is distinct from the CONTRIBUTORS files.
# See the latter for an explanation.
# Names should be added to this file as one of
# Organization's name
# Individual's name <submission email address>
# Individual's name <submission email address> <email2> <emailN>
# See CONTRIBUTORS for the meaning of multiple email addresses.
# Please keep the list sorted.
Yunify Inc.

CONTRIBUTORS

@@ -1,20 +0,0 @@
# This is the official list of people who can contribute
# (and typically have contributed) code to the KubeSphere repository.
# The AUTHORS file lists the copyright holders; this file
# lists people. For example, Yunify employees are listed here
# but not in AUTHORS, because Yunify holds the copyright.
#
# When adding J Random Contributor's name to this file,
# either J's name or J's organization's name should be
# added to the AUTHORS file.
# Names should be added to this file like so:
# Individual's name <submission email address>
# Individual's name <submission email address> <email2> <emailN>
#
# An entry with multiple email addresses specifies that the
# first address should be used in the submit logs.
# Please keep the list sorted.
Ray@qingcloud <ray@yunify.com>

Dockerfile

@@ -6,5 +6,5 @@ RUN apk add --update ca-certificates \
COPY ./bin/* /usr/local/bin/
COPY ./install/ingress-controller /etc/kubesphere/ingress-controller
COPY ./install/swagger-ui /usr/lib/kubesphere/swagger-ui
CMD ["sh"]

259
Gopkg.lock (generated)

@@ -2,24 +2,47 @@
[[projects]]
digest = "1:bf42be3cb1519bf8018dfd99720b1005ee028d947124cab3ccf965da59381df6"
name = "github.com/Microsoft/go-winio"
packages = ["."]
pruneopts = "UT"
revision = "7da180ee92d8bd8bb8c37fc560e673e6557c392f"
version = "v0.4.7"
[[projects]]
digest = "1:d1665c44bd5db19aaee18d1b6233c99b0b9a986e8bccb24ef54747547a48027f"
name = "github.com/PuerkitoBio/purell"
packages = ["."]
pruneopts = "UT"
revision = "0bcb03f4b4d0a9428594752bd2a3b9aa0a9d4bd4"
version = "v1.1.0"
[[projects]]
branch = "master"
digest = "1:c739832d67eb1e9cc478a19cc1a1ccd78df0397bf8a32978b759152e205f644b"
name = "github.com/PuerkitoBio/urlesc"
packages = ["."]
pruneopts = "UT"
revision = "de5bf2ad457846296e2031421a34e2568e304e35"
[[projects]]
digest = "1:9e9193aa51197513b3abcb108970d831fbcf40ef96aa845c4f03276e1fa316d2"
name = "github.com/Sirupsen/logrus"
packages = ["."]
pruneopts = "UT"
revision = "c155da19408a8799da419ed3eeb0cb5db0ad5dbc"
version = "v1.0.5"
[[projects]]
digest = "1:e49fec8537ec021eeb41d394684bce0365c8db14c8099215f7b509189ddb5852"
name = "github.com/antonholmquist/jason"
packages = ["."]
pruneopts = "UT"
revision = "c23cef7eaa75a6a5b8810120e167bd590d8fd2ab"
version = "v1.0.0"
[[projects]]
digest = "1:4fe4dc4ce7ebb5a4b0544c5b411196d23e221800d279a207d76f02812f756c3d"
name = "github.com/coreos/etcd"
packages = [
"auth/authpb",
@@ -29,27 +52,33 @@
"mvcc/mvccpb",
"pkg/tlsutil",
"pkg/transport",
"pkg/types"
"pkg/types",
]
pruneopts = "UT"
revision = "33245c6b5b49130ca99280408fadfab01aac0e48"
version = "v3.3.8"
[[projects]]
digest = "1:a2c1d0e43bd3baaa071d1b9ed72c27d78169b2b269f71c105ac4ba34b1be4a39"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
pruneopts = "UT"
revision = "346938d642f2ec3594ed81d874461961cd0faa76"
version = "v1.1.0"
[[projects]]
branch = "master"
digest = "1:4189ee6a3844f555124d9d2656fe7af02fca961c2a9bad9074789df13a0c62e0"
name = "github.com/docker/distribution"
packages = [
"digestset",
"reference"
"reference",
]
pruneopts = "UT"
revision = "749f6afb4572201e3c37325d0ffedb6f32be8950"
[[projects]]
digest = "1:ec821dda59d7dd340498d74f798aa218b2c782bba54a690e866dc4f520d900d5"
name = "github.com/docker/docker"
packages = [
"api",
@@ -71,194 +100,295 @@
"pkg/ioutils",
"pkg/longpath",
"pkg/system",
"pkg/tlsconfig"
"pkg/tlsconfig",
]
pruneopts = "UT"
revision = "90d35abf7b3535c1c319c872900fbd76374e521c"
version = "v17.05.0-ce-rc3"
[[projects]]
branch = "master"
digest = "1:811c86996b1ca46729bad2724d4499014c4b9effd05ef8c71b852aad90deb0ce"
name = "github.com/docker/go-connections"
packages = [
"nat",
"sockets",
"tlsconfig"
"tlsconfig",
]
pruneopts = "UT"
revision = "7395e3f8aa162843a74ed6d48e79627d9792ac55"
[[projects]]
digest = "1:6f82cacd0af5921e99bf3f46748705239b36489464f4529a1589bc895764fb18"
name = "github.com/docker/go-units"
packages = ["."]
pruneopts = "UT"
revision = "47565b4f722fb6ceae66b95f853feed578a4a51c"
version = "v0.3.3"
[[projects]]
branch = "master"
digest = "1:4841e14252a2cecf11840bd05230412ad469709bbacfc12467e2ce5ad07f339b"
name = "github.com/docker/libtrust"
packages = ["."]
pruneopts = "UT"
revision = "aabc10ec26b754e797f9028f4589c5b7bd90dc20"
[[projects]]
branch = "master"
digest = "1:dbb3d1675f5beeb37de6e9b95cc460158ff212902a916e67688b01e0660f41bd"
name = "github.com/docker/spdystream"
packages = [
".",
"spdy"
"spdy",
]
pruneopts = "UT"
revision = "bc6354cbbc295e925e4c611ffe90c1f287ee54db"
[[projects]]
digest = "1:798072bbab2506719d8292cd9b5840a0b5babe0348393bd7097d8fb25ecf0b82"
name = "github.com/emicklei/go-restful"
packages = [
".",
"log"
"log",
]
pruneopts = "UT"
revision = "3658237ded108b4134956c1b3050349d93e7b895"
version = "v2.7.1"
[[projects]]
digest = "1:e2300c0b15e8b7cb908da64f50e748725c739bcf042a19ceb971680763339888"
name = "github.com/emicklei/go-restful-openapi"
packages = ["."]
pruneopts = "UT"
revision = "51bf251d405ad1e23511fef0a2dbe40bc70ce2c6"
version = "v0.11.0"
[[projects]]
digest = "1:2cd7915ab26ede7d95b8749e6b1f933f1c6d5398030684e6505940a10f31cfda"
name = "github.com/ghodss/yaml"
packages = ["."]
pruneopts = "UT"
revision = "0ca9ea5df5451ffdf184b4428c902747c2c11cd7"
version = "v1.0.0"
[[projects]]
branch = "master"
digest = "1:2997679181d901ac8aaf4330d11138ecf3974c6d3334995ff36f20cbd597daf8"
name = "github.com/go-openapi/jsonpointer"
packages = ["."]
pruneopts = "UT"
revision = "3a0015ad55fa9873f41605d3e8f28cd279c32ab2"
[[projects]]
branch = "master"
digest = "1:1ae3f233d75a731b164ca9feafd8ed646cbedf1784095876ed6988ce8aa88b1f"
name = "github.com/go-openapi/jsonreference"
packages = ["."]
pruneopts = "UT"
revision = "3fb327e6747da3043567ee86abd02bb6376b6be2"
[[projects]]
branch = "master"
digest = "1:cbd9c1cc4ce36075f4ebf0e0525e6cda8597daac1a5eb5f7f88480a3c00e7319"
name = "github.com/go-openapi/spec"
packages = ["."]
pruneopts = "UT"
revision = "bce47c9386f9ecd6b86f450478a80103c3fe1402"
[[projects]]
branch = "master"
digest = "1:731022b436cdb9b4b2a53be2ead693467a1474b8b873d4f90cb424fffdc3d0ff"
name = "github.com/go-openapi/swag"
packages = ["."]
pruneopts = "UT"
revision = "2b0bd4f193d011c203529df626a65d63cb8a79e8"
[[projects]]
digest = "1:adea5a94903eb4384abef30f3d878dc9ff6b6b5b0722da25b82e5169216dfb61"
name = "github.com/go-sql-driver/mysql"
packages = ["."]
pruneopts = "UT"
revision = "d523deb1b23d913de5bdada721a6071e71283618"
version = "v1.4.0"
[[projects]]
digest = "1:cd559bf134bbedd0dfd5db4d988c88d8f96674fa3f2af0cb5b0dcd5fc0a0a019"
name = "github.com/gogo/protobuf"
packages = [
"gogoproto",
"proto",
"protoc-gen-gogo/descriptor",
"sortkeys"
"sortkeys",
]
pruneopts = "UT"
revision = "1adfc126b41513cc696b209667c8656ea7aac67c"
version = "v1.0.0"
[[projects]]
branch = "master"
digest = "1:1ba1d79f2810270045c328ae5d674321db34e3aae468eb4233883b473c5c0467"
name = "github.com/golang/glog"
packages = ["."]
pruneopts = "UT"
revision = "23def4e6c14b4da8ac2ed8007337bc5eb5007998"
[[projects]]
digest = "1:17fe264ee908afc795734e8c4e63db2accabaf57326dbf21763a7d6b86096260"
name = "github.com/golang/protobuf"
packages = [
"proto",
"ptypes",
"ptypes/any",
"ptypes/duration",
"ptypes/timestamp"
"ptypes/timestamp",
]
pruneopts = "UT"
revision = "b4deda0973fb4c70b50d226b1af49f3da59f5265"
version = "v1.1.0"
[[projects]]
branch = "master"
digest = "1:3ee90c0d94da31b442dde97c99635aaafec68d0b8a3c12ee2075c6bdabeec6bb"
name = "github.com/google/gofuzz"
packages = ["."]
pruneopts = "UT"
revision = "24818f796faf91cd76ec7bddd72458fbced7a6c1"
[[projects]]
digest = "1:65c4414eeb350c47b8de71110150d0ea8a281835b1f386eacaa3ad7325929c21"
name = "github.com/googleapis/gnostic"
packages = [
"OpenAPIv2",
"compiler",
"extensions"
"extensions",
]
pruneopts = "UT"
revision = "7c663266750e7d82587642f65e60bc4083f1f84e"
version = "v0.2.0"
[[projects]]
digest = "1:43dd08a10854b2056e615d1b1d22ac94559d822e1f8b6fcc92c1a1057e85188e"
name = "github.com/gorilla/websocket"
packages = ["."]
pruneopts = "UT"
revision = "ea4d1f681babbce9545c9c5f3d5194a789c89f5b"
version = "v1.2.0"
[[projects]]
branch = "master"
digest = "1:cf296baa185baae04a9a7004efee8511d08e2f5f51d4cbe5375da89722d681db"
name = "github.com/hashicorp/golang-lru"
packages = [
".",
"simplelru"
"simplelru",
]
pruneopts = "UT"
revision = "0fb14efe8c47ae851c0034ed7a448854d3d34cf3"
[[projects]]
branch = "master"
digest = "1:0778dc7fce1b4669a8bfa7ae506ec1f595b6ab0f8989c1c0d22a8ca1144e9972"
name = "github.com/howeyc/gopass"
packages = ["."]
pruneopts = "UT"
revision = "bf9dde6d0d2c004a008c27aaee91170c786f6db8"
[[projects]]
digest = "1:3e260afa138eab6492b531a3b3d10ab4cb70512d423faa78b8949dec76e66a21"
name = "github.com/imdario/mergo"
packages = ["."]
pruneopts = "UT"
revision = "9316a62528ac99aaecb4e47eadd6dc8aa6533d58"
version = "v0.3.5"
[[projects]]
digest = "1:235ae01f32fb5f12c5f6d2e0e05ab48e651ab31c126e45a4efc4f510810941ac"
name = "github.com/jinzhu/gorm"
packages = [
".",
"dialects/mysql"
]
packages = ["."]
pruneopts = "UT"
revision = "6ed508ec6a4ecb3531899a69cbc746ccf65a4166"
version = "v1.9.1"
[[projects]]
branch = "master"
digest = "1:fd97437fbb6b7dce04132cf06775bd258cce305c44add58eb55ca86c6c325160"
name = "github.com/jinzhu/inflection"
packages = ["."]
pruneopts = "UT"
revision = "04140366298a54a039076d798123ffa108fff46c"
[[projects]]
digest = "1:b1d4df033414c1a0d85fa7037b9aaf03746314811c860a95ea2d5fd481cd6c35"
name = "github.com/json-iterator/go"
packages = ["."]
pruneopts = "UT"
revision = "ca39e5af3ece67bbcda3d0f4f56a8e24d9f2dad4"
version = "1.1.3"
[[projects]]
branch = "master"
digest = "1:ada518b8c338e10e0afa443d84671476d3bd1d926e13713938088e8ddbee1a3e"
name = "github.com/mailru/easyjson"
packages = [
"buffer",
"jlexer",
"jwriter",
]
pruneopts = "UT"
revision = "3fdea8d05856a0c8df22ed4bc71b3219245e4485"
[[projects]]
digest = "1:33422d238f147d247752996a26574ac48dcf472976eda7f5134015f06bf16563"
name = "github.com/modern-go/concurrent"
packages = ["."]
pruneopts = "UT"
revision = "bacd9c7ef1dd9b15be4a9909b8ac7a4e313eec94"
version = "1.0.3"
[[projects]]
digest = "1:d711dfcf661439f1ef0b202a02e8a1ff4deac48f26f34253520dcdbecbd7c5f1"
name = "github.com/modern-go/reflect2"
packages = ["."]
pruneopts = "UT"
revision = "1df9eeb2bb81f327b96228865c5687bc2194af3f"
version = "1.0.0"
[[projects]]
digest = "1:ee4d4af67d93cc7644157882329023ce9a7bcfce956a079069a9405521c7cc8d"
name = "github.com/opencontainers/go-digest"
packages = ["."]
pruneopts = "UT"
revision = "279bed98673dd5bef374d3b6e4b09e2af76183bf"
version = "v1.0.0-rc1"
[[projects]]
digest = "1:40e195917a951a8bf867cd05de2a46aaf1806c50cf92eebf4c16f78cd196f747"
name = "github.com/pkg/errors"
packages = ["."]
pruneopts = "UT"
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
version = "v0.8.0"
[[projects]]
digest = "1:9424f440bba8f7508b69414634aef3b2b3a877e522d8a4624692412805407bb7"
name = "github.com/spf13/pflag"
packages = ["."]
pruneopts = "UT"
revision = "583c0c0531f06d5278b7d917446061adc344b5cd"
version = "v1.0.1"
[[projects]]
branch = "master"
digest = "1:3f3a05ae0b95893d90b9b3b5afdb79a9b3d96e4e36e099d841ae602e4aca0da8"
name = "golang.org/x/crypto"
packages = ["ssh/terminal"]
pruneopts = "UT"
revision = "7f39a6fea4fe9364fb61e1def6a268a51b4f3a06"
[[projects]]
branch = "master"
digest = "1:bae20a4ea45ad83eb54271a18c820a4ca7c03880aa20d964e2d5bb1d57b1a41a"
name = "golang.org/x/net"
packages = [
"context",
@@ -270,20 +400,24 @@
"internal/socks",
"internal/timeseries",
"proxy",
"trace"
"trace",
]
pruneopts = "UT"
revision = "db08ff08e8622530d9ed3a0e8ac279f6d4c02196"
[[projects]]
branch = "master"
digest = "1:a17927b3d78603ae6691d5bf8d3d91467a6edd4ca43c9509347e016a54477f96"
name = "golang.org/x/sys"
packages = [
"unix",
"windows"
"windows",
]
pruneopts = "UT"
revision = "fc8bd948cf46f9c7af0f07d34151ce25fe90e477"
[[projects]]
digest = "1:0c56024909189aee3364b7f21a95a27459f718aa7c199a5c111c36cfffd9eaef"
name = "golang.org/x/text"
packages = [
"collate",
@@ -299,30 +433,39 @@
"unicode/bidi",
"unicode/cldr",
"unicode/norm",
"unicode/rangetable"
"unicode/rangetable",
"width",
]
pruneopts = "UT"
revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
version = "v0.3.0"
[[projects]]
branch = "master"
digest = "1:c9e7a4b4d47c0ed205d257648b0e5b0440880cb728506e318f8ac7cd36270bc4"
name = "golang.org/x/time"
packages = ["rate"]
pruneopts = "UT"
revision = "fbb02b2291d28baffd63558aa44b4b56f178d650"
[[projects]]
digest = "1:c25289f43ac4a68d88b02245742347c94f1e108c534dda442188015ff80669b3"
name = "google.golang.org/appengine"
packages = ["cloudsql"]
pruneopts = "UT"
revision = "b1f26356af11148e710935ed1ac8a7f5702c7612"
version = "v1.1.0"
[[projects]]
branch = "master"
digest = "1:601e63e7d4577f907118bec825902505291918859d223bce015539e79f1160e3"
name = "google.golang.org/genproto"
packages = ["googleapis/rpc/status"]
pruneopts = "UT"
revision = "32ee49c4dd805befd833990acba36cb75042378c"
[[projects]]
digest = "1:3a98314fd2e43bbd905b33125dad80b10111ba6e5e541db8ed2a953fe01fbb31"
name = "google.golang.org/grpc"
packages = [
".",
@@ -350,30 +493,38 @@
"stats",
"status",
"tap",
"transport"
"transport",
]
pruneopts = "UT"
revision = "168a6198bcb0ef175f7dacec0b8691fc141dc9b8"
version = "v1.13.0"
[[projects]]
digest = "1:7a23929a5a0d4266c8f5444dae1e7594dbb0cae1c3091834119b162f81e229ff"
name = "gopkg.in/igm/sockjs-go.v2"
packages = ["sockjs"]
pruneopts = "UT"
revision = "d276e9ffe5cc5c271b81198cc77a2adf6c4482d2"
version = "v2.0.0"
[[projects]]
digest = "1:2d1fbdc6777e5408cabeb02bf336305e724b925ff4546ded0fa8715a7267922a"
name = "gopkg.in/inf.v0"
packages = ["."]
pruneopts = "UT"
revision = "d2d2541c53f18d2a059457998ce2876cc8e67cbf"
version = "v0.9.1"
[[projects]]
digest = "1:342378ac4dcb378a5448dd723f0784ae519383532f5e70ade24132c4c8693202"
name = "gopkg.in/yaml.v2"
packages = ["."]
pruneopts = "UT"
revision = "5420a8b6744d3b0345ab293f6fcba19c978f1183"
version = "v2.2.1"
[[projects]]
digest = "1:cae8f1d1d786aa486a7ed236a8c1f099b3b44697ec6bbb5951d7e9bdb53a5125"
name = "k8s.io/api"
packages = [
"admissionregistration/v1alpha1",
@@ -403,12 +554,14 @@
"settings/v1alpha1",
"storage/v1",
"storage/v1alpha1",
"storage/v1beta1"
"storage/v1beta1",
]
pruneopts = "UT"
revision = "73d903622b7391f3312dcbac6483fed484e185f8"
version = "kubernetes-1.10.0"
[[projects]]
digest = "1:d0089d5f7811ded4279da7a8a66d2721488afec8208d86bdad3f4a20d3687e81"
name = "k8s.io/apimachinery"
packages = [
"pkg/api/errors",
@@ -453,12 +606,14 @@
"pkg/version",
"pkg/watch",
"third_party/forked/golang/netutil",
"third_party/forked/golang/reflect"
"third_party/forked/golang/reflect",
]
pruneopts = "UT"
revision = "302974c03f7e50f16561ba237db776ab93594ef6"
version = "kubernetes-1.10.0"
[[projects]]
digest = "1:7ee72261d268f7443085aad95b39fefc17fca826a9bfd8bd2d431bc081852a62"
name = "k8s.io/client-go"
packages = [
"discovery",
@@ -580,20 +735,78 @@
"util/flowcontrol",
"util/homedir",
"util/integer",
"util/retry"
"util/retry",
]
pruneopts = "UT"
revision = "23781f4d6632d88e869066eaebb743857aa1ef9b"
version = "v7.0.0"
[[projects]]
digest = "1:2bdbea32607f4effd9e91dadd90baab1ecf224839b613bcaa8f50db5a5f133f5"
name = "k8s.io/kubernetes"
packages = ["pkg/util/slice"]
packages = [
"pkg/apis/core",
"pkg/util/slice",
"pkg/util/version",
]
pruneopts = "UT"
revision = "5ca598b4ba5abb89bb773071ce452e33fb66339d"
version = "v1.10.4"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "aee0cc75f6ebd8678991b74319dba7fc523e5652286a5b790a53595c1ae09802"
input-imports = [
"github.com/antonholmquist/jason",
"github.com/coreos/etcd/clientv3",
"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes",
"github.com/coreos/etcd/pkg/transport",
"github.com/docker/docker/api/types",
"github.com/docker/docker/client",
"github.com/emicklei/go-restful",
"github.com/emicklei/go-restful-openapi",
"github.com/go-openapi/spec",
"github.com/go-sql-driver/mysql",
"github.com/golang/glog",
"github.com/jinzhu/gorm",
"github.com/pkg/errors",
"github.com/spf13/pflag",
"gopkg.in/igm/sockjs-go.v2/sockjs",
"gopkg.in/yaml.v2",
"k8s.io/api/apps/v1",
"k8s.io/api/apps/v1beta2",
"k8s.io/api/autoscaling/v1",
"k8s.io/api/batch/v1",
"k8s.io/api/batch/v1beta1",
"k8s.io/api/core/v1",
"k8s.io/api/extensions/v1beta1",
"k8s.io/api/policy/v1beta1",
"k8s.io/api/rbac/v1",
"k8s.io/api/storage/v1",
"k8s.io/apimachinery/pkg/api/errors",
"k8s.io/apimachinery/pkg/api/resource",
"k8s.io/apimachinery/pkg/apis/meta/v1",
"k8s.io/apimachinery/pkg/labels",
"k8s.io/apimachinery/pkg/types",
"k8s.io/apimachinery/pkg/util/sets",
"k8s.io/apimachinery/pkg/util/wait",
"k8s.io/client-go/informers",
"k8s.io/client-go/kubernetes",
"k8s.io/client-go/kubernetes/scheme",
"k8s.io/client-go/listers/apps/v1",
"k8s.io/client-go/listers/batch/v1",
"k8s.io/client-go/listers/batch/v1beta1",
"k8s.io/client-go/listers/core/v1",
"k8s.io/client-go/listers/extensions/v1beta1",
"k8s.io/client-go/listers/rbac/v1",
"k8s.io/client-go/listers/storage/v1",
"k8s.io/client-go/rest",
"k8s.io/client-go/tools/cache",
"k8s.io/client-go/tools/clientcmd",
"k8s.io/client-go/tools/remotecommand",
"k8s.io/kubernetes/pkg/apis/core",
"k8s.io/kubernetes/pkg/util/slice",
"k8s.io/kubernetes/pkg/util/version",
]
solver-name = "gps-cdcl"
solver-version = 1

README.md

@@ -8,20 +8,19 @@
**Features:**
- Multiple IaaS platform support, including baremetal/KVM/QingCloud, and more will be supported in future release.
- Easy setup of Kubernetes standalone(only one master node) and cluster environment(including High Availability support).
- Powerful management console to help business users to manage and monitor the Kubernetes environment.
- Powerful management console to help business users manage and monitor Kubernetes.
- Integrate with [OpenPitrix](https://github.com/openpitrix) to provide full life-cycle application management, compatible with Helm packages.
- Support popular open source network solutions, including Calico and Flannel; the [qingcloud hostnic solution](https://github.com/yunify/hostnic-cni) can also be used if Kubernetes is deployed on QingCloud platform.
- Support popular open source storage solutions, including GlusterFS and CephFS; the [qingcloud storage solution](https://github.com/yunify/qingcloud-volume-provisioner) can also be used if Kubernetes is deployed on QingCloud platform.
- Support popular open source storage solutions, including GlusterFS and CephFS; the [qingcloud storage solution](https://github.com/yunify/qingcloud-csi) or [qingstor storage solution](https://github.com/yunify/qingstor-csi) can also be used if Kubernetes is deployed on QingCloud platform or QingStor NeonSAN.
- CI/CD support.
- Service Mesh support.
- Multiple image registries support.
- Federation support.
- Integrate with QingCloud IAM.
----
## Motivation
The project originates from the requirements and pains we heard from our customers on the public and private QingCloud platforms, who have a strong will to deploy Kubernetes in their IT systems but struggle with the complicated setup process and long learning curve. With the help of KubeSphere, their IT operators can set up a Kubernetes environment quickly and use an easy management UI to manage their applications.
The project originates from the requirements and pains we heard from our customers on the public and private QingCloud platforms, who have a strong will to deploy Kubernetes in their IT systems but struggle with the complicated setup process and long learning curve. With the help of KubeSphere, their IT operators can set up a Kubernetes environment quickly and use an easy management UI to manage their applications. KubeSphere also provides more features to help customers handle daily business more easily, including CI/CD, microservices management, etc.
Getting Started
---------------
@@ -31,8 +30,8 @@ Getting Started
## Contributing to the project
All [members](docs/members.md) of the KubeSphere community must abide by [Code of Conduct](code-of-conduct.md). Only by respecting each other can we develop a productive, collaborative community.
All members of the KubeSphere community must abide by [Code of Conduct](docs/code-of-conduct.md). Only by respecting each other can we develop a productive, collaborative community.
You can then check out how to [setup for development](docs/development.md).
You can then find out more details [here](docs/welcome-toKubeSphere-new-developer-guide.md).


@@ -1,51 +0,0 @@
# OpenPitrix Developer Guide
The developer guide is for anyone wanting to either write code which directly accesses the
OpenPitrix API, or to contribute directly to the OpenPitrix project.
## The process of developing and contributing code to the OpenPitrix project
* **Welcome to OpenPitrix (New Developer Guide)**
([welcome-to-OpenPitrix-new-developer-guide.md](welcome-to-OpenPitrix-new-developer-guide.md)):
An introductory guide to contributing to OpenPitrix.
* **On Collaborative Development** ([collab.md](collab.md)): Info on pull requests and code reviews.
* **GitHub Issues** ([issues.md](issues.md)): How incoming issues are triaged.
* **Pull Request Process** ([pull-requests.md](pull-requests.md)): When and why pull requests are closed.
* **Getting Recent Builds** ([getting-builds.md](getting-builds.md)): How to get recent builds including the latest builds that pass CI.
* **Automated Tools** ([automation.md](automation.md)): Descriptions of the automation that is running on our GitHub repository.
## Setting up your dev environment, coding, and debugging
* **Development Guide** ([development.md](development.md)): Setting up your development environment.
* **Testing** ([testing.md](testing.md)): How to run unit, integration, and end-to-end tests in your development sandbox.
* **Hunting flaky tests** ([flaky-tests.md](flaky-tests.md)): We have a goal of 99.9% flake free tests.
Here's how to run your tests many times.
* **Logging Conventions** ([logging.md](logging.md)): Glog levels.
* **Coding Conventions** ([coding-conventions.md](coding-conventions.md)):
Coding style advice for contributors.
* **Document Conventions** ([how-to-doc.md](how-to-doc.md))
Document style advice for contributors.
* **Running a cluster locally** ([running-locally.md](running-locally.md)):
A fast and lightweight local cluster deployment for development.
## Developing against the OpenPitrix API
* The [REST API documentation](http://openpitrix.io/docs/reference/) explains the REST
API exposed by apiserver.
## Building releases
See the [openpitrix/release](https://github.com/kubernetes/release) repository for details on creating releases and related tools and helper scripts.


@@ -1,74 +0,0 @@
# Developing for KubeSphere
The [community repository](https://github.com/kubesphere) hosts all information about
building KubeSphere from source, how to contribute code and documentation, who to contact about what, etc. If you find a requirement that this doc does not capture, or if you find other docs with references to requirements that are not simply links to this doc, please [submit an issue](https://github.com/kubesphere/kubesphere/issues/new).
----
## To start developing KubeSphere
First of all, you should fork the project. Then follow one of the three options below to develop the project. Please note that when using __go get__ or __git clone__ below, you should replace the official repo with your own.
### 1. You have a working [Docker Compose](https://docs.docker.com/compose/install) environment [recommended].
>You need to install [Docker](https://docs.docker.com/engine/installation/) first.
```shell
$ git clone https://github.com/kubesphere/kubesphere
$ cd kubesphere
$ make build
$ make compose-up
```
Exit docker runtime environment
```shell
$ make compose-down
```
### 2. You have a working [Docker](https://docs.docker.com/engine/installation/) environment.
Exit docker runtime environment
```shell
$ docker stop $(docker ps -f name=kubesphere -q)
```
### 3. You have a working [Go](prereqs.md#setting-up-go) environment.
- Install [protoc compiler](https://github.com/google/protobuf/releases/)
- Install protoc plugin:
```shell
$ go get github.com/golang/protobuf/protoc-gen-go
$ go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-grpc-gateway
$ go get github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger
$ go get github.com/mwitkow/go-proto-validators/protoc-gen-govalidators
```
- Get kubesphere source code and build service:
```shell
$ go get -d kubesphere.io/kubesphere
$ cd $GOPATH/src/kubesphere.io/kubesphere
$ make generate
$ GOBIN=`pwd`/bin go install ./cmd/...
```
- Start KubeSphere service:
- Exit go runtime environment
```shell
$ ps aux | grep kubesphere- | grep -v grep | awk '{print $2}' | xargs kill -9
```
----
## Test KubeSphere
Visit http://127.0.0.1:9100/swagger-ui in a browser and try it online, or test the kubesphere api service via the command line:
----
## DevOps
Please check [How to set up DevOps environment](devops.md).


@@ -1,79 +0,0 @@
# Set Up DevOps Environment
Using DevOps is recommended for this project. Please follow the instructions below to set up your environment. We use Jenkins with the Blue Ocean plugin, deploy it on Kubernetes, and continuously deploy KubeSphere on the Kubernetes cluster.
----
- [Create Kubernetes Cluster](#create-kubernetes-cluster)
- [Deploy Jenkins](#deploy-jenkins)
- [Configure Jenkins](#configure-jenkins)
- [Create a Pipeline](#create-a-pipeline)
## Create Kubernetes Cluster
We are using [Kubernetes on QingCloud](https://appcenter.qingcloud.com/apps/app-u0llx5j8) to create a kubernetes production environment with one click. Please follow the [instructions](https://appcenter-docs.qingcloud.com/user-guide/apps/docs/kubernetes/) to create your own cluster. Access the Kubernetes client using one of the following options.
- **Open VPN**<a id="openvpn"></a>: Go to the left navigation tree of the [QingCloud console](https://console.qingcloud.com), choose _Networks & CDN_, then _VPC Networks_; on the content of the VPC page, choose _Management Configuration_, _VPN Service_, then you will find _Open VPN_ service. Here is the [screenshot](images/openvpn.png) of the page.
- **Port Forwarding**<a id="port-forwarding"></a>: same as Open VPN, but choose _Port Forwarding_ on the content of the VPC page instead of VPN Service; then add a rule to forward a source port to the SSH port of the kubernetes client, for instance, forward 10007 to 22 of the kubernetes client with the private IP 192.168.100.7. After that, you need to open the firewall to make port 10007 accessible from outside. Please click the _Security Group_ ID on the same VPC page, and add the corresponding downstream rule to the firewall.
- **VNC**: If you don't want to access the client node remotely, just go to the kubernetes cluster detail page on the [QingCloud console](https://console.qingcloud.com), and click the window icon beside the client ID, shown in the [screenshot](images/kubernets.png) (user/password: root/k8s). This way is not recommended; however, you can check kubernetes quickly using VNC since you don't need to configure anything.
## Deploy Jenkins
1. Copy the [yaml file](../devops/kubernetes/jenkins-qingcloud.yaml) to the kubernetes client, and deploy
```
# kubectl apply -f jenkins-qingcloud.yaml
```
2. Access the Jenkins console by opening http://\<ip\>:9200, where ip depends on how you expose the Jenkins service to the outside, as explained below. (You can find your own way to access the Jenkins console, such as ingress, cloud LB, etc.) On the kubernetes client
```
# iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 9200 -j DNAT --to-destination "$(kubectl get svc -n jenkins --selector=app=jenkins -o jsonpath='{.items..spec.clusterIP}')":9200
# iptables -t nat -A POSTROUTING -p tcp --dport 9200 -j MASQUERADE
# sysctl -w net.ipv4.conf.eth0.route_localnet=1
```
3. Now the request to the kubernetes client port 9200 will be forwarded to the Jenkins service.
- If you use [Open VPN](#openvpn) to access the kubernetes client, then open http://\<kubernetes client private ip\>:9200 to access Jenkins console.
- If you use [Port Forwarding](#port-forwarding) to access the client, then forward the VPC port 9200 to the kubernetes client port 9200. Now open http://\<VPC EIP\>:9200 to access Jenkins console.
## Configure Jenkins
> You can refer to [jenkins.io](https://jenkins.io/doc/tutorials/using-jenkins-to-build-a-java-maven-project/) for how to configure Jenkins and create a pipeline.
1. Unlock Jenkins
- Get the Administrator password from the log on the kubernetes client
```
# kubectl logs "$(kubectl get pods -n jenkins --selector=app=jenkins -o jsonpath='{.items..metadata.name}')" -c jenkins -n jenkins
```
- Go to Jenkins console, paste the password and continue. Install suggested plugins, then create the first admin user and save & finish.
2. Configure Jenkins
We will deploy the KubeSphere application into the same Kubernetes cluster as the one Jenkins is running on. So we need to configure the Jenkins pod to access the Kubernetes cluster, and to log in to the docker registry, given that during the [Jenkins pipeline](#create-a-pipeline) we push the KubeSphere image into a registry which you can change on your own.
On the Kubernetes client, execute the following to log in to the Jenkins container.
```
# kubectl exec -it "$(kubectl get pods -n jenkins --selector=app=jenkins -o jsonpath='{.items..metadata.name}')" -c jenkins -n jenkins -- /bin/bash
```
After logging in to the Jenkins container, run the following to log in to the docker registry and prepare a folder to hold the kubectl configuration.
```
bash-4.3# docker login -u xxx -p xxxx
bash-4.3# mkdir /root/.kube
bash-4.3# exit
```
Once back on the Kubernetes client, run the following to copy the kubectl tool and its configuration from the client to the Jenkins container.
```
# kubectl cp /usr/bin/kubectl jenkins/"$(kubectl get pods -n jenkins --selector=app=jenkins -o jsonpath='{.items..metadata.name}')":/usr/bin/kubectl -c jenkins
# kubectl cp /root/.kube/config jenkins/"$(kubectl get pods -n jenkins --selector=app=jenkins -o jsonpath='{.items..metadata.name}')":/root/.kube/config -c jenkins
```
## Create a pipeline
- Fork KubeSphere from GitHub for your development. You need to change the docker repository to your own in the files [kubesphere.yaml](devops/kubernetes/kubesphere.yaml), [build-images.sh](devops/scripts/build-images.sh), [push-images.sh](devops/scripts/push-images.sh) and [clean.sh](devops/scripts/clean.sh).
- On the Jenkins panel, click _Open Blue Ocean_ and start to create a new pipeline. Choose _GitHub_, paste your GitHub access key, and select the repository for which you want to create a CI/CD pipeline. We already created the pipeline Jenkinsfile on the upstream repository, which includes compiling KubeSphere, building images, pushing images, deploying the application, verifying the application and cleaning up.
- It is better to configure one more thing. On the Jenkins panel, go to the configuration of KubeSphere, check _Periodically if not otherwise run_ under _Scan Repository Triggers_ and select the interval you want.
- If your repository is an upstream, you can select _Discover pull requests from forks_ under _Behaviors_ so that the pipeline will work for PRs before they are merged.
- Now it is good to go. Whenever you commit a change to your forked repository, the pipeline will work during the Jenkins trigger interval.
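The stages the upstream Jenkinsfile covers can be sketched as plain shell steps. This is only an illustrative outline: the script paths come from the devops directories mentioned above, while the verification label is an assumption; the Jenkinsfile in the upstream repository is authoritative.

```shell
# Rough outline of the pipeline stages described above (illustrative only).
make                                                 # compile KubeSphere
./devops/scripts/build-images.sh                     # build docker images
./devops/scripts/push-images.sh                      # push images to your registry
kubectl apply -f devops/kubernetes/kubesphere.yaml   # deploy the application
kubectl get pods -l app=kubesphere                   # verify the application (label assumed)
./devops/scripts/clean.sh                            # clean up
```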


@@ -1,12 +0,0 @@
# Contributors
## Component and Member List
| Name | Leads |
|------|-------|
| Deployment | (yunify) |
| Service | (yunify) |
| Application | (yunify) |
| Cluster | Jeff (yunify) |
| App Runtime | (yunify) |
| Documents | |


@@ -1,22 +0,0 @@
## QuickStart
KubeSphere uses the same app-manager module as [OpenPitrix](https://github.com/openpitrix/openpitrix), which is another open source project initiated by QingCloud.
For testing and development purposes, follow the steps below to set up the app-manager service locally:
* Make sure the git and docker runtimes are installed in your local environment
* Clone the OpenPitrix project to your local environment:
```console
git clone https://github.com/openpitrix/openpitrix.git
```
* Enter the openpitrix directory and run the commands below:
```console
cd openpitrix
make build
make compose-up-app
```
## Test app-manager
Visit http://127.0.0.1:9100/swagger-ui in a browser and try it online, or test the app-manager api service via the command line:
```shell
$ curl http://localhost:9100/v1/apps
{"total_items":0,"total_pages":0,"page_size":10,"current_page":1}
```


@@ -12,7 +12,6 @@ branch, but release branches should not change.
- [Prerequisites](#prerequisites)
- [Setting up Go](#setting-up-go)
- [Setting up Swagger](#setting-up-swagger)
- [To start developing KubeSphere](#to-start-developing-kubesphere)
- [DevOps](#devops)
@@ -37,20 +36,6 @@ $ export GOPATH=~/go
$ export PATH=$PATH:$GOPATH/bin
```
### Setting up Swagger
KubeSphere is using [OpenAPI/Swagger](https://swagger.io) to develop its API, so follow
[the instructions](https://github.com/go-swagger/go-swagger/tree/master/docs) to
install Swagger. If you are not familiar with Swagger, please read the
[tutorial](http://apihandyman.io/writing-openapi-swagger-specification-tutorial-part-1-introduction/#writing-openapi-fka-swagger-specification-tutorial). If you install Swagger using the docker distribution,
please run
```shell
$ docker pull quay.io/goswagger/swagger
$ alias swagger="docker run --rm -it -e GOPATH=$GOPATH:/go -v $HOME:$HOME -w $(pwd) quay.io/goswagger/swagger"
$ swagger version
```
## To start developing KubeSphere
There are two options to get KubeSphere source code and build the project:
@@ -70,7 +55,3 @@ $ git clone https://github.com/kubesphere/kubesphere
$ cd kubesphere
$ make
```
## DevOps
Please check [How to set up DevOps environment](devops.md)


@@ -4,7 +4,6 @@ This doc explains the process and best practices for submitting a PR to the [Kub
- [Before You Submit a PR](#before-you-submit-a-pr)
* [Run Local Verifications](#run-local-verifications)
* [Sign the CLA](#sign-the-cla)
- [The PR Submit Process](#the-pr-submit-process)
* [Write Release Notes if Needed](#write-release-notes-if-needed)
* [The Testing and Merge Workflow](#the-testing-and-merge-workflow)
@@ -38,22 +37,10 @@ This guide is for contributors who already have a PR to submit. If you're lookin
You can run these local verifications before you submit your PR to predict the
pass or fail of continuous integration.
* Run and pass `make verify` (can take 30-40 minutes)
* Run and pass `make test`
* Run and pass `make test-integration`
## Sign the CLA
You must sign the CLA before your first contribution. [Read more about the CLA.](https://github.com/kubesphere/kubesphere/docs/CLA.md)
If you haven't signed the Contributor License Agreement (CLA) before making a PR,
the `@o8x-ci-robot` will leave a comment with instructions on how to sign the CLA.
# The PR Submit Process
Merging a PR requires the following steps to be completed before the PR will be merged automatically. For details about each step, see the [The Testing and Merge Workflow](#the-testing-and-merge-workflow) section below.
- Sign the CLA (prerequisite)
- Make the PR
- Release notes - do one of the following:
- Add notes in the release notes block, or
@@ -152,15 +139,15 @@ If you want to solicit reviews before the implementation of your pull request is
The GitHub robots will add and remove the `do-not-merge/hold` label as you use the comment commands and the `do-not-merge/work-in-progress` label as you edit your title. While either label is present, your pull request will not be considered for merging.
## Comment Commands Reference
## Comment Commands Reference//TODO
[The commands doc]() contains a reference for all comment commands. //TODO
## Automation
## Automation//TODO
The KubeSphere developer community uses a variety of automation to manage pull requests. This automation is described in detail [in the automation doc](automation.md). //TODO
## How the Tests Work
## How the Tests Work//TODO
The tests will post the status results to the PR. If an e2e test fails,
`@o8x-ci-robot` will comment on the PR with the test history and the
@@ -212,7 +199,7 @@ Let's talk about best practices so your PR gets reviewed quickly.
## 0. Familiarize yourself with project conventions
* [Development guide](development.md)
* [Development guide](code-of-conduct.md)
## 1. Is the feature wanted? Make a Design Doc or Sketch PR
@@ -220,7 +207,7 @@ Are you sure Feature-X is something the KubeSphere team wants or will accept? Is
It's better to get confirmation beforehand. There are two ways to do this:
- Make a proposal doc (in docs/proposals; for example [the QoS proposal](), or reach out to the affected special interest group (SIG). Here's a [list of SIGs](https://github.com/KubeSphere/KubeSphere/docs/sig-list.md)
- Make a proposal doc (in docs/proposals; for example [the QoS proposal]()
- Coordinate your effort with [SIG Docs]() ahead of time. //TODO
- Make a sketch PR (e.g., just the API or Go interface). Write or code up just enough to express the idea and the design and why you made those choices


@@ -2,57 +2,30 @@
Welcome to KubeSphere! (New Developer Guide)
============================================
_This document assumes that you know what KubeSphere does. If you don't,
try the demo at [https://o8x.io/](https://kubesphere.io/)._
_This document assumes that you know what KubeSphere does._
Introduction
------------
Have you ever wanted to contribute to the coolest cloud technology? This
This
document will help you understand the organization of the KubeSphere project and
direct you to the best places to get started. By the end of this doc, you'll be
able to pick up issues, write code to fix them, and get your work reviewed and
merged.
If you have questions about the development process, feel free to jump into our
[Slack workspace](http://KubeSphere.slack.com/) or join our [mailing
list](https://groups.google.com/forum/#!forum/KubeSphere-dev). If you join the
[Slack workspace](http://KubeSphere.slack.com/). If you join the
Slack workspace it is recommended to set your Slack display name to your GitHub
account handle.
Special Interest Groups
-----------------------
KubeSphere developers work in teams called Special Interest Groups (SIGs). At
the time of this writing there are [2 SIGs](sig-list.md).
The developers within each SIG have autonomy and ownership over that SIG's part
of KubeSphere. SIGs organize themselves by meeting regularly and submitting
markdown design documents to the
[KubeSphere/community](https://github.com/KubeSphere/community) repository.
Like everything else in KubeSphere, a SIG is an open, community effort. Anybody
is welcome to jump into a SIG and begin fixing issues, critiquing design
proposals and reviewing code.
Most people who visit the KubeSphere repository for the first time are
bewildered by the thousands of [open
issues](https://github.com/KubeSphere/KubeSphere/issues) in our main repository.
But now that you know about SIGs, it's easy to filter by labels to see what's
going on in a particular SIG. For more information about our issue system, check
out
[issues.md](https://github.com/KubeSphere/community/blob/master/contributors/devel/issues.md).
//TODO
Downloading, Building, and Testing KubeSphere
---------------------------------------------
This guide is non-technical, so it does not cover the technical details of
working on KubeSphere. We have plenty of documentation available under
[github.com/KubeSphere/KubeSphere/docs/](https://github.com/KubeSphere/KubeSphere/docs/).
Check out
[development.md](https://github.com/KubeSphere/KubeSphere/docs/development.md)
for more details.
[docs.kubesphere.io](https://docs.kubesphere.io).
Pull-Request Process
--------------------
@@ -61,21 +34,4 @@ The pull-request process is documented in [pull-requests.md](pull-requests.md).
As described in that document, you must sign the CLA before
KubeSphere can accept your contribution.
The Release Process and Code Freeze
-----------------------------------
Every so often @o8x-merge-robot will refuse to merge your PR, saying something
about release milestones. This happens when we are in a code freeze for a
release. In order to ensure KubeSphere is stable, we stop merging everything
that's not a bugfix, then focus on making all the release tests pass. This code
freeze usually lasts two weeks and happens once per quarter.
If you're new to KubeSphere, you won't have to worry about this too much. After
you've contributed for a few months, you will be added as a [community
member](https://github.com/KubeSphere/KubeSphere/docs/membership.md)
and take ownership of some of the tests. At this point, you'll work with members
of your SIG to review PRs coming into your area and track down issues that occur
in tests.
Thanks for reading!



@@ -0,0 +1,60 @@
<!-- HTML for static distribution bundle build -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Swagger UI</title>
<link rel="stylesheet" type="text/css" href="./swagger-ui.css" >
<link rel="icon" type="image/png" href="./favicon-32x32.png" sizes="32x32" />
<link rel="icon" type="image/png" href="./favicon-16x16.png" sizes="16x16" />
<style>
html
{
box-sizing: border-box;
overflow: -moz-scrollbars-vertical;
overflow-y: scroll;
}
*,
*:before,
*:after
{
box-sizing: inherit;
}
body
{
margin:0;
background: #fafafa;
}
</style>
</head>
<body>
<div id="swagger-ui"></div>
<script src="./swagger-ui-bundle.js"> </script>
<script src="./swagger-ui-standalone-preset.js"> </script>
<script>
window.onload = function() {
// Build a system
const ui = SwaggerUIBundle({
url: "/swagger-ui/api.json",
dom_id: '#swagger-ui',
deepLinking: true,
presets: [
SwaggerUIBundle.presets.apis,
SwaggerUIStandalonePreset
],
plugins: [
SwaggerUIBundle.plugins.DownloadUrl
],
layout: "StandaloneLayout"
})
window.ui = ui
}
</script>
</body>
</html>


@@ -0,0 +1,67 @@
<!doctype html>
<html lang="en-US">
<body onload="run()">
</body>
</html>
<script>
'use strict';
function run () {
var oauth2 = window.opener.swaggerUIRedirectOauth2;
var sentState = oauth2.state;
var redirectUrl = oauth2.redirectUrl;
var isValid, qp, arr;
if (/code|token|error/.test(window.location.hash)) {
qp = window.location.hash.substring(1);
} else {
qp = location.search.substring(1);
}
arr = qp.split("&")
arr.forEach(function (v,i,_arr) { _arr[i] = '"' + v.replace('=', '":"') + '"';})
qp = qp ? JSON.parse('{' + arr.join() + '}',
function (key, value) {
return key === "" ? value : decodeURIComponent(value)
}
) : {}
isValid = qp.state === sentState
if ((
oauth2.auth.schema.get("flow") === "accessCode"||
oauth2.auth.schema.get("flow") === "authorizationCode"
) && !oauth2.auth.code) {
if (!isValid) {
oauth2.errCb({
authId: oauth2.auth.name,
source: "auth",
level: "warning",
message: "Authorization may be unsafe, passed state was changed in server Passed state wasn't returned from auth server"
});
}
if (qp.code) {
delete oauth2.state;
oauth2.auth.code = qp.code;
oauth2.callback({auth: oauth2.auth, redirectUrl: redirectUrl});
} else {
let oauthErrorMsg
if (qp.error) {
oauthErrorMsg = "["+qp.error+"]: " +
(qp.error_description ? qp.error_description+ ". " : "no accessCode received from the server. ") +
(qp.error_uri ? "More info: "+qp.error_uri : "");
}
oauth2.errCb({
authId: oauth2.auth.name,
source: "auth",
level: "error",
message: oauthErrorMsg || "[Authorization failed]: no accessCode received from the server"
});
}
} else {
oauth2.callback({auth: oauth2.auth, token: qp, isValid: isValid, redirectUrl: redirectUrl});
}
window.close();
}
</script>



@@ -0,0 +1 @@
{"version":3,"sources":[],"names":[],"mappings":"","file":"swagger-ui.css","sourceRoot":""}



@@ -19,6 +19,8 @@ package components
import (
"net/http"
"github.com/golang/glog"
"github.com/emicklei/go-restful"
"kubesphere.io/kubesphere/pkg/constants"
@@ -30,22 +32,56 @@ func Register(ws *restful.WebService, subPath string) {
ws.Route(ws.GET(subPath).To(handleGetComponents).Filter(route.RouteLogging)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/{namespace}/{componentName}").To(handleGetComponentStatus).
Filter(route.RouteLogging)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET("/health").To(handleGetSystemHealthStatus).Filter(route.RouteLogging)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
}
// get system health status
func handleGetSystemHealthStatus(request *restful.Request, response *restful.Response) {
if status, err := models.GetSystemHealthStatus(); err != nil {
err = response.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
if err != nil {
glog.Errorln(err)
}
} else {
err = response.WriteAsJson(status)
if err != nil {
glog.Errorln(err)
}
}
}
// get a specific component status
func handleGetComponentStatus(request *restful.Request, response *restful.Response) {
namespace := request.PathParameter("namespace")
componentName := request.PathParameter("componentName")
if component, err := models.GetComponentStatus(namespace, componentName); err != nil {
err = response.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
if err != nil {
glog.Errorln(err)
}
} else {
if err = response.WriteAsJson(component); err != nil {
glog.Errorln(err)
}
}
}
// get all components
func handleGetComponents(request *restful.Request, response *restful.Response) {
result, err := models.GetComponents()
result, err := models.GetAllComponentsStatus()
if err != nil {
response.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
} else {
response.WriteAsJson(result)
}
}
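Once the routes above are registered, they can be exercised with curl. The host, port, and a subPath value of `/components` are assumptions for illustration; adjust them to your deployment.

```shell
# Probe the three routes registered in Register() above.
# Host, port, and subPath are assumed values, not part of the source.
curl http://127.0.0.1:9090/health                          # system health status
curl http://127.0.0.1:9090/components                      # all components
curl http://127.0.0.1:9090/components/kube-system/coredns  # one component's status
```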


@@ -0,0 +1,56 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package daemonsets
import (
"net/http"
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
)
func Register(ws *restful.WebService, subPath string) {
tags := []string{"daemonsets"}
ws.Route(ws.GET(subPath).To(getDaemonSetRevision).Consumes("*/*").Metadata(restfulspec.KeyOpenAPITags, tags).Doc("Handle daemonset" +
" operation").Param(ws.PathParameter("daemonset", "daemonset's name").DataType("string")).Param(ws.PathParameter("namespace",
"daemonset's namespace").DataType("string")).Param(ws.PathParameter("revision", "daemonset's revision")).Writes(v1.DaemonSet{}))
}
func getDaemonSetRevision(req *restful.Request, resp *restful.Response) {
daemonset := req.PathParameter("daemonset")
namespace := req.PathParameter("namespace")
revision := req.PathParameter("revision")
res, err := models.GetDaemonSetRevision(namespace, daemonset, revision)
if err != nil {
if errors.IsNotFound(err) {
resp.WriteHeaderAndEntity(http.StatusNotFound, constants.MessageResponse{Message: err.Error()})
} else {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
}
// return early so we don't also write the (empty) entity after an error response
return
}
resp.WriteEntity(res)
}


@@ -0,0 +1,56 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package deployments
import (
"net/http"
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
)
func Register(ws *restful.WebService, subPath string) {
tags := []string{"deployments"}
ws.Route(ws.GET(subPath).To(getDeployRevision).Consumes("*/*").Metadata(restfulspec.KeyOpenAPITags, tags).Doc("Handle deployment" +
" operation").Param(ws.PathParameter("deployment", "deployment's name").DataType("string")).Param(ws.PathParameter("namespace",
"deployment's namespace").DataType("string")).Param(ws.PathParameter("revision", "deployment's revision")).Writes(v1.ReplicaSet{}))
}
func getDeployRevision(req *restful.Request, resp *restful.Response) {
deploy := req.PathParameter("deployment")
namespace := req.PathParameter("namespace")
revision := req.PathParameter("revision")
res, err := models.GetDeployRevision(namespace, deploy, revision)
if err != nil {
if errors.IsNotFound(err) {
resp.WriteHeaderAndEntity(http.StatusNotFound, constants.MessageResponse{Message: err.Error()})
} else {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
}
return
}
resp.WriteEntity(res)
}

View File

@@ -0,0 +1,57 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package hpa
import (
"net/http"
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"k8s.io/api/autoscaling/v1"
"k8s.io/apimachinery/pkg/api/errors"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"kubesphere.io/kubesphere/pkg/client"
)
func Register(ws *restful.WebService, subPath string) {
tags := []string{"horizontalpodautoscalers"}
ws.Route(ws.GET(subPath).To(getHpa).Consumes("*/*").Metadata(restfulspec.KeyOpenAPITags, tags).Doc(
"get horizontalpodautoscalers").Param(ws.PathParameter("namespace",
"horizontalpodautoscalers's namespace").DataType("string")).Param(ws.PathParameter(
"horizontalpodautoscaler", "horizontalpodautoscaler's name")).Writes(v1.HorizontalPodAutoscaler{}))
}
func getHpa(req *restful.Request, resp *restful.Response) {
hpa := req.PathParameter("horizontalpodautoscaler")
namespace := req.PathParameter("namespace")
client := client.NewK8sClient()
res, err := client.AutoscalingV1().HorizontalPodAutoscalers(namespace).Get(hpa, metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
resp.WriteHeaderAndEntity(http.StatusOK, nil)
} else {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, nil)
}
return
}
resp.WriteEntity(res)
}

View File

@@ -63,7 +63,7 @@ func userRolesHandler(req *restful.Request, resp *restful.Response) {
username := req.PathParameter("username")
roles, err := iam.GetRoles(username)
roles, err := iam.GetRoles("", username)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
@@ -206,7 +206,7 @@ func clusterRoleRulesHandler(req *restful.Request, resp *restful.Response) {
var rules []iam.Rule
if name == "" {
rules = iam.ClusterRoleRuleGroup
rules = iam.ClusterRoleRuleMapping
} else {
var err error
rules, err = iam.GetClusterRoleRules(name)
@@ -227,7 +227,7 @@ func roleRulesHandler(req *restful.Request, resp *restful.Response) {
var rules []iam.Rule
if namespace == "" && name == "" {
rules = iam.RoleRuleGroup
rules = iam.RoleRuleMapping
} else {
var err error
rules, err = iam.GetRoleRules(namespace, name)

View File

@@ -21,18 +21,25 @@ import (
"kubesphere.io/kubesphere/pkg/apis/v1alpha/components"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/containers"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/daemonsets"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/deployments"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/hpa"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/iam"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/jobs"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/monitoring"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/nodes"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/pods"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/quota"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/registries"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/resources"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/routes"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/statefulsets"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/storage"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/terminal"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/users"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/volumes"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/workloadstatus"
"kubesphere.io/kubesphere/pkg/apis/v1alpha/workspaces"
_ "kubesphere.io/kubesphere/pkg/filter/container"
)
@@ -54,8 +61,14 @@ func init() {
terminal.Register(ws, "/namespaces/{namespace}/pod/{pod}/shell/{container}")
workloadstatus.Register(ws, "/status")
quota.Register(ws, "/quota")
hpa.Register(ws, "/namespaces/{namespace}/horizontalpodautoscalers/{horizontalpodautoscaler}")
jobs.Register(ws, "/namespaces/{namespace}/jobs/{job}")
deployments.Register(ws, "/namespaces/{namespace}/deployments/{deployment}/revisions/{revision}")
daemonsets.Register(ws, "/namespaces/{namespace}/daemonsets/{daemonset}/revisions/{revision}")
statefulsets.Register(ws, "/namespaces/{namespace}/statefulsets/{statefulset}/revisions/{revision}")
resources.Register(ws, "/resources")
monitoring.Register(ws, "/monitoring")
workspaces.Register(ws, "/workspaces")
// add webservice to default container
restful.Add(ws)

View File

@@ -0,0 +1,63 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package jobs
import (
"net/http"
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"fmt"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models/controllers"
)
func Register(ws *restful.WebService, subPath string) {
tags := []string{"jobs"}
ws.Route(ws.POST(subPath).To(handleJob).Consumes("*/*").Metadata(restfulspec.KeyOpenAPITags, tags).Doc("Handle job" +
" operation").Param(ws.PathParameter("job", "job name").DataType("string")).Param(ws.PathParameter("namespace",
"job's namespace").DataType("string")).Param(ws.QueryParameter("a",
"action").DataType("string")).Writes(""))
}
func handleJob(req *restful.Request, resp *restful.Response) {
var res interface{}
var err error
job := req.PathParameter("job")
namespace := req.PathParameter("namespace")
action := req.QueryParameter("a")
switch action {
case "rerun":
res, err = controllers.JobReRun(namespace, job)
default:
resp.WriteHeaderAndEntity(http.StatusForbidden, constants.MessageResponse{Message: fmt.Sprintf("invalid operation %s", action)})
return
}
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(res)
}
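handleJob validates the `a` query parameter up front and rejects anything but a known action before touching the cluster. A standalone sketch of that dispatch shape (jobReRun here is a hypothetical stand-in for controllers.JobReRun):

```go
package main

import "fmt"

// jobReRun is a stand-in for controllers.JobReRun.
func jobReRun(namespace, job string) (string, error) {
	return fmt.Sprintf("rerun %s/%s", namespace, job), nil
}

// handleAction mirrors handleJob's switch: known actions run,
// anything else is rejected with an "invalid operation" error
// (the handler maps that to HTTP 403).
func handleAction(action, namespace, job string) (string, error) {
	switch action {
	case "rerun":
		return jobReRun(namespace, job)
	default:
		return "", fmt.Errorf("invalid operation %s", action)
	}
}

func main() {
	out, _ := handleAction("rerun", "default", "backup")
	fmt.Println(out) // rerun default/backup
	_, err := handleAction("pause", "default", "backup")
	fmt.Println(err) // invalid operation pause
}
```

Rejecting unknown actions in the `default` branch keeps the endpoint closed by default as new actions are added.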

View File

@@ -0,0 +1,443 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package monitoring
import (
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/filter/route"
"kubesphere.io/kubesphere/pkg/models/metrics"
)
func (u Monitor) monitorPod(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
podName := requestParams.PodName
metricName := requestParams.MetricsName
if podName != "" {
// single pod single metric
queryType, params, nullRule := metrics.AssemblePodMetricRequestInfo(requestParams, metricName)
var res *metrics.FormatedMetric
if !nullRule {
res = metrics.GetMetric(queryType, params, metricName)
}
response.WriteAsJson(res)
} else {
// multiple
rawMetrics := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelPod)
// sorting
sortedMetrics, maxMetricCount := metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelPodName)
// paging
pagedMetrics := metrics.Page(requestParams.PageNum, requestParams.LimitNum, sortedMetrics, maxMetricCount)
response.WriteAsJson(pagedMetrics)
}
}
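Nearly every multi-result branch in this file chains metrics.Sort with metrics.Page. A self-contained sketch of the 1-based paging step (this page helper is illustrative; metrics.Page's real signature also carries the max metric count):

```go
package main

import (
	"fmt"
	"sort"
)

// page returns the pageNum-th slice (1-based) of at most limit
// items, matching the page/limit query parameters declared on the
// monitoring routes (page defaults to 1, limit to 4).
func page(items []string, pageNum, limit int) []string {
	start := (pageNum - 1) * limit
	if start < 0 || start >= len(items) {
		return nil
	}
	end := start + limit
	if end > len(items) {
		end = len(items)
	}
	return items[start:end]
}

func main() {
	pods := []string{"web-2", "web-1", "db-0"}
	sort.Strings(pods)            // sorting step
	fmt.Println(page(pods, 1, 2)) // [db-0 web-1]
	fmt.Println(page(pods, 2, 2)) // [web-2]
}
```

Sorting before paging matters: paging an unsorted result set would return a different slice on every query.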
func (u Monitor) monitorContainer(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
metricName := requestParams.MetricsName
if requestParams.MetricsFilter != "" {
rawMetrics := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelContainer)
// sorting
sortedMetrics, maxMetricCount := metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelContainerName)
// paging
pagedMetrics := metrics.Page(requestParams.PageNum, requestParams.LimitNum, sortedMetrics, maxMetricCount)
response.WriteAsJson(pagedMetrics)
} else {
res := metrics.MonitorContainer(requestParams, metricName)
response.WriteAsJson(res)
}
}
func (u Monitor) monitorWorkload(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
rawMetrics := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelWorkload)
var sortedMetrics *metrics.FormatedLevelMetric
var maxMetricCount int
wlKind := requestParams.WorkloadKind
// sorting
if wlKind == "" {
sortedMetrics, maxMetricCount = metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelWorkload)
} else {
sortedMetrics, maxMetricCount = metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelPodName)
}
// paging
pagedMetrics := metrics.Page(requestParams.PageNum, requestParams.LimitNum, sortedMetrics, maxMetricCount)
response.WriteAsJson(pagedMetrics)
}
func (u Monitor) monitorAllWorkspaces(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
tp := requestParams.Tp
if tp == "_statistics" {
// merge multiple metrics: all devops, all roles, all projects... this API is designed for admins
res := metrics.MonitorAllWorkspacesStatistics()
response.WriteAsJson(res)
} else if tp == "rank" {
rawMetrics := metrics.MonitorAllWorkspaces(requestParams)
// sorting
sortedMetrics, maxMetricCount := metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelWorkspace)
// paging
pagedMetrics := metrics.Page(requestParams.PageNum, requestParams.LimitNum, sortedMetrics, maxMetricCount)
response.WriteAsJson(pagedMetrics)
} else {
res := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelWorkspace)
response.WriteAsJson(res)
}
}
func (u Monitor) monitorOneWorkspace(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
tp := requestParams.Tp
if tp == "rank" {
// multiple
rawMetrics := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelWorkspace)
// sorting
sortedMetrics, maxMetricCount := metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelNamespace)
// paging
pagedMetrics := metrics.Page(requestParams.PageNum, requestParams.LimitNum, sortedMetrics, maxMetricCount)
response.WriteAsJson(pagedMetrics)
} else if tp == "_statistics" {
wsName := requestParams.WsName
// merge multiple metrics: devops, roles, projects...
res := metrics.MonitorOneWorkspaceStatistics(wsName)
response.WriteAsJson(res)
} else {
res := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelWorkspace)
response.WriteAsJson(res)
}
}
func (u Monitor) monitorNamespace(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
metricName := requestParams.MetricsName
nsName := requestParams.NsName
if nsName != "" {
// single
queryType, params := metrics.AssembleNamespaceMetricRequestInfo(requestParams, metricName)
res := metrics.GetMetric(queryType, params, metricName)
response.WriteAsJson(res)
} else {
// multiple
rawMetrics := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelNamespace)
// sorting
sortedMetrics, maxMetricCount := metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelNamespace)
// paging
pagedMetrics := metrics.Page(requestParams.PageNum, requestParams.LimitNum, sortedMetrics, maxMetricCount)
response.WriteAsJson(pagedMetrics)
}
}
func (u Monitor) monitorCluster(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
metricName := requestParams.MetricsName
if metricName != "" {
// single
queryType, params := metrics.AssembleClusterMetricRequestInfo(requestParams, metricName)
res := metrics.GetMetric(queryType, params, metricName)
response.WriteAsJson(res)
} else {
// multiple
res := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelCluster)
response.WriteAsJson(res)
}
}
func (u Monitor) monitorNode(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
metricName := requestParams.MetricsName
if metricName != "" {
// single
queryType, params := metrics.AssembleNodeMetricRequestInfo(requestParams, metricName)
res := metrics.GetMetric(queryType, params, metricName)
nodeAddress := metrics.GetNodeAddressInfo()
metrics.AddNodeAddressMetric(res, nodeAddress)
response.WriteAsJson(res)
} else {
// multiple
rawMetrics := metrics.MonitorAllMetrics(requestParams, metrics.MetricLevelNode)
nodeAddress := metrics.GetNodeAddressInfo()
for i := 0; i < len(rawMetrics.Results); i++ {
metrics.AddNodeAddressMetric(&rawMetrics.Results[i], nodeAddress)
}
// sorting
sortedMetrics, maxMetricCount := metrics.Sort(requestParams.SortMetricName, requestParams.SortType, rawMetrics, metrics.MetricLevelNode)
// paging
pagedMetrics := metrics.Page(requestParams.PageNum, requestParams.LimitNum, sortedMetrics, maxMetricCount)
response.WriteAsJson(pagedMetrics)
}
}
// k8s component(controller, scheduler, etcd) status
func (u Monitor) monitorComponentStatus(request *restful.Request, response *restful.Response) {
requestParams := client.ParseMonitoringRequestParams(request)
status := metrics.MonitorComponentStatus(requestParams)
response.WriteAsJson(status)
}
type Monitor struct {
}
func Register(ws *restful.WebService, subPath string) {
tags := []string{"monitoring apis"}
u := Monitor{}
ws.Route(ws.GET(subPath+"/clusters").To(u.monitorCluster).
Filter(route.RouteLogging).
Doc("monitor cluster level metrics").
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...in re2 regex").DataType("string").Required(false).DefaultValue("cluster_cpu_utilisation")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/nodes").To(u.monitorNode).
Filter(route.RouteLogging).
Doc("monitor nodes level metrics").
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...in re2 regex").DataType("string").Required(false).DefaultValue("node_cpu_utilisation")).
Param(ws.QueryParameter("nodes_filter", "node re2 expression filter").DataType("string").Required(false).DefaultValue("")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/nodes/{node_id}").To(u.monitorNode).
Filter(route.RouteLogging).
Doc("monitor specific node level metrics").
Param(ws.PathParameter("node_id", "specific node").DataType("string").Required(true).DefaultValue("")).
Param(ws.QueryParameter("metrics_name", "metrics name cpu memory...").DataType("string").Required(true).DefaultValue("node_cpu_utilisation")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces").To(u.monitorNamespace).
Filter(route.RouteLogging).
Doc("monitor namespaces level metrics").
Param(ws.QueryParameter("namespaces_filter", "namespaces re2 expression filter").DataType("string").Required(false).DefaultValue("")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...in re2 regex").DataType("string").Required(false).DefaultValue("namespace_memory_utilisation")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces/{ns_name}").To(u.monitorNamespace).
Filter(route.RouteLogging).
Doc("monitor specific namespace level metrics").
Param(ws.PathParameter("ns_name", "specific namespace").DataType("string").Required(true).DefaultValue("monitoring")).
Param(ws.QueryParameter("metrics_name", "metrics name cpu memory...").DataType("string").Required(true).DefaultValue("namespace_memory_utilisation")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces/{ns_name}/pods").To(u.monitorPod).
Filter(route.RouteLogging).
Doc("monitor pods level metrics").
Param(ws.PathParameter("ns_name", "specific namespace").DataType("string").Required(true).DefaultValue("monitoring")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...in re2 regex").DataType("string").Required(false).DefaultValue("pod_memory_utilisation_wo_cache")).
Param(ws.QueryParameter("pods_filter", "pod re2 expression filter").DataType("string").Required(false).DefaultValue("")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces/{ns_name}/pods/{pod_name}").To(u.monitorPod).
Filter(route.RouteLogging).
Doc("monitor specific pod level metrics").
Param(ws.PathParameter("ns_name", "specific namespace").DataType("string").Required(true).DefaultValue("monitoring")).
Param(ws.PathParameter("pod_name", "specific pod").DataType("string").Required(true).DefaultValue("")).
Param(ws.QueryParameter("metrics_name", "metrics name cpu memory...").DataType("string").Required(true).DefaultValue("pod_memory_utilisation_wo_cache")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/nodes/{node_id}/pods").To(u.monitorPod).
Filter(route.RouteLogging).
Doc("monitor pods level metrics by nodeid").
Param(ws.PathParameter("node_id", "specific node").DataType("string").Required(true).DefaultValue("i-k89a62il")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...in re2 regex").DataType("string").Required(false).DefaultValue("pod_memory_utilisation_wo_cache")).
Param(ws.QueryParameter("pods_filter", "pod re2 expression filter").DataType("string").Required(false).DefaultValue("openpitrix.*")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/nodes/{node_id}/pods/{pod_name}").To(u.monitorPod).
Filter(route.RouteLogging).
Doc("monitor specific pod level metrics by nodeid").
Param(ws.PathParameter("node_id", "specific node").DataType("string").Required(true).DefaultValue("i-k89a62il")).
Param(ws.PathParameter("pod_name", "specific pod").DataType("string").Required(true).DefaultValue("")).
Param(ws.QueryParameter("metrics_name", "metrics name cpu memory...").DataType("string").Required(true).DefaultValue("pod_memory_utilisation_wo_cache")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/nodes/{node_id}/pods/{pod_name}/containers").To(u.monitorContainer).
Filter(route.RouteLogging).
Doc("monitor containers level metrics by nodeid").
Param(ws.PathParameter("node_id", "specific node").DataType("string").Required(true)).
Param(ws.PathParameter("pod_name", "specific pod").DataType("string").Required(true)).
Param(ws.QueryParameter("containers_filter", "container re2 expression filter").DataType("string").Required(false).DefaultValue("")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...").DataType("string").Required(false)).
Param(ws.QueryParameter("metrics_name", "metrics name cpu memory...").DataType("string").Required(true).DefaultValue("pod_memory_utilisation_wo_cache")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Param(ws.QueryParameter("type", "rank, statistic").DataType("string").Required(false).DefaultValue("rank")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces/{ns_name}/pods/{pod_name}/containers").To(u.monitorContainer).
Filter(route.RouteLogging).
Doc("monitor containers level metrics").
Param(ws.PathParameter("ns_name", "specific namespace").DataType("string").Required(true).DefaultValue("monitoring")).
Param(ws.PathParameter("pod_name", "specific pod").DataType("string").Required(true).DefaultValue("")).
Param(ws.QueryParameter("containers_filter", "container re2 expression filter").DataType("string").Required(false).DefaultValue("")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...").DataType("string").Required(false)).
Param(ws.QueryParameter("metrics_name", "metrics name cpu memory...").DataType("string").Required(true).DefaultValue("container_memory_utilisation_wo_cache")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Param(ws.QueryParameter("type", "rank, statistic").DataType("string").Required(false).DefaultValue("rank")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces/{ns_name}/pods/{pod_name}/containers/{container_name}").To(u.monitorContainer).
Filter(route.RouteLogging).
Doc("monitor specific container level metrics").
Param(ws.PathParameter("ns_name", "specific namespace").DataType("string").Required(true).DefaultValue("monitoring")).
Param(ws.PathParameter("pod_name", "specific pod").DataType("string").Required(true).DefaultValue("")).
Param(ws.PathParameter("container_name", "specific container").DataType("string").Required(true).DefaultValue("")).
Param(ws.QueryParameter("metrics_name", "metrics name cpu memory...").DataType("string").Required(true).DefaultValue("container_memory_utilisation_wo_cache")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces/{ns_name}/workloads/{workload_kind}").To(u.monitorWorkload).
Filter(route.RouteLogging).
Doc("monitor specific workload level metrics").
Param(ws.PathParameter("ns_name", "namespace").DataType("string").Required(true).DefaultValue("kube-system")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...").DataType("string").Required(false)).
Param(ws.PathParameter("workload_kind", "workload kind").DataType("string").Required(false).DefaultValue("daemonset")).
Param(ws.QueryParameter("workload_name", "workload name").DataType("string").Required(true).DefaultValue("")).
Param(ws.QueryParameter("pods_filter", "pod re2 expression filter").DataType("string").Required(false).DefaultValue("openpitrix.*")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Param(ws.QueryParameter("type", "rank, statistic").DataType("string").Required(false).DefaultValue("rank")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/namespaces/{ns_name}/workloads").To(u.monitorWorkload).
Filter(route.RouteLogging).
Doc("monitor all workload level metrics").
Param(ws.PathParameter("ns_name", "namespace").DataType("string").Required(true).DefaultValue("kube-system")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...").DataType("string").Required(false)).
Param(ws.QueryParameter("workloads_filter", "workload re2 expression filter").DataType("string").Required(false).DefaultValue("")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Param(ws.QueryParameter("type", "rank, statistic").DataType("string").Required(false).DefaultValue("rank")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
// list all namespace in this workspace by selected metrics
ws.Route(ws.GET(subPath+"/workspaces/{workspace_name}").To(u.monitorOneWorkspace).
Filter(route.RouteLogging).
Doc("monitor workspaces level metrics").
Param(ws.PathParameter("workspace_name", "workspace name").DataType("string").Required(true)).
Param(ws.QueryParameter("namespaces_filter", "namespaces filter").DataType("string").Required(false).DefaultValue("k.*")).
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...in re2 regex").DataType("string").Required(false).DefaultValue("namespace_memory_utilisation_wo_cache")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Param(ws.QueryParameter("type", "rank, statistic").DataType("string").Required(false).DefaultValue("rank")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/workspaces").To(u.monitorAllWorkspaces).
Filter(route.RouteLogging).
Doc("monitor workspaces level metrics").
Param(ws.QueryParameter("metrics_filter", "metrics name cpu memory...in re2 regex").DataType("string").Required(false).DefaultValue("workspace_memory_utilisation")).
Param(ws.QueryParameter("workspaces_filter", "workspaces re2 expression filter").DataType("string").Required(false).DefaultValue(".*")).
Param(ws.QueryParameter("sort_metric", "sort metric").DataType("string").Required(false)).
Param(ws.QueryParameter("sort_type", "ascending descending order").DataType("string").Required(false)).
Param(ws.QueryParameter("page", "page number").DataType("string").Required(false).DefaultValue("1")).
Param(ws.QueryParameter("limit", "max metric items in a page").DataType("string").Required(false).DefaultValue("4")).
Param(ws.QueryParameter("type", "rank, statistic").DataType("string").Required(false).DefaultValue("rank")).
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath+"/components").To(u.monitorComponentStatus).
Filter(route.RouteLogging).
Doc("monitor k8s components status").
Metadata(restfulspec.KeyOpenAPITags, tags)).
Consumes(restful.MIME_JSON, restful.MIME_XML).
Produces(restful.MIME_JSON)
}

View File

@@ -21,14 +21,23 @@ import (
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
)
func Register(ws *restful.WebService, subPath string) {
ws.Route(ws.GET(subPath).To(getClusterQuota).Produces(restful.MIME_JSON))
ws.Route(ws.GET(subPath + "/namespaces/{namespace}").To(getNamespaceQuota).Produces(restful.MIME_JSON))
tags := []string{"quota"}
ws.Route(ws.GET(subPath).To(getClusterQuota).Produces(restful.MIME_JSON).Doc("get whole "+
"cluster's resource usage").Writes(models.ResourceQuota{}).Metadata(restfulspec.KeyOpenAPITags, tags))
ws.Route(ws.GET(subPath+"/namespaces/{namespace}").Doc("get specified namespace's resource "+
"quota and usage").Param(ws.PathParameter("namespace",
"namespace's name").DataType("string")).Writes(models.ResourceQuota{}).
Metadata(restfulspec.KeyOpenAPITags, tags).To(getNamespaceQuota).Produces(restful.MIME_JSON))
}

View File

@@ -55,6 +55,23 @@ func Register(ws *restful.WebService, subPath string) {
Consumes(restful.MIME_JSON).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath + "/{name}/namespaces/{namespace}/searchwords/{searchWord}").
Param(ws.PathParameter("namespace", "registry secret's namespace")).
Param(ws.PathParameter("name", "registry secret's name")).
Param(ws.PathParameter("searchWord", "keyword used to search images")).
To(handlerImageSearch).
Filter(route.RouteLogging)).
Consumes(restful.MIME_JSON).
Produces(restful.MIME_JSON)
ws.Route(ws.GET(subPath + "/{name}/namespaces/{namespace}/tags").
Param(ws.QueryParameter("image", "imageName")).
Param(ws.PathParameter("namespace", "registry secret's namespace")).
Param(ws.PathParameter("name", "registry secret's name")).
To(handlerGetImageTags).
Filter(route.RouteLogging)).
Consumes(restful.MIME_JSON).
Produces(restful.MIME_JSON)
}
func handlerRegistryValidation(request *restful.Request, response *restful.Response) {
@@ -77,6 +94,30 @@ func handlerRegistryValidation(request *restful.Request, response *restful.Respo
}
func handlerImageSearch(request *restful.Request, response *restful.Response) {
registry := request.PathParameter("name")
searchWord := request.PathParameter("searchWord")
namespace := request.PathParameter("namespace")
res := models.ImageSearch(namespace, registry, searchWord)
response.WriteEntity(res)
}
func handlerGetImageTags(request *restful.Request, response *restful.Response) {
registry := request.PathParameter("name")
image := request.QueryParameter("image")
namespace := request.PathParameter("namespace")
res := models.GetImageTags(namespace, registry, image)
response.WriteEntity(res)
}
func handleCreateRegistries(request *restful.Request, response *restful.Response) {
registries := models.Registries{}

View File

@@ -21,23 +21,75 @@ import (
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"fmt"
"strings"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
)
func Register(ws *restful.WebService, subPath string) {
ws.Route(ws.GET(subPath + "/{resource}").To(listResource).Produces(restful.MIME_JSON))
tags := []string{"resources"}
ws.Route(ws.GET(subPath+"/{resource}").To(listResource).
Produces(restful.MIME_JSON).
Metadata(restfulspec.KeyOpenAPITags, tags).
Doc("Get resource list").
Param(ws.PathParameter("resource", "resource name").DataType("string")).
Param(ws.QueryParameter("conditions", "search conditions").DataType("string")).
Param(ws.QueryParameter("reverse", "whether to reverse the sort order").DataType("bool").DefaultValue("false")).
Param(ws.QueryParameter("order", "the field to sort by").DataType("string")).
Param(ws.QueryParameter("paging", "paging query, e.g. limit=10,page=1").DataType("string")).
Writes(models.ResourceList{}))
}
func isInvalid(str string) bool {
invalidList := []string{"exec", "insert", "select", "delete", "update", "count", "*", "%", "truncate", "drop"}
str = strings.Replace(str, "=", " ", -1)
str = strings.Replace(str, ",", " ", -1)
str = strings.Replace(str, "~", " ", -1)
items := strings.Split(str, " ")
for _, invalid := range invalidList {
for _, item := range items {
if strings.ToLower(item) == invalid {
return true
}
}
}
return false
}
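For reference, the blacklist check above can be exercised standalone. `containsSQLKeyword` below is a hypothetical copy of `isInvalid`, kept here only to illustrate the token-splitting behavior; it is not part of the codebase:

```go
package main

import (
	"fmt"
	"strings"
)

// containsSQLKeyword mirrors isInvalid above: separators ("=", ",", "~") are
// replaced with spaces, then each resulting token is compared
// case-insensitively against a blacklist of SQL keywords and wildcards.
func containsSQLKeyword(str string) bool {
	invalidList := []string{"exec", "insert", "select", "delete", "update", "count", "*", "%", "truncate", "drop"}
	for _, sep := range []string{"=", ",", "~"} {
		str = strings.Replace(str, sep, " ", -1)
	}
	for _, item := range strings.Split(str, " ") {
		for _, invalid := range invalidList {
			if strings.ToLower(item) == invalid {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(containsSQLKeyword("status=running,name~web")) // false: ordinary search conditions
	fmt.Println(containsSQLKeyword("name=DROP"))               // true: blacklisted keyword
}
```

Note the filter only matches whole tokens, so a keyword glued to other characters (e.g. `web;drop`) slips through; that is a property of the original check, not of this sketch.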
func listResource(req *restful.Request, resp *restful.Response) {
resource := req.PathParameter("resource")
if resource == "applications" {
handleApplication(req, resp)
return
}
conditions := req.QueryParameter("conditions")
paging := req.QueryParameter("paging")
orderField := req.QueryParameter("order")
reverse := req.QueryParameter("reverse")
res, err := models.ListResource(resource, conditions, paging)
if len(orderField) > 0 {
if reverse == "true" {
orderField = fmt.Sprintf("%s %s", orderField, "desc")
} else {
orderField = fmt.Sprintf("%s %s", orderField, "asc")
}
}
if isInvalid(conditions) || isInvalid(paging) || isInvalid(orderField) {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: "invalid input"})
return
}
res, err := models.ListResource(resource, conditions, paging, orderField)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
@@ -45,3 +97,29 @@ func listResource(req *restful.Request, resp *restful.Response) {
resp.WriteEntity(res)
}
func handleApplication(req *restful.Request, resp *restful.Response) {
paging := req.QueryParameter("paging")
clusterId := req.QueryParameter("cluster_id")
runtimeId := req.QueryParameter("runtime_id")
conditions := req.QueryParameter("conditions")
if len(clusterId) > 0 {
app, err := models.GetApplication(clusterId)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(app)
return
}
res, err := models.ListApplication(runtimeId, conditions, paging)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(res)
}

View File

@@ -0,0 +1,56 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package statefulsets
import (
"net/http"
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
)
func Register(ws *restful.WebService, subPath string) {
tags := []string{"statefulsets"}
ws.Route(ws.GET(subPath).To(getStatefulSetRevision).Consumes("*/*").Metadata(restfulspec.KeyOpenAPITags, tags).Doc("Get a statefulset's"+
" revision").Param(ws.PathParameter("statefulset", "statefulset's name").DataType("string")).Param(ws.PathParameter("namespace",
"statefulset's namespace").DataType("string")).Param(ws.PathParameter("revision", "statefulset's revision").DataType("string")).Writes(v1.StatefulSet{}))
}
func getStatefulSetRevision(req *restful.Request, resp *restful.Response) {
statefulset := req.PathParameter("statefulset")
namespace := req.PathParameter("namespace")
revision := req.PathParameter("revision")
res, err := models.GetStatefulSetRevision(namespace, statefulset, revision)
if err != nil {
if errors.IsNotFound(err) {
resp.WriteHeaderAndEntity(http.StatusNotFound, constants.MessageResponse{Message: err.Error()})
} else {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
}
return
}
resp.WriteEntity(res)
}

View File

@@ -22,17 +22,26 @@ import (
"github.com/emicklei/go-restful"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"github.com/emicklei/go-restful-openapi"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
"kubesphere.io/kubesphere/pkg/models/iam"
"kubesphere.io/kubesphere/pkg/models/kubectl"
)
func Register(ws *restful.WebService, subPath string) {
ws.Route(ws.POST(subPath).To(createUser).Consumes("*/*").Produces(restful.MIME_JSON))
ws.Route(ws.DELETE(subPath).To(delUser).Produces(restful.MIME_JSON))
ws.Route(ws.GET(subPath + "/kubectl").To(getKubectl).Produces(restful.MIME_JSON))
ws.Route(ws.GET(subPath + "/kubeconfig").To(getKubeconfig).Produces(restful.MIME_JSON))
tags := []string{"users"}
ws.Route(ws.POST(subPath).Doc("create user").Param(ws.PathParameter("user",
"the username to be created").DataType("string")).Metadata(restfulspec.KeyOpenAPITags, tags).
To(createUser).Consumes("*/*").Produces(restful.MIME_JSON))
ws.Route(ws.DELETE(subPath).Doc("delete user").Param(ws.PathParameter("user",
"the username to be deleted").DataType("string")).Metadata(restfulspec.KeyOpenAPITags, tags).To(delUser).Produces(restful.MIME_JSON))
ws.Route(ws.GET(subPath+"/kubectl").Doc("get user's kubectl pod").Param(ws.PathParameter("user",
"username").DataType("string")).Metadata(restfulspec.KeyOpenAPITags, tags).To(getKubectl).Produces(restful.MIME_JSON))
ws.Route(ws.GET(subPath+"/kubeconfig").Doc("get user's kubeconfig").Param(ws.PathParameter("user",
"username").DataType("string")).Metadata(restfulspec.KeyOpenAPITags, tags).To(getKubeconfig).Produces(restful.MIME_JSON))
}
@@ -46,13 +55,6 @@ func createUser(req *restful.Request, resp *restful.Response) {
return
}
err = models.CreateKubectlPod(user)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(constants.MessageResponse{Message: "successfully created"})
}
@@ -60,7 +62,7 @@ func delUser(req *restful.Request, resp *restful.Response) {
user := req.PathParameter("user")
err := models.DelKubectlPod(user)
err := kubectl.DelKubectlDeploy(user)
if err != nil && !apierrors.IsNotFound(err) {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
@@ -74,13 +76,6 @@ func delUser(req *restful.Request, resp *restful.Response) {
return
}
err = iam.DeleteRoleBindings(user)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(constants.MessageResponse{Message: "successfully deleted"})
}
@@ -88,7 +83,7 @@ func getKubectl(req *restful.Request, resp *restful.Response) {
user := req.PathParameter("user")
kubectlPod, err := models.GetKubectlPod(user)
kubectlPod, err := kubectl.GetKubectlPod(user)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})

View File

@@ -21,14 +21,19 @@ import (
"github.com/emicklei/go-restful"
"github.com/emicklei/go-restful-openapi"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
)
func Register(ws *restful.WebService, subPath string) {
ws.Route(ws.GET(subPath).To(getClusterStatus).Produces(restful.MIME_JSON))
ws.Route(ws.GET(subPath + "/namespaces/{namespace}").To(getNamespaceStatus).Produces(restful.MIME_JSON))
tags := []string{"workloadStatus"}
ws.Route(ws.GET(subPath).Doc("get the count of abnormal workloads in the whole cluster").Metadata(restfulspec.KeyOpenAPITags, tags).To(getClusterStatus).Produces(restful.MIME_JSON))
ws.Route(ws.GET(subPath+"/namespaces/{namespace}").Doc("get the count of abnormal workloads in the specified namespace").Param(ws.PathParameter("namespace",
"the name of namespace").DataType("string")).Metadata(restfulspec.KeyOpenAPITags, tags).To(getNamespaceStatus).Produces(restful.MIME_JSON))
}

View File

@@ -0,0 +1,536 @@
package workspaces
import (
"net/http"
"github.com/emicklei/go-restful"
"k8s.io/api/core/v1"
"fmt"
"strings"
"k8s.io/kubernetes/pkg/util/slice"
"strconv"
"regexp"
"sort"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models/iam"
"kubesphere.io/kubesphere/pkg/models/metrics"
"kubesphere.io/kubesphere/pkg/models/workspaces"
)
const UserNameHeader = "X-Token-Username"
func Register(ws *restful.WebService, subPath string) {
ws.Route(ws.GET(subPath).To(UserWorkspaceListHandler))
ws.Route(ws.POST(subPath).To(WorkspaceCreateHandler))
ws.Route(ws.DELETE(subPath + "/{name}").To(DeleteWorkspaceHandler))
ws.Route(ws.GET(subPath + "/{name}").To(WorkspaceDetailHandler))
ws.Route(ws.PUT(subPath + "/{name}").To(WorkspaceEditHandler))
ws.Route(ws.GET(subPath + "/{workspace}/namespaces").To(UserNamespaceListHandler))
ws.Route(ws.GET(subPath + "/{workspace}/members/{username}/namespaces").To(UserNamespaceListHandler))
ws.Route(ws.POST(subPath + "/{name}/namespaces").To(NamespaceCreateHandler))
ws.Route(ws.DELETE(subPath + "/{name}/namespaces/{namespace}").To(NamespaceDeleteHandler))
ws.Route(ws.GET(subPath + "/{name}/namespaces/{namespace}").To(NamespaceCheckHandler))
ws.Route(ws.GET("/namespaces/{namespace}").To(NamespaceCheckHandler))
ws.Route(ws.GET(subPath + "/{name}/devops").To(DevOpsProjectHandler))
ws.Route(ws.GET(subPath + "/{name}/members/{username}/devops").To(DevOpsProjectHandler))
ws.Route(ws.POST(subPath + "/{name}/devops").To(DevOpsProjectCreateHandler))
ws.Route(ws.DELETE(subPath + "/{name}/devops/{id}").To(DevOpsProjectDeleteHandler))
ws.Route(ws.GET(subPath + "/{name}/members").To(MembersHandler))
ws.Route(ws.GET(subPath + "/{name}/members/{member}").To(MemberHandler))
ws.Route(ws.GET(subPath + "/{name}/roles").To(RolesHandler))
ws.Route(ws.GET(subPath + "/{name}/roles/{role}").To(RoleHandler))
ws.Route(ws.POST(subPath + "/{name}/members").To(MembersInviteHandler))
ws.Route(ws.DELETE(subPath + "/{name}/members").To(MembersRemoveHandler))
}
func RoleHandler(req *restful.Request, resp *restful.Response) {
workspaceName := req.PathParameter("name")
roleName := req.PathParameter("role")
if !slice.ContainsString(constants.WorkSpaceRoles, roleName, nil) {
resp.WriteHeaderAndEntity(http.StatusNotFound, constants.MessageResponse{Message: fmt.Sprintf("role %s not found", roleName)})
return
}
role, rules, err := iam.WorkspaceRoleRules(workspaceName, roleName)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
users, err := iam.WorkspaceRoleUsers(workspaceName, roleName)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(map[string]interface{}{"role": role, "rules": rules, "users": users})
}
func RolesHandler(req *restful.Request, resp *restful.Response) {
name := req.PathParameter("name")
workspace, err := workspaces.Detail(name)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
roles, err := workspaces.Roles(workspace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(roles)
}
func MembersHandler(req *restful.Request, resp *restful.Response) {
workspace := req.PathParameter("name")
keyword := req.QueryParameter("keyword")
users, err := workspaces.GetWorkspaceMembers(workspace, keyword)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(users)
}
func MemberHandler(req *restful.Request, resp *restful.Response) {
workspace := req.PathParameter("name")
username := req.PathParameter("member")
user, err := iam.GetUser(username)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
namespaces, err := workspaces.Namespaces(workspace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
user.WorkspaceRole = user.WorkspaceRoles[workspace]
roles := make(map[string]string)
for _, namespace := range namespaces {
if role := user.Roles[namespace.Name]; role != "" {
roles[namespace.Name] = role
}
}
user.Roles = roles
user.Rules = nil
user.WorkspaceRules = nil
user.WorkspaceRoles = nil
user.ClusterRules = nil
resp.WriteEntity(user)
}
func MembersInviteHandler(req *restful.Request, resp *restful.Response) {
var users []workspaces.UserInvite
workspace := req.PathParameter("name")
err := req.ReadEntity(&users)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
err = workspaces.Invite(workspace, users)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteHeaderAndEntity(http.StatusOK, constants.MessageResponse{Message: "success"})
}
func MembersRemoveHandler(req *restful.Request, resp *restful.Response) {
query := req.QueryParameter("name")
workspace := req.PathParameter("name")
names := strings.Split(query, ",")
err := workspaces.RemoveMembers(workspace, names)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteHeaderAndEntity(http.StatusOK, constants.MessageResponse{Message: "success"})
}
func NamespaceCheckHandler(req *restful.Request, resp *restful.Response) {
namespace := req.PathParameter("namespace")
exist, err := workspaces.NamespaceExistCheck(namespace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(map[string]bool{"exist": exist})
}
func NamespaceDeleteHandler(req *restful.Request, resp *restful.Response) {
namespace := req.PathParameter("namespace")
workspace := req.PathParameter("name")
err := workspaces.DeleteNamespace(workspace, namespace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteHeaderAndEntity(http.StatusOK, constants.MessageResponse{Message: "success"})
}
func DevOpsProjectDeleteHandler(req *restful.Request, resp *restful.Response) {
devops := req.PathParameter("id")
workspace := req.PathParameter("name")
force := req.QueryParameter("force")
username := req.HeaderParameter(UserNameHeader)
err := workspaces.UnBindDevopsProject(workspace, devops)
if err != nil && force != "true" {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
err = workspaces.DeleteDevopsProject(username, devops)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(constants.MessageResponse{Message: "success"})
}
func DevOpsProjectCreateHandler(req *restful.Request, resp *restful.Response) {
workspace := req.PathParameter("name")
username := req.HeaderParameter(UserNameHeader)
var devops workspaces.DevopsProject
err := req.ReadEntity(&devops)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: err.Error()})
return
}
project, err := workspaces.CreateDevopsProject(username, workspace, devops)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(project)
}
func NamespaceCreateHandler(req *restful.Request, resp *restful.Response) {
workspace := req.PathParameter("name")
username := req.HeaderParameter(UserNameHeader)
namespace := &v1.Namespace{}
err := req.ReadEntity(namespace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: err.Error()})
return
}
if namespace.Annotations == nil {
namespace.Annotations = make(map[string]string, 0)
}
namespace.Annotations["creator"] = username
namespace.Annotations["workspace"] = workspace
if namespace.Labels == nil {
namespace.Labels = make(map[string]string, 0)
}
namespace.Labels["kubesphere.io/workspace"] = workspace
namespace, err = workspaces.CreateNamespace(namespace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(namespace)
}
func DevOpsProjectHandler(req *restful.Request, resp *restful.Response) {
workspace := req.PathParameter("name")
username := req.PathParameter("username")
keyword := req.QueryParameter("keyword")
if username == "" {
username = req.HeaderParameter(UserNameHeader)
}
limit := 65535
offset := 0
orderBy := "createTime"
reverse := true
if groups := regexp.MustCompile(`^limit=(\d+),page=(\d+)$`).FindStringSubmatch(req.QueryParameter("paging")); len(groups) == 3 {
limit, _ = strconv.Atoi(groups[1])
page, _ := strconv.Atoi(groups[2])
if page < 1 {
page = 1
}
offset = (page - 1) * limit
}
if groups := regexp.MustCompile(`^(createTime|name)$`).FindStringSubmatch(req.QueryParameter("order")); len(groups) == 2 {
orderBy = groups[1]
reverse = false
}
if q := req.QueryParameter("reverse"); q != "" {
b, err := strconv.ParseBool(q)
if err == nil {
reverse = b
}
}
total, devOpsProjects, err := workspaces.ListDevopsProjectsByUser(username, workspace, keyword, orderBy, reverse, limit, offset)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
result := constants.PageableResponse{}
result.TotalCount = total
result.Items = make([]interface{}, 0)
for _, n := range devOpsProjects {
result.Items = append(result.Items, n)
}
resp.WriteEntity(result)
}
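The paging and ordering parsing shared by these handlers can be sketched in isolation. `parsePaging` is a hypothetical helper (not part of the codebase); the guard against page numbers below 1 mirrors the namespace handler and prevents a negative offset:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// parsePaging turns a "paging" query parameter of the form
// "limit=<n>,page=<n>" into a limit/offset pair, falling back to
// defaults when the parameter is absent or malformed.
func parsePaging(paging string) (limit, offset int) {
	limit, offset = 65535, 0
	if groups := regexp.MustCompile(`^limit=(\d+),page=(\d+)$`).FindStringSubmatch(paging); len(groups) == 3 {
		limit, _ = strconv.Atoi(groups[1])
		page, _ := strconv.Atoi(groups[2])
		if page < 1 {
			page = 1
		}
		offset = (page - 1) * limit
	}
	return limit, offset
}

func main() {
	fmt.Println(parsePaging("limit=10,page=3")) // 10 20
	fmt.Println(parsePaging("bad-input"))       // 65535 0
}
```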
func WorkspaceCreateHandler(req *restful.Request, resp *restful.Response) {
var workspace workspaces.Workspace
username := req.HeaderParameter(UserNameHeader)
err := req.ReadEntity(&workspace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: err.Error()})
return
}
if workspace.Name == "" || strings.Contains(workspace.Name, ":") {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: "invalid workspace name"})
return
}
workspace.Path = workspace.Name
workspace.Members = nil
if workspace.Admin != "" {
workspace.Creator = workspace.Admin
} else {
workspace.Creator = username
}
created, err := workspaces.Create(&workspace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(created)
}
func DeleteWorkspaceHandler(req *restful.Request, resp *restful.Response) {
name := req.PathParameter("name")
if name == "" || strings.Contains(name, ":") {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: "invalid workspace name"})
return
}
workspace, err := workspaces.Detail(name)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
err = workspaces.Delete(workspace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(constants.MessageResponse{Message: "success"})
}
func WorkspaceEditHandler(req *restful.Request, resp *restful.Response) {
var workspace workspaces.Workspace
name := req.PathParameter("name")
err := req.ReadEntity(&workspace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: err.Error()})
return
}
if name != workspace.Name {
resp.WriteError(http.StatusBadRequest, fmt.Errorf("the name of workspace (%s) does not match the name on the URL (%s)", workspace.Name, name))
return
}
if workspace.Name == "" || strings.Contains(workspace.Name, ":") {
resp.WriteHeaderAndEntity(http.StatusBadRequest, constants.MessageResponse{Message: "invalid workspace name"})
return
}
workspace.Path = workspace.Name
workspace.Members = nil
edited, err := workspaces.Edit(&workspace)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(edited)
}
func WorkspaceDetailHandler(req *restful.Request, resp *restful.Response) {
name := req.PathParameter("name")
workspace, err := workspaces.Detail(name)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
resp.WriteEntity(workspace)
}
// List all workspaces for the current user
func UserWorkspaceListHandler(req *restful.Request, resp *restful.Response) {
keyword := req.QueryParameter("keyword")
username := req.HeaderParameter(UserNameHeader)
ws, err := workspaces.ListWorkspaceByUser(username, keyword)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
sort.Slice(ws, func(i, j int) bool {
t1, err := ws[i].GetCreateTime()
if err != nil {
return false
}
t2, err := ws[j].GetCreateTime()
if err != nil {
return true
}
return t1.After(t2)
})
resp.WriteEntity(ws)
}
func UserNamespaceListHandler(req *restful.Request, resp *restful.Response) {
withMetrics, err := strconv.ParseBool(req.QueryParameter("metrics"))
if err != nil {
withMetrics = false
}
username := req.PathParameter("username")
keyword := req.QueryParameter("keyword")
if username == "" {
username = req.HeaderParameter(UserNameHeader)
}
limit := 65535
offset := 0
orderBy := "createTime"
reverse := true
if groups := regexp.MustCompile(`^limit=(\d+),page=(\d+)$`).FindStringSubmatch(req.QueryParameter("paging")); len(groups) == 3 {
limit, _ = strconv.Atoi(groups[1])
page, _ := strconv.Atoi(groups[2])
if page < 1 {
page = 1
}
offset = (page - 1) * limit
}
if groups := regexp.MustCompile(`^(createTime|name)$`).FindStringSubmatch(req.QueryParameter("order")); len(groups) == 2 {
orderBy = groups[1]
reverse = false
}
if q := req.QueryParameter("reverse"); q != "" {
b, err := strconv.ParseBool(q)
if err == nil {
reverse = b
}
}
workspaceName := req.PathParameter("workspace")
total, namespaces, err := workspaces.ListNamespaceByUser(workspaceName, username, keyword, orderBy, reverse, limit, offset)
if err != nil {
resp.WriteHeaderAndEntity(http.StatusInternalServerError, constants.MessageResponse{Message: err.Error()})
return
}
if withMetrics {
namespaces = metrics.GetNamespacesWithMetrics(namespaces)
}
result := constants.PageableResponse{}
result.TotalCount = total
result.Items = make([]interface{}, 0)
for _, n := range namespaces {
result.Items = append(result.Items, n)
}
resp.WriteEntity(result)
}

View File

@@ -22,16 +22,30 @@ import (
"github.com/emicklei/go-restful"
"github.com/golang/glog"
"k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"net"
"net/http"
"github.com/emicklei/go-restful-openapi"
"github.com/go-openapi/spec"
"k8s.io/apimachinery/pkg/api/errors"
"os"
"os/signal"
"sync"
"syscall"
"k8s.io/api/core/v1"
_ "kubesphere.io/kubesphere/pkg/apis/v1alpha"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models"
"kubesphere.io/kubesphere/pkg/models/controllers"
"kubesphere.io/kubesphere/pkg/models/kubectl"
"kubesphere.io/kubesphere/pkg/models/workspaces"
"kubesphere.io/kubesphere/pkg/options"
)
@@ -61,19 +75,66 @@ func newKubeSphereServer(options *options.ServerRunOptions) *kubeSphereServer {
func preCheck() error {
k8sClient := client.NewK8sClient()
nsList, err := k8sClient.CoreV1().Namespaces().List(meta_v1.ListOptions{})
_, err := k8sClient.CoreV1().Namespaces().Get(constants.KubeSphereControlNamespace, metaV1.GetOptions{})
if err != nil {
return err
}
for _, ns := range nsList.Items {
if ns.Name == constants.KubeSphereControlNamespace {
return nil
if errors.IsNotFound(err) {
_, err = k8sClient.CoreV1().Namespaces().Create(&v1.Namespace{ObjectMeta: metaV1.ObjectMeta{Name: constants.KubeSphereControlNamespace}})
if err != nil {
return err
}
} else {
return err
}
}
namespace := v1.Namespace{ObjectMeta: meta_v1.ObjectMeta{Name: constants.KubeSphereControlNamespace}}
_, err = k8sClient.CoreV1().Namespaces().Create(&namespace)
return err
_, err = k8sClient.AppsV1().Deployments(constants.KubeSphereControlNamespace).Get(constants.AdminUserName, metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
if err = models.CreateKubeConfig(constants.AdminUserName); err != nil {
return err
}
if err = kubectl.CreateKubectlDeploy(constants.AdminUserName); err != nil {
return err
}
} else {
return err
}
}
db := client.NewSharedDBClient()
defer db.Close()
if !db.HasTable(&workspaces.WorkspaceDPBinding{}) {
if err := db.CreateTable(&workspaces.WorkspaceDPBinding{}).Error; err != nil {
return err
}
}
return nil
}
func registerSwagger() {
config := restfulspec.Config{
WebServices: restful.RegisteredWebServices(), // you control what services are visible
APIPath: "/swagger-ui/api.json",
PostBuildSwaggerObjectHandler: enrichSwaggerObject}
restful.DefaultContainer.Add(restfulspec.NewOpenAPIService(config))
http.Handle("/swagger-ui/", http.StripPrefix("/swagger-ui/", http.FileServer(http.Dir("/usr/lib/kubesphere/swagger-ui"))))
}
func enrichSwaggerObject(swo *spec.Swagger) {
swo.Info = &spec.Info{
InfoProps: spec.InfoProps{
Title: "KubeSphere",
Description: "The extended APIs of KubeSphere",
Version: "v1.0-alpha",
},
}
swo.Tags = []spec.Tag{{TagProps: spec.TagProps{
Name: "extend apis"}}}
}
func (server *kubeSphereServer) run() {
err := preCheck()
if err != nil {
@@ -81,7 +142,12 @@ func (server *kubeSphereServer) run() {
return
}
go controllers.Run()
var wg sync.WaitGroup
stopChan := make(chan struct{})
wg.Add(1)
go controllers.Run(stopChan, &wg)
registerSwagger()
if len(server.certFile) > 0 && len(server.keyFile) > 0 {
servingCert, err := tls.LoadX509KeyPair(server.certFile, server.keyFile)
@@ -108,7 +174,12 @@ func (server *kubeSphereServer) run() {
go func() { glog.Fatal(http.ListenAndServe(insecureAddr, nil)) }()
}
select {}
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
<-sigs
close(stopChan)
wg.Wait()
}
func Run() {

View File

@@ -15,13 +15,12 @@ package client
import (
"fmt"
"log"
_ "github.com/go-sql-driver/mysql"
"github.com/golang/glog"
"github.com/jinzhu/gorm"
"log"
"kubesphere.io/kubesphere/pkg/logs"
"kubesphere.io/kubesphere/pkg/options"
)
@@ -31,6 +30,24 @@ var dbClient *gorm.DB
const database = "kubesphere"
func NewDBClient() *gorm.DB {
user := options.ServerOptions.GetMysqlUser()
passwd := options.ServerOptions.GetMysqlPassword()
addr := options.ServerOptions.GetMysqlAddr()
conn := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local", user, passwd, addr, database)
db, err := gorm.Open("mysql", conn)
if err != nil {
glog.Error(err)
panic(err)
}
db.SetLogger(log.New(logs.GlogWriter{}, " ", 0))
return db
}
func NewSharedDBClient() *gorm.DB {
if dbClient != nil {
err := dbClient.DB().Ping()
@@ -42,23 +59,5 @@ func NewDBClient() *gorm.DB {
}
}
user := options.ServerOptions.GetMysqlUser()
passwd := options.ServerOptions.GetMysqlPassword()
addr := options.ServerOptions.GetMysqlAddr()
if dbClient == nil {
conn := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local", user, passwd, addr, database)
glog.Info(conn)
db, err := gorm.Open("mysql", conn)
if err != nil {
glog.Error(err)
panic(err)
}
db.SetLogger(log.New(logs.GlogWriter{}, " ", 0))
dbClient = db
return dbClient
}
return dbClient
return NewDBClient()
}
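For reference, the DSN both constructors build has this shape. `mysqlDSN` is a hypothetical helper with placeholder credentials, shown only to make the connection-string format explicit:

```go
package main

import "fmt"

// mysqlDSN reproduces the connection string format used above:
// user:password@tcp(host:port)/database plus charset, time parsing,
// and local time zone options.
func mysqlDSN(user, passwd, addr, database string) string {
	return fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local", user, passwd, addr, database)
}

func main() {
	fmt.Println(mysqlDSN("root", "secret", "mysql.kubesphere-system:3306", "kubesphere"))
}
```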

View File

@@ -0,0 +1,198 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (
"io/ioutil"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"os"
"github.com/emicklei/go-restful"
"github.com/golang/glog"
)
const (
DefaultScheme = "http"
DefaultPrometheusPort = "9090"
PrometheusApiPath = "/api/v1/"
DefaultQueryStep = "10m"
DefaultQueryTimeout = "10s"
RangeQueryType = "query_range?"
DefaultQueryType = "query?"
PrometheusAPIServerEnv = "PROMETHEUS_API_SERVER"
)
var PrometheusAPIServer = "prometheus-k8s.kubesphere-monitoring-system.svc"
var PrometheusEndpointUrl string
func init() {
if env := os.Getenv(PrometheusAPIServerEnv); env != "" {
PrometheusAPIServer = env
}
PrometheusEndpointUrl = DefaultScheme + "://" + PrometheusAPIServer + ":" + DefaultPrometheusPort + PrometheusApiPath
}
type MonitoringRequestParams struct {
Params url.Values
QueryType string
SortMetricName string
SortType string
PageNum string
LimitNum string
Tp string
MetricsFilter string
NodesFilter string
WsFilter string
NsFilter string
PodsFilter string
ContainersFilter string
MetricsName string
WorkloadName string
WlFilter string
NodeId string
WsName string
NsName string
PodName string
ContainerName string
WorkloadKind string
}
var client = &http.Client{}
func SendMonitoringRequest(queryType string, params string) string {
epurl := PrometheusEndpointUrl + queryType + params
response, err := client.Get(epurl)
if err != nil {
glog.Error(err)
} else {
defer response.Body.Close()
contents, err := ioutil.ReadAll(response.Body)
if err != nil {
glog.Error(err)
}
return string(contents)
}
return ""
}
func ParseMonitoringRequestParams(request *restful.Request) *MonitoringRequestParams {
instantTime := strings.Trim(request.QueryParameter("time"), " ")
start := strings.Trim(request.QueryParameter("start"), " ")
end := strings.Trim(request.QueryParameter("end"), " ")
step := strings.Trim(request.QueryParameter("step"), " ")
timeout := strings.Trim(request.QueryParameter("timeout"), " ")
sortMetricName := strings.Trim(request.QueryParameter("sort_metric"), " ")
sortType := strings.Trim(request.QueryParameter("sort_type"), " ")
pageNum := strings.Trim(request.QueryParameter("page"), " ")
limitNum := strings.Trim(request.QueryParameter("limit"), " ")
tp := strings.Trim(request.QueryParameter("type"), " ")
metricsFilter := strings.Trim(request.QueryParameter("metrics_filter"), " ")
nodesFilter := strings.Trim(request.QueryParameter("nodes_filter"), " ")
wsFilter := strings.Trim(request.QueryParameter("workspaces_filter"), " ")
nsFilter := strings.Trim(request.QueryParameter("namespaces_filter"), " ")
wlFilter := strings.Trim(request.QueryParameter("workloads_filter"), " ")
podsFilter := strings.Trim(request.QueryParameter("pods_filter"), " ")
containersFilter := strings.Trim(request.QueryParameter("containers_filter"), " ")
metricsName := strings.Trim(request.QueryParameter("metrics_name"), " ")
workloadName := strings.Trim(request.QueryParameter("workload_name"), " ")
nodeId := strings.Trim(request.PathParameter("node_id"), " ")
wsName := strings.Trim(request.PathParameter("workspace_name"), " ")
nsName := strings.Trim(request.PathParameter("ns_name"), " ")
podName := strings.Trim(request.PathParameter("pod_name"), " ")
containerName := strings.Trim(request.PathParameter("container_name"), " ")
workloadKind := strings.Trim(request.PathParameter("workload_kind"), " ")
var requestParams = MonitoringRequestParams{
SortMetricName: sortMetricName,
SortType: sortType,
PageNum: pageNum,
LimitNum: limitNum,
Tp: tp,
MetricsFilter: metricsFilter,
NodesFilter: nodesFilter,
WsFilter: wsFilter,
NsFilter: nsFilter,
PodsFilter: podsFilter,
ContainersFilter: containersFilter,
MetricsName: metricsName,
WorkloadName: workloadName,
WlFilter: wlFilter,
NodeId: nodeId,
WsName: wsName,
NsName: nsName,
PodName: podName,
ContainerName: containerName,
WorkloadKind: workloadKind,
}
if timeout == "" {
timeout = DefaultQueryTimeout
}
if step == "" {
step = DefaultQueryStep
}
// Whether query or query_range request
u := url.Values{}
if start != "" && end != "" {
u.Set("start", convertTimeGranularity(start))
u.Set("end", convertTimeGranularity(end))
u.Set("step", step)
u.Set("timeout", timeout)
requestParams.QueryType = RangeQueryType
requestParams.Params = u
return &requestParams
}
if instantTime != "" {
u.Set("time", instantTime)
u.Set("timeout", timeout)
requestParams.QueryType = DefaultQueryType
requestParams.Params = u
return &requestParams
} else {
//u.Set("time", strconv.FormatInt(int64(time.Now().Unix()), 10))
u.Set("timeout", timeout)
requestParams.QueryType = DefaultQueryType
requestParams.Params = u
return &requestParams
}
}
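The branching above decides between Prometheus's instant and range query endpoints: a range query requires both `start` and `end`, while anything else falls back to an instant query. The selection can be sketched as (`chooseQueryType` is an illustrative helper):

```go
package main

import (
	"fmt"
	"net/url"
)

// chooseQueryType mirrors the branch logic in ParseMonitoringRequestParams:
// start+end selects query_range, otherwise an instant query is issued,
// optionally pinned to an explicit evaluation time.
func chooseQueryType(start, end, instant, timeout, step string) (string, url.Values) {
	u := url.Values{}
	if start != "" && end != "" {
		u.Set("start", start)
		u.Set("end", end)
		u.Set("step", step)
		u.Set("timeout", timeout)
		return "query_range?", u
	}
	if instant != "" {
		u.Set("time", instant)
	}
	u.Set("timeout", timeout)
	return "query?", u
}

func main() {
	qt, _ := chooseQueryType("100", "200", "", "10s", "10m")
	fmt.Println(qt)
}
```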
func convertTimeGranularity(ts string) string {
timeFloat, err := strconv.ParseFloat(ts, 64)
if err != nil {
glog.Errorf("convert second timestamp %s to minute timestamp failed", ts)
return strconv.FormatInt(int64(time.Now().Unix()), 10)
}
timeInt := int64(timeFloat)
// convert second timestamp to minute timestamp
secondTime := time.Unix(timeInt, 0).Truncate(time.Minute).Unix()
return strconv.FormatInt(secondTime, 10)
}
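`convertTimeGranularity` rounds a second-resolution timestamp down to a minute boundary so adjacent queries align on the same step. A self-contained sketch of that behavior:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// truncateToMinute reproduces convertTimeGranularity above: parse a
// (possibly fractional) second timestamp and round it down to the minute.
func truncateToMinute(ts string) (string, error) {
	f, err := strconv.ParseFloat(ts, 64)
	if err != nil {
		return "", err
	}
	sec := time.Unix(int64(f), 0).Truncate(time.Minute).Unix()
	return strconv.FormatInt(sec, 10), nil
}

func main() {
	out, _ := truncateToMinute("90")
	fmt.Println(out)
}
```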

View File

@@ -16,6 +16,8 @@ limitations under the License.
package constants
import "os"
type MessageResponse struct {
Message string `json:"message"`
}
@@ -34,8 +36,42 @@ const (
KubeSphereNamespace = "kubesphere-system"
KubeSphereControlNamespace = "kubesphere-controls-system"
IngressControllerNamespace = KubeSphereControlNamespace
DataHome = "/etc/kubesphere"
IngressControllerFolder = DataHome + "/ingress-controller"
IngressControllerPrefix = "kubesphere-router-"
AdminUserName = "admin"
DataHome = "/etc/kubesphere"
IngressControllerFolder = DataHome + "/ingress-controller"
IngressControllerPrefix = "kubesphere-router-"
DevopsAPIServerEnv = "DEVOPS_API_SERVER"
AccountAPIServerEnv = "ACCOUNT_API_SERVER"
DevopsProxyTokenEnv = "DEVOPS_PROXY_TOKEN"
OpenPitrixProxyTokenEnv = "OPENPITRIX_PROXY_TOKEN"
WorkspaceLabelKey = "kubesphere.io/workspace"
WorkspaceAdmin = "workspace-admin"
ClusterAdmin = "cluster-admin"
WorkspaceRegular = "workspace-regular"
WorkspaceViewer = "workspace-viewer"
DevopsOwner = "owner"
DevopsReporter = "reporter"
)
var (
DevopsAPIServer = "ks-devops-apiserver.kubesphere-system.svc"
AccountAPIServer = "ks-account.kubesphere-system.svc"
DevopsProxyToken = ""
OpenPitrixProxyToken = ""
WorkSpaceRoles = []string{WorkspaceAdmin, WorkspaceRegular, WorkspaceViewer}
)
func init() {
if env := os.Getenv(DevopsAPIServerEnv); env != "" {
DevopsAPIServer = env
}
if env := os.Getenv(AccountAPIServerEnv); env != "" {
AccountAPIServer = env
}
if env := os.Getenv(DevopsProxyTokenEnv); env != "" {
DevopsProxyToken = env
}
if env := os.Getenv(OpenPitrixProxyTokenEnv); env != "" {
OpenPitrixProxyToken = env
}
}

View File

@@ -18,6 +18,7 @@ package container
import (
"strings"
"time"
"github.com/emicklei/go-restful"
"github.com/golang/glog"
@@ -25,14 +26,16 @@ import (
func logFilter() restful.FilterFunction {
return func(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
start := time.Now()
chain.ProcessFilter(req, resp)
glog.Infof("%s - \"%s %s %s\" %d %d",
glog.Infof("%s - \"%s %s %s\" %d %d in %dms",
strings.Split(req.Request.RemoteAddr, ":")[0],
req.Request.Method,
req.Request.URL.RequestURI(),
req.Request.Proto,
resp.StatusCode(),
resp.ContentLength(),
time.Since(start)/time.Millisecond,
)
}
}

View File

@@ -19,150 +19,226 @@ package models
import (
"time"
"github.com/golang/glog"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
v13 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/constants"
v12 "k8s.io/client-go/listers/core/v1"
"kubesphere.io/kubesphere/pkg/models/controllers"
"k8s.io/apimachinery/pkg/labels"
"github.com/golang/glog"
)
type ComponentsCount struct {
KubernetesCount int `json:"kubernetesCount"`
OpenpitrixCount int `json:"openpitrixCount"`
KubesphereCount int `json:"kubesphereCount"`
IstioCount int `json:"istioCount"`
// Namespaces need to watch
var SYSTEM_NAMESPACES = [...]string{"kubesphere-system", "openpitrix-system", "kube-system"}
type Component struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
SelfLink string `json:"selfLink"`
Label interface{} `json:"label"`
StartedAt time.Time `json:"startedAt"`
TotalBackends int `json:"totalBackends"`
HealthyBackends int `json:"healthyBackends"`
}
type Components struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
SelfLink string `json:"selfLink"`
Label interface{} `json:"label"`
HealthStatus string `json:"healthStatus"`
CreateTime time.Time `json:"createTime"`
}
/***
* get all components from k8s and kubesphere system
*
*/
func GetComponents() (map[string]interface{}, error) {
result := make(map[string]interface{})
componentsList := make([]Components, 0)
k8sClient := client.NewK8sClient()
var count ComponentsCount
var components Components
label := "kubernetes.io/cluster-service=true"
option := meta_v1.ListOptions{
LabelSelector: label,
func GetComponentStatus(namespace string, componentName string) (interface{}, error) {
lister, err := controllers.GetLister(controllers.Services)
if err != nil {
glog.Errorln(err)
return nil, err
}
namespaces := []string{constants.KubeSystemNamespace, constants.OpenPitrixNamespace, constants.IstioNamespace, constants.KubeSphereNamespace}
for _, ns := range namespaces {
serviceLister := lister.(v12.ServiceLister)
service, err := serviceLister.Services(namespace).Get(componentName)
if ns != constants.KubeSystemNamespace {
option.LabelSelector = ""
if err != nil {
glog.Error(err)
return nil, err
}
lister, err = controllers.GetLister(controllers.Pods)
if err != nil {
glog.Errorln(err)
return nil, err
}
podLister := lister.(v12.PodLister)
set := labels.Set(service.Spec.Selector)
pods, err := podLister.Pods(namespace).List(set.AsSelector())
if err != nil {
glog.Errorln(err)
return nil, err
} else {
component := Component{
Name: service.Name,
Namespace: service.Namespace,
SelfLink: service.SelfLink,
Label: service.Spec.Selector,
StartedAt: service.CreationTimestamp.Time,
HealthyBackends: 0,
TotalBackends: 0,
}
servicelists, err := k8sClient.CoreV1().Services(ns).List(option)
for _, v := range pods {
component.TotalBackends++
component.HealthyBackends++
for _, c := range v.Status.ContainerStatuses {
if !c.Ready {
component.HealthyBackends--
break
}
}
}
return component, nil
}
}
func GetSystemHealthStatus() (map[string]interface{}, error) {
status := make(map[string]interface{})
k8sClient := client.NewK8sClient()
csList, err := k8sClient.Core().ComponentStatuses().List(v1.ListOptions{})
if err != nil {
glog.Errorln(err)
return nil, err
}
for _, cs := range csList.Items {
if len(cs.Conditions) > 0 {
status[cs.Name] = cs.Conditions[0]
}
}
// get kubesphere-system components
systemComponentStatus, err := GetAllComponentsStatus()
if err != nil {
glog.Errorln(err)
}
for k, v := range systemComponentStatus {
status[k] = v
}
// get node status
lister, err := controllers.GetLister(controllers.Nodes)
if err != nil {
glog.Errorln(err)
return status, nil
}
nodeLister := lister.(v12.NodeLister)
nodes, err := nodeLister.List(labels.Everything())
if err != nil {
glog.Errorln(err)
return status, nil
}
nodeStatus := make(map[string]int)
totalNodes := 0
healthyNodes := 0
for _, node := range nodes {
totalNodes++
for _, condition := range node.Status.Conditions {
if condition.Type == v13.NodeReady && condition.Status == v13.ConditionTrue {
healthyNodes++
}
}
}
nodeStatus["totalNodes"] = totalNodes
nodeStatus["healthyNodes"] = healthyNodes
status["nodes"] = nodeStatus
return status, nil
}
func GetAllComponentsStatus() (map[string]interface{}, error) {
status := make(map[string]interface{})
var err error
lister, err := controllers.GetLister(controllers.Services)
if err != nil {
glog.Errorln(err)
return nil, err
}
serviceLister := lister.(v12.ServiceLister)
lister, err = controllers.GetLister(controllers.Pods)
if err != nil {
glog.Errorln(err)
return nil, err
}
podLister := lister.(v12.PodLister)
for _, ns := range SYSTEM_NAMESPACES {
nsStatus := make(map[string]interface{})
services, err := serviceLister.Services(ns).List(labels.Everything())
if err != nil {
glog.Error(err)
return result, err
continue
}
if len(servicelists.Items) > 0 {
for _, service := range services {
for _, service := range servicelists.Items {
switch ns {
case constants.KubeSystemNamespace:
count.KubernetesCount++
case constants.OpenPitrixNamespace:
count.OpenpitrixCount++
case constants.KubeSphereNamespace:
count.KubesphereCount++
default:
count.IstioCount++
}
components.Name = service.Name
components.Namespace = service.Namespace
components.CreateTime = service.CreationTimestamp.Time
components.Label = service.Spec.Selector
components.SelfLink = service.SelfLink
label := service.Spec.Selector
combination := ""
for key, val := range label {
labelstr := key + "=" + val
if combination == "" {
combination = labelstr
} else {
combination = combination + "," + labelstr
}
}
option := meta_v1.ListOptions{
LabelSelector: combination,
}
podsList, err := k8sClient.CoreV1().Pods(ns).List(option)
if err != nil {
glog.Error(err)
return result, err
}
if len(podsList.Items) > 0 {
var health bool
for _, pod := range podsList.Items {
for _, status := range pod.Status.ContainerStatuses {
if status.Ready == false {
health = status.Ready
break
} else {
health = status.Ready
}
}
if health == false {
components.HealthStatus = "unhealth"
break
}
}
if health == true {
components.HealthStatus = "health"
}
} else {
components.HealthStatus = "unhealth"
}
componentsList = append(componentsList, components)
set := labels.Set(service.Spec.Selector)
if len(set) == 0 {
continue
}
component := Component{
Name: service.Name,
Namespace: service.Namespace,
SelfLink: service.SelfLink,
Label: service.Spec.Selector,
StartedAt: service.CreationTimestamp.Time,
HealthyBackends: 0,
TotalBackends: 0,
}
pods, err := podLister.Pods(ns).List(set.AsSelector())
if err != nil {
glog.Errorln(err)
continue
}
for _, pod := range pods {
component.TotalBackends++
component.HealthyBackends++
for _, c := range pod.Status.ContainerStatuses {
if !c.Ready {
component.HealthyBackends--
break
}
}
}
nsStatus[service.Name] = component
}
if len(nsStatus) > 0 {
status[ns] = nsStatus
}
}
result["count"] = count
result["item"] = componentsList
return result, nil
return status, err
}

View File

@@ -0,0 +1,490 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"encoding/json"
"fmt"
"strconv"
"strings"
"time"
"github.com/golang/glog"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"kubesphere.io/kubesphere/pkg/client"
)
const (
unknown = "-"
deploySurffix = "-Deployment"
daemonSurffix = "-DaemonSet"
stateSurffix = "-StatefulSet"
)
type ApplicationCtl struct {
OpenpitrixAddr string
}
type Application struct {
Name string `json:"name"`
RepoName string `json:"repoName"`
Runtime string `json:"namespace"`
RuntimeId string `json:"runtime_id"`
Version string `json:"version"`
VersionId string `json:"version_id"`
Status string `json:"status"`
UpdateTime time.Time `json:"updateTime"`
CreateTime time.Time `json:"createTime"`
App string `json:"app"`
AppId string `json:"app_id"`
Description string `json:"description,omitempty"`
WorkLoads *workLoads `json:"workloads,omitempty"`
Services *[]Service `json:"services,omitempty"`
Ingresses *[]ing `json:"ingresses,omitempty"`
ClusterID string `json:"cluster_id"`
}
type ing struct {
Name string `json:"name"`
Rules []ingressRule `json:"rules"`
}
type clusterRole struct {
ClusterID string `json:"cluster_id"`
Role string `json:"role"`
}
type cluster struct {
ClusterID string `json:"cluster_id"`
Name string `json:"name"`
AppID string `json:"app_id"`
VersionID string `json:"version_id"`
Status string `json:"status"`
UpdateTime time.Time `json:"status_time"`
CreateTime time.Time `json:"create_time"`
RunTimeId string `json:"runtime_id"`
Description string `json:"description"`
ClusterRoleSets []clusterRole `json:"cluster_role_set"`
}
type clusters struct {
Total int `json:"total_count"`
Clusters []cluster `json:"cluster_set"`
}
type versionList struct {
Total int `json:"total_count"`
Versions []version `json:"app_version_set"`
}
type version struct {
Name string `json:"name"`
VersionID string `json:"version_id"`
}
type runtime struct {
RuntimeID string `json:"runtime_id"`
Zone string `json:"zone"`
}
type runtimeList struct {
Total int `json:"total_count"`
Runtimes []runtime `json:"runtime_set"`
}
type app struct {
AppId string `json:"app_id"`
Name string `json:"name"`
ChartName string `json:"chart_name"`
RepoId string `json:"repo_id"`
}
type repo struct {
RepoId string `json:"repo_id"`
Name string `json:"name"`
Url string `json:"url"`
}
type workLoads struct {
Deployments []Deployment `json:"deployments,omitempty"`
Statefulsets []Statefulset `json:"statefulsets,omitempty"`
Daemonsets []Daemonset `json:"daemonsets,omitempty"`
}
//type description struct {
// Creator string `json:"creator"`
//}
type appList struct {
Total int `json:"total_count"`
Apps []app `json:"app_set"`
}
type repoList struct {
Total int `json:"total_count"`
Repos []repo `json:"repo_set"`
}
func (ctl *ApplicationCtl) GetAppInfo(appId string) (string, string, string, error) {
url := fmt.Sprintf("%s/v1/apps?app_id=%s", ctl.OpenpitrixAddr, appId)
resp, err := makeHttpRequest("GET", url, "")
if err != nil {
glog.Error(err)
return unknown, unknown, unknown, err
}
var apps appList
err = json.Unmarshal(resp, &apps)
if err != nil {
glog.Error(err)
return unknown, unknown, unknown, err
}
if len(apps.Apps) == 0 {
return unknown, unknown, unknown, err
}
return apps.Apps[0].ChartName, apps.Apps[0].RepoId, apps.Apps[0].AppId, nil
}
func (ctl *ApplicationCtl) GetRepo(repoId string) (string, error) {
url := fmt.Sprintf("%s/v1/repos?repo_id=%s", ctl.OpenpitrixAddr, repoId)
resp, err := makeHttpRequest("GET", url, "")
if err != nil {
glog.Error(err)
return unknown, err
}
var repos repoList
err = json.Unmarshal(resp, &repos)
if err != nil {
glog.Error(err)
return unknown, err
}
if len(repos.Repos) == 0 {
return unknown, err
}
return repos.Repos[0].Name, nil
}
func (ctl *ApplicationCtl) GetVersion(versionId string) (string, error) {
versionUrl := fmt.Sprintf("%s/v1/app_versions?version_id=%s", ctl.OpenpitrixAddr, versionId)
resp, err := makeHttpRequest("GET", versionUrl, "")
if err != nil {
glog.Error(err)
return unknown, err
}
var versions versionList
err = json.Unmarshal(resp, &versions)
if err != nil {
glog.Error(err)
return unknown, err
}
if len(versions.Versions) == 0 {
return unknown, nil
}
return versions.Versions[0].Name, nil
}
func (ctl *ApplicationCtl) GetRuntime(runtimeId string) (string, error) {
versionUrl := fmt.Sprintf("%s/v1/runtimes?runtime_id=%s", ctl.OpenpitrixAddr, runtimeId)
resp, err := makeHttpRequest("GET", versionUrl, "")
if err != nil {
glog.Error(err)
return unknown, err
}
var runtimes runtimeList
err = json.Unmarshal(resp, &runtimes)
if err != nil {
glog.Error(err)
return unknown, err
}
if len(runtimes.Runtimes) == 0 {
return unknown, nil
}
return runtimes.Runtimes[0].Zone, nil
}
func (ctl *ApplicationCtl) GetWorkLoads(namespace string, clusterRoles []clusterRole) *workLoads {
var works workLoads
for _, clusterRole := range clusterRoles {
workLoadName := clusterRole.Role
if len(workLoadName) > 0 {
if strings.HasSuffix(workLoadName, deploySurffix) {
name := strings.Split(workLoadName, deploySurffix)[0]
ctl := ResourceControllers.Controllers[Deployments]
_, items, _ := ctl.ListWithConditions(fmt.Sprintf("namespace='%s' and name = '%s'", namespace, name), nil, "")
works.Deployments = append(works.Deployments, items.([]Deployment)...)
continue
}
if strings.HasSuffix(workLoadName, daemonSurffix) {
name := strings.Split(workLoadName, daemonSurffix)[0]
ctl := ResourceControllers.Controllers[Daemonsets]
_, items, _ := ctl.ListWithConditions(fmt.Sprintf("namespace='%s' and name = '%s'", namespace, name), nil, "")
works.Daemonsets = append(works.Daemonsets, items.([]Daemonset)...)
continue
}
if strings.HasSuffix(workLoadName, stateSurffix) {
name := strings.Split(workLoadName, stateSurffix)[0]
ctl := ResourceControllers.Controllers[Statefulsets]
_, items, _ := ctl.ListWithConditions(fmt.Sprintf("namespace='%s' and name = '%s'", namespace, name), nil, "")
works.Statefulsets = append(works.Statefulsets, items.([]Statefulset)...)
continue
}
}
}
return &works
}
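`GetWorkLoads` recovers the workload kind from an OpenPitrix cluster-role name suffix. A sketch of that parsing (`splitWorkloadRole` is illustrative; it uses `strings.TrimSuffix` rather than `strings.Split`):

```go
package main

import (
	"fmt"
	"strings"
)

// splitWorkloadRole mirrors GetWorkLoads above: OpenPitrix cluster roles
// encode the workload kind as a name suffix like "-Deployment".
func splitWorkloadRole(role string) (name, kind string, ok bool) {
	for _, suffix := range []string{"-Deployment", "-DaemonSet", "-StatefulSet"} {
		if strings.HasSuffix(role, suffix) {
			return strings.TrimSuffix(role, suffix), strings.TrimPrefix(suffix, "-"), true
		}
	}
	return "", "", false
}

func main() {
	name, kind, _ := splitWorkloadRole("nginx-Deployment")
	fmt.Println(name, kind)
}
```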
func (ctl *ApplicationCtl) getLabels(namespace string, workloads *workLoads) *[]map[string]string {
k8sClient := client.NewK8sClient()
var workloadLabels []map[string]string
if workloads == nil {
return nil
}
for _, workload := range workloads.Deployments {
deploy, err := k8sClient.AppsV1().Deployments(namespace).Get(workload.Name, metaV1.GetOptions{})
if err != nil {
if !errors.IsNotFound(err) {
glog.Error(err)
}
continue
}
workloadLabels = append(workloadLabels, deploy.Labels)
}
for _, workload := range workloads.Daemonsets {
daemonset, err := k8sClient.AppsV1().DaemonSets(namespace).Get(workload.Name, metaV1.GetOptions{})
if err != nil {
if !errors.IsNotFound(err) {
glog.Error(err)
}
continue
}
workloadLabels = append(workloadLabels, daemonset.Labels)
}
for _, workload := range workloads.Statefulsets {
statefulset, err := k8sClient.AppsV1().StatefulSets(namespace).Get(workload.Name, metaV1.GetOptions{})
if err != nil {
if !errors.IsNotFound(err) {
glog.Error(err)
}
continue
}
workloadLabels = append(workloadLabels, statefulset.Labels)
}
return &workloadLabels
}
func isExist(svcs []Service, svc v1.Service) bool {
for _, item := range svcs {
if item.Name == svc.Name && item.Namespace == svc.Namespace {
return true
}
}
return false
}
func (ctl *ApplicationCtl) getSvcs(namespace string, workLoadLabels *[]map[string]string) *[]Service {
if workLoadLabels == nil || len(*workLoadLabels) == 0 {
return nil
}
k8sClient := client.NewK8sClient()
var services []Service
for _, label := range *workLoadLabels {
labelSelector := labels.Set(label).AsSelector().String()
svcs, err := k8sClient.CoreV1().Services(namespace).List(metaV1.ListOptions{LabelSelector: labelSelector})
if err != nil {
glog.Errorf("get app's svc failed, reason: %v", err)
continue
}
for _, item := range svcs.Items {
if !isExist(services, item) {
services = append(services, *generateSvcObject(item))
}
}
}
return &services
}
func (ctl *ApplicationCtl) getIng(namespace string, services *[]Service) *[]ing {
if services == nil {
return nil
}
ingCtl := ResourceControllers.Controllers[Ingresses]
var ings []ing
for _, svc := range *services {
_, items, err := ingCtl.ListWithConditions(fmt.Sprintf("namespace = '%s' and rules like '%%%s%%' ", namespace, svc.Name), nil, "")
if err != nil {
glog.Error(err)
return nil
}
for _, ingress := range items.([]Ingress) {
var rules []ingressRule
err := json.Unmarshal([]byte(ingress.Rules), &rules)
if err != nil {
return nil
}
exist := false
var tmpRules []ingressRule
for _, rule := range rules {
if rule.Service == svc.Name {
exist = true
tmpRules = append(tmpRules, rule)
}
}
if exist {
ings = append(ings, ing{Name: ingress.Name, Rules: tmpRules})
}
}
}
return &ings
}
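`getIng` keeps only the ingress rules whose backend service matches one of the application's services. The filtering step in isolation, with a trimmed rule type:

```go
package main

import "fmt"

// ingressRule is a trimmed stand-in for the controller's rule type; getIng
// above keeps only the rules that route to the given service.
type ingressRule struct {
	Host    string
	Service string
}

func rulesForService(rules []ingressRule, svc string) []ingressRule {
	var kept []ingressRule
	for _, r := range rules {
		if r.Service == svc {
			kept = append(kept, r)
		}
	}
	return kept
}

func main() {
	rules := []ingressRule{
		{Host: "a.example.com", Service: "web"},
		{Host: "b.example.com", Service: "api"},
	}
	fmt.Println(len(rulesForService(rules, "web")))
}
```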
func (ctl *ApplicationCtl) ListApplication(runtimeId string, match, fuzzy map[string]string, paging *Paging) (int, interface{}, error) {
limit := paging.Limit
offset := paging.Offset
if strings.HasSuffix(ctl.OpenpitrixAddr, "/") {
ctl.OpenpitrixAddr = strings.TrimSuffix(ctl.OpenpitrixAddr, "/")
}
defaultStatus := "status=active&status=stopped&status=pending&status=ceased"
url := fmt.Sprintf("%s/v1/clusters?limit=%s&offset=%s", ctl.OpenpitrixAddr, strconv.Itoa(limit), strconv.Itoa(offset))
if len(fuzzy["name"]) > 0 {
url = fmt.Sprintf("%s&search_word=%s", url, fuzzy["name"])
}
if len(match["status"]) > 0 {
url = fmt.Sprintf("%s&status=%s", url, match["status"])
} else {
url = fmt.Sprintf("%s&%s", url, defaultStatus)
}
if len(runtimeId) > 0 {
url = fmt.Sprintf("%s&runtime_id=%s", url, runtimeId)
}
resp, err := makeHttpRequest("GET", url, "")
if err != nil {
glog.Errorf("request %s failed, reason: %s", url, err)
return 0, nil, err
}
var clusterList clusters
err = json.Unmarshal(resp, &clusterList)
if err != nil {
return 0, nil, err
}
var apps []Application
for _, item := range clusterList.Clusters {
var app Application
app.Name = item.Name
app.ClusterID = item.ClusterID
app.UpdateTime = item.UpdateTime
app.Status = item.Status
versionInfo, _ := ctl.GetVersion(item.VersionID)
app.Version = versionInfo
app.VersionId = item.VersionID
runtimeInfo, _ := ctl.GetRuntime(item.RunTimeId)
app.Runtime = runtimeInfo
app.RuntimeId = item.RunTimeId
appInfo, _, appId, _ := ctl.GetAppInfo(item.AppID)
app.App = appInfo
app.AppId = appId
app.Description = item.Description
apps = append(apps, app)
}
return clusterList.Total, apps, nil
}
func (ctl *ApplicationCtl) GetApp(clusterId string) (*Application, error) {
if strings.HasSuffix(ctl.OpenpitrixAddr, "/") {
ctl.OpenpitrixAddr = strings.TrimSuffix(ctl.OpenpitrixAddr, "/")
}
url := fmt.Sprintf("%s/v1/clusters?cluster_id=%s", ctl.OpenpitrixAddr, clusterId)
resp, err := makeHttpRequest("GET", url, "")
if err != nil {
glog.Error(err)
return nil, err
}
var clusterList clusters
err = json.Unmarshal(resp, &clusterList)
if err != nil {
glog.Error(err)
return nil, err
}
if len(clusterList.Clusters) == 0 {
return nil, fmt.Errorf("NotFound, clusterId:%s", clusterId)
}
item := clusterList.Clusters[0]
var app Application
app.Name = item.Name
app.ClusterID = item.ClusterID
app.UpdateTime = item.UpdateTime
app.CreateTime = item.CreateTime
app.Status = item.Status
versionInfo, _ := ctl.GetVersion(item.VersionID)
app.Version = versionInfo
app.VersionId = item.VersionID
runtimeInfo, _ := ctl.GetRuntime(item.RunTimeId)
app.Runtime = runtimeInfo
app.RuntimeId = item.RunTimeId
appInfo, repoId, appId, _ := ctl.GetAppInfo(item.AppID)
app.App = appInfo
app.AppId = appId
app.Description = item.Description
app.RepoName, _ = ctl.GetRepo(repoId)
app.WorkLoads = ctl.GetWorkLoads(app.Runtime, item.ClusterRoleSets)
workloadLabels := ctl.getLabels(app.Runtime, app.WorkLoads)
app.Services = ctl.getSvcs(app.Runtime, workloadLabels)
app.Ingresses = ctl.getIng(app.Runtime, app.Services)
return &app, nil
}

View File

@@ -0,0 +1,185 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"time"
"fmt"
"regexp"
"strings"
"github.com/golang/glog"
"github.com/pkg/errors"
rbac "k8s.io/api/rbac/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models/kubectl"
)
func (ctl *ClusterRoleBindingCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *ClusterRoleBindingCtl) sync(stopChan chan struct{}) {
ctl.initListerAndInformer()
ctl.informer.Run(stopChan)
}
func (ctl *ClusterRoleBindingCtl) total() int {
return 0
}
func (ctl *ClusterRoleBindingCtl) handleWorkspaceRoleChange(clusterRoleBinding *rbac.ClusterRoleBinding) {
if groups := regexp.MustCompile(fmt.Sprintf(`^system:(\S+):(%s)$`, strings.Join(constants.WorkSpaceRoles, "|"))).FindStringSubmatch(clusterRoleBinding.Name); len(groups) == 3 {
workspace := groups[1]
go ctl.resetNamespaceRoleBinding(workspace)
}
}
func (ctl *ClusterRoleBindingCtl) resetNamespaceRoleBinding(workspace string) {
selector := labels.SelectorFromSet(labels.Set{"kubesphere.io/workspace": workspace})
namespaces, err := ctl.K8sClient.CoreV1().Namespaces().List(meta_v1.ListOptions{LabelSelector: selector.String()})
if err != nil {
glog.Warningf("workspace %s roles sync failed: %v", workspace, err)
return
}
for _, namespace := range namespaces.Items {
patchJson := fmt.Sprintf(`{"metadata":{"annotations":{"%s":"%s"}}}`, initTimeAnnotateKey, "")
_, err := ctl.K8sClient.CoreV1().Namespaces().Patch(namespace.Name, "application/strategic-merge-patch+json", []byte(patchJson))
if err != nil {
glog.Warningf("workspace %s roles sync failed: %v", workspace, err)
return
}
}
}
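The strategic-merge patch above is built with `fmt.Sprintf`; building the same body via `encoding/json` sidesteps quoting issues if the annotation key or value ever contains special characters (the key below is illustrative, not necessarily `initTimeAnnotateKey`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// annotationPatch builds a strategic-merge-patch body that sets a single
// annotation; json.Marshal escapes the key and value correctly.
func annotationPatch(key, value string) ([]byte, error) {
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{
			"annotations": map[string]string{key: value},
		},
	}
	return json.Marshal(patch)
}

func main() {
	b, _ := annotationPatch("kubesphere.io/example-key", "")
	fmt.Println(string(b))
}
```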
func (ctl *ClusterRoleBindingCtl) initListerAndInformer() {
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Rbac().V1().ClusterRoleBindings().Lister()
ctl.informer = informerFactory.Rbac().V1().ClusterRoleBindings().Informer()
ctl.informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
clusterRoleBinding := obj.(*rbac.ClusterRoleBinding)
ctl.handleTerminalCreate(clusterRoleBinding)
},
UpdateFunc: func(old, new interface{}) {
oldValue := old.(*rbac.ClusterRoleBinding)
newValue := new.(*rbac.ClusterRoleBinding)
if !subjectsCompile(oldValue.Subjects, newValue.Subjects) {
ctl.handleWorkspaceRoleChange(newValue)
ctl.handleTerminalUpdate(oldValue, newValue)
}
},
DeleteFunc: func(obj interface{}) {
clusterRoleBinding := obj.(*rbac.ClusterRoleBinding)
ctl.handleTerminalDelete(clusterRoleBinding)
},
})
}
func (ctl *ClusterRoleBindingCtl) handleTerminalCreate(clusterRoleBinding *rbac.ClusterRoleBinding) {
if clusterRoleBinding.RoleRef.Name == constants.ClusterAdmin {
for _, subject := range clusterRoleBinding.Subjects {
if subject.Kind == rbac.UserKind {
err := kubectl.CreateKubectlDeploy(subject.Name)
if err != nil {
glog.Error(fmt.Sprintf("create %s's terminal pod failed:%s", subject.Name, err))
}
}
}
}
}
func (ctl *ClusterRoleBindingCtl) handleTerminalUpdate(old *rbac.ClusterRoleBinding, new *rbac.ClusterRoleBinding) {
if new.RoleRef.Name == constants.ClusterAdmin {
for _, newSubject := range new.Subjects {
isAdded := true
for _, oldSubject := range old.Subjects {
if oldSubject == newSubject {
isAdded = false
break
}
}
if isAdded && newSubject.Kind == rbac.UserKind {
err := kubectl.CreateKubectlDeploy(newSubject.Name)
if err != nil {
glog.Error(fmt.Sprintf("create %s's terminal pod failed:%s", newSubject.Name, err))
}
}
}
for _, oldSubject := range old.Subjects {
isDeleted := true
for _, newSubject := range new.Subjects {
if newSubject == oldSubject {
isDeleted = false
break
}
}
if isDeleted && oldSubject.Kind == rbac.UserKind {
err := kubectl.DelKubectlDeploy(oldSubject.Name)
if err != nil {
glog.Error(fmt.Sprintf("delete %s's terminal pod failed:%s", oldSubject.Name, err))
}
}
}
}
}
func (ctl *ClusterRoleBindingCtl) handleTerminalDelete(clusterRoleBinding *rbac.ClusterRoleBinding) {
if clusterRoleBinding.RoleRef.Name == constants.ClusterAdmin {
for _, subject := range clusterRoleBinding.Subjects {
if subject.Kind == rbac.UserKind {
err := kubectl.DelKubectlDeploy(subject.Name)
if err != nil {
glog.Error(fmt.Sprintf("delete %s's terminal pod failed:%s", subject.Name, err))
}
}
}
}
}
func subjectsCompile(s1 []rbac.Subject, s2 []rbac.Subject) bool {
if len(s1) != len(s2) {
return false
}
for i, v := range s1 {
if v.Name != s2[i].Name || v.Kind != s2[i].Kind {
return false
}
}
return true
}
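`subjectsCompile` is an order-sensitive comparison: the same subjects in a different order compare unequal, which still triggers a resync. A stand-alone sketch with a trimmed `Subject` type:

```go
package main

import "fmt"

// subject is a trimmed stand-in for rbac.Subject; sameSubjects mirrors
// subjectsCompile above: position-by-position equality on Kind and Name.
type subject struct {
	Kind string
	Name string
}

func sameSubjects(a, b []subject) bool {
	if len(a) != len(b) {
		return false
	}
	for i, s := range a {
		if s.Name != b[i].Name || s.Kind != b[i].Kind {
			return false
		}
	}
	return true
}

func main() {
	x := []subject{{Kind: "User", Name: "alice"}}
	fmt.Println(sameSubjects(x, x), sameSubjects(x, nil))
}
```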
func (ctl *ClusterRoleBindingCtl) CountWithConditions(conditions string) int {
return 0
}
func (ctl *ClusterRoleBindingCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
return 0, nil, errors.New("not implement")
}
func (ctl *ClusterRoleBindingCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -27,9 +27,16 @@ import (
"k8s.io/client-go/tools/cache"
)
const systemPrefix = "system:"
func (ctl *ClusterRoleCtl) generateObject(item v1.ClusterRole) *ClusterRole {
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
if strings.HasPrefix(name, "system:") {
if strings.HasPrefix(name, systemPrefix) || item.Annotations == nil || len(item.Annotations[creator]) == 0 {
return nil
}
@@ -38,35 +45,27 @@ func (ctl *ClusterRoleCtl) generateObject(item v1.ClusterRole) *ClusterRole {
createTime = time.Now()
}
object := &ClusterRole{Name: name, CreateTime: createTime, Annotation: Annotation{item.Annotations}}
object := &ClusterRole{Name: name, CreateTime: createTime, Annotation: MapString{item.Annotations}, DisplayName: displayName}
return object
}
func (ctl *ClusterRoleCtl) listAndWatch() {
defer func() {
close(ctl.aliveChan)
if err := recover(); err != nil {
glog.Error(err)
return
}
}()
func (ctl *ClusterRoleCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *ClusterRoleCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&ClusterRole{}) {
db.DropTable(&ClusterRole{})
}
db = db.CreateTable(&ClusterRole{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -74,50 +73,90 @@ func (ctl *ClusterRoleCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(*item)
if obj != nil {
if err := db.Create(obj).Error; err != nil {
glog.Error("cluster roles sync error", err)
}
}
}
ctl.informer.Run(stopChan)
}
func (ctl *ClusterRoleCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
count := 0
for _, item := range list {
if !strings.HasPrefix(item.Name, systemPrefix) && item.Annotations != nil && len(item.Annotations[creator]) > 0 {
count++
}
}
return count
}
func (ctl *ClusterRoleCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Rbac().V1().ClusterRoles().Lister()
informer := informerFactory.Rbac().V1().ClusterRoles().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.ClusterRole)
mysqlObject := ctl.generateObject(*object)
if mysqlObject != nil {
if err := db.Create(mysqlObject).Error; err != nil {
glog.Error("cluster roles sync error", err)
}
}
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1.ClusterRole)
mysqlObject := ctl.generateObject(*object)
if mysqlObject != nil {
if err := db.Save(mysqlObject).Error; err != nil {
glog.Error("cluster roles update error", err)
}
}
},
DeleteFunc: func(obj interface{}) {
object := obj.(*v1.ClusterRole)
if err := db.Delete(ClusterRole{}, "name=?", object.Name).Error; err != nil {
glog.Error("cluster roles delete error", err)
}
},
})
ctl.informer = informer
}
func (ctl *ClusterRoleCtl) CountWithConditions(conditions string) int {
var object ClusterRole
if strings.Contains(conditions, "namespace") {
conditions = ""
}
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *ClusterRoleCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var object ClusterRole
var list []ClusterRole
var total int
if len(order) == 0 {
order = "createTime desc"
}
db := ctl.DB
listWithConditions(db, &total, &object, &list, conditions, paging, order)
@@ -125,9 +164,7 @@ func (ctl *ClusterRoleCtl) ListWithConditions(conditions string, paging *Paging)
return total, list, nil
}
func (ctl *ClusterRoleCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -16,7 +16,26 @@ limitations under the License.
package controllers
import (
"fmt"
"io/ioutil"
"net/http"
"strings"
"time"
"sync"
"github.com/golang/glog"
"github.com/jinzhu/gorm"
"github.com/pkg/errors"
"kubesphere.io/kubesphere/pkg/constants"
)
const (
checkPeriod = 30 * time.Minute
sleepPeriod = 15 * time.Second
)
func listWithConditions(db *gorm.DB, total *int, object, list interface{}, conditions string, paging *Paging, order string) {
if len(conditions) == 0 {
@@ -50,3 +69,93 @@ func countWithConditions(db *gorm.DB, conditions string, object interface{}) int
}
return count
}
func makeHttpRequest(method, url, data string) ([]byte, error) {
var req *http.Request
var err error
if method == "GET" {
req, err = http.NewRequest(method, url, nil)
} else {
req, err = http.NewRequest(method, url, strings.NewReader(data))
}
if err != nil {
glog.Error(err)
return nil, err
}
req.Header.Add("Authorization", constants.OpenPitrixProxyToken)
httpClient := &http.Client{}
resp, err := httpClient.Do(req)
if err != nil {
err := fmt.Errorf("Request to %s failed, method: %s, reason: %s ", url, method, err)
glog.Error(err)
return nil, err
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if resp.StatusCode >= http.StatusBadRequest {
err = errors.New(string(body))
}
return body, err
}
func handleCrash(ctl Controller) {
close(ctl.chanAlive())
if err := recover(); err != nil {
glog.Errorf("panic occurred in %s controller's listAndWatch function, reason: %s", ctl.Name(), err)
return
}
}
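`handleCrash` only works because `listAndWatch` invokes it via `defer`: `recover` returns non-nil only when called (directly) from a deferred function on the panicking goroutine. A minimal, self-contained illustration (not the project's code):

```go
package main

import "fmt"

// run panics, but its deferred closure recovers and converts the panic
// into a return value, so the caller keeps running.
func run() (msg string) {
	defer func() {
		if err := recover(); err != nil {
			msg = fmt.Sprintf("recovered: %v", err)
		}
	}()
	panic("boom")
}

func main() {
	fmt.Println(run())          // recovered: boom
	fmt.Println("still running") // the goroutine survived the panic
}
```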
func hasSynced(ctl Controller) bool {
totalInDb := ctl.CountWithConditions("")
totalInK8s := ctl.total()
return totalInDb == totalInK8s
}
func checkAndResync(ctl Controller, stopChan chan struct{}) {
defer close(stopChan)
lastTime := time.Now()
for {
select {
case <-ctl.chanStop():
return
default:
if time.Since(lastTime) < checkPeriod {
time.Sleep(sleepPeriod)
break
}
lastTime = time.Now()
if !hasSynced(ctl) {
glog.Errorf("data in db and kubernetes are inconsistent, resyncing %s controller", ctl.Name())
close(stopChan)
stopChan = make(chan struct{})
go ctl.sync(stopChan)
}
}
}
}
func listAndWatch(ctl Controller, wg *sync.WaitGroup) {
defer handleCrash(ctl)
defer ctl.CloseDB()
defer wg.Done()
stopChan := make(chan struct{})
go ctl.sync(stopChan)
checkAndResync(ctl, stopChan)
}
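The sync contract in these helpers boils down to comparing two counts: what the database holds versus what the cluster cache reports, with a resync triggered on drift. That check can be sketched independently of Kubernetes (the `counter` interface and `fakeCtl` below are hypothetical stand-ins, not the project's `Controller`):

```go
package main

import "fmt"

// counter is a trimmed stand-in for the Controller interface used by
// hasSynced: one count from the database, one from the cluster cache.
type counter interface {
	CountWithConditions(conditions string) int
	total() int
}

// fakeCtl fixes both counts so the drift check can be exercised.
type fakeCtl struct{ db, k8s int }

func (f fakeCtl) CountWithConditions(string) int { return f.db }
func (f fakeCtl) total() int                     { return f.k8s }

// inSync mirrors hasSynced above: the controller is considered in sync
// exactly when both counts agree.
func inSync(c counter) bool {
	return c.CountWithConditions("") == c.total()
}

func main() {
	fmt.Println(inSync(fakeCtl{db: 5, k8s: 5})) // true
	fmt.Println(inSync(fakeCtl{db: 5, k8s: 7})) // false: would trigger a resync
}
```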

View File

@@ -0,0 +1,162 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"strings"
"time"
"github.com/golang/glog"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
)
func (ctl *ConfigMapCtl) generateObject(item v1.ConfigMap) *ConfigMap {
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
createTime := item.CreationTimestamp.Time
if createTime.IsZero() {
createTime = time.Now()
}
var entries []string
for entry := range item.Data {
entries = append(entries, entry)
}
object := &ConfigMap{
Name: item.Name,
Namespace: item.Namespace,
CreateTime: createTime,
Annotation: MapString{item.Annotations},
DisplayName: displayName,
Entries: strings.Join(entries, ","),
}
return object
}
func (ctl *ConfigMapCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *ConfigMapCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&ConfigMap{}) {
db.DropTable(&ConfigMap{})
}
db = db.CreateTable(&ConfigMap{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
}
for _, item := range list {
obj := ctl.generateObject(*item)
if obj != nil {
db.Create(obj)
}
}
ctl.informer.Run(stopChan)
}
func (ctl *ConfigMapCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *ConfigMapCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Core().V1().ConfigMaps().Lister()
informer := informerFactory.Core().V1().ConfigMaps().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.ConfigMap)
mysqlObject := ctl.generateObject(*object)
if mysqlObject != nil {
db.Create(mysqlObject)
}
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1.ConfigMap)
mysqlObject := ctl.generateObject(*object)
if mysqlObject != nil {
db.Save(mysqlObject)
}
},
DeleteFunc: func(obj interface{}) {
var item ConfigMap
object := obj.(*v1.ConfigMap)
db.Where("name=?", object.Name).Find(&item)
db.Delete(item)
},
})
ctl.informer = informer
}
func (ctl *ConfigMapCtl) CountWithConditions(conditions string) int {
var object ConfigMap
if strings.Contains(conditions, "namespace") {
conditions = ""
}
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *ConfigMapCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var object ConfigMap
var list []ConfigMap
var total int
if len(order) == 0 {
order = "createTime desc"
}
db := ctl.DB
listWithConditions(db, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *ConfigMapCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -0,0 +1,64 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"time"
"k8s.io/client-go/informers"
)
func (ctl *ControllerRevisionCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *ControllerRevisionCtl) sync(stopChan chan struct{}) {
ctl.initListerAndInformer()
ctl.informer.Run(stopChan)
}
func (ctl *ControllerRevisionCtl) total() int {
return 0
}
func (ctl *ControllerRevisionCtl) initListerAndInformer() {
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Apps().V1().ControllerRevisions().Lister()
informer := informerFactory.Apps().V1().ControllerRevisions().Informer()
ctl.informer = informer
}
func (ctl *ControllerRevisionCtl) CountWithConditions(conditions string) int {
return 0
}
func (ctl *ControllerRevisionCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
return 0, nil, nil
}
func (ctl *ControllerRevisionCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -0,0 +1,158 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"time"
"github.com/golang/glog"
"k8s.io/api/batch/v1beta1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
)
func (ctl *CronJobCtl) generateObject(item v1beta1.CronJob) *CronJob {
var status, displayName string
var lastScheduleTime *time.Time
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
namespace := item.Namespace
status = Running
if item.Spec.Suspend != nil && *item.Spec.Suspend {
status = Pause
}
schedule := item.Spec.Schedule
if item.Status.LastScheduleTime != nil {
lastScheduleTime = &item.Status.LastScheduleTime.Time
}
active := len(item.Status.Active)
object := &CronJob{
Namespace: namespace,
Name: name,
DisplayName: displayName,
LastScheduleTime: lastScheduleTime,
Active: active,
Schedule: schedule,
Status: status,
Annotation: MapString{item.Annotations},
Labels: MapString{item.ObjectMeta.Labels},
}
return object
}
func (ctl *CronJobCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *CronJobCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&CronJob{}) {
db.DropTable(&CronJob{})
}
db = db.CreateTable(&CronJob{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
}
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *CronJobCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *CronJobCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Batch().V1beta1().CronJobs().Lister()
informer := informerFactory.Batch().V1beta1().CronJobs().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1beta1.CronJob)
mysqlObject := ctl.generateObject(*object)
db.Create(mysqlObject)
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1beta1.CronJob)
mysqlObject := ctl.generateObject(*object)
db.Save(mysqlObject)
},
DeleteFunc: func(obj interface{}) {
var item CronJob
object := obj.(*v1beta1.CronJob)
db.Where("name=? And namespace=?", object.Name, object.Namespace).Find(&item)
db.Delete(item)
},
})
ctl.informer = informer
}
func (ctl *CronJobCtl) CountWithConditions(conditions string) int {
var object CronJob
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *CronJobCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []CronJob
var object CronJob
var total int
if len(order) == 0 {
order = "lastScheduleTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *CronJobCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -28,8 +28,11 @@ import (
)
func (ctl *DaemonsetCtl) generateObject(item v1.DaemonSet) *Daemonset {
var app, status, displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
namespace := item.Namespace
availablePodNum := item.Status.NumberAvailable
@@ -55,36 +58,38 @@ func (ctl *DaemonsetCtl) generateObject(item v1.DaemonSet) *Daemonset {
status = Updating
}
object := &Daemonset{
Namespace: namespace,
Name: name,
DisplayName: displayName,
Available: availablePodNum,
Desire: desirePodNum,
App: app,
CreateTime: createTime,
Status: status,
NodeSelector: string(nodeSelectorStr),
Annotation: MapString{item.Annotations},
Labels: MapString{item.Spec.Selector.MatchLabels},
}
return object
}
func (ctl *DaemonsetCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *DaemonsetCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Daemonset{}) {
db.DropTable(&Daemonset{})
}
db = db.CreateTable(&Daemonset{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -93,9 +98,27 @@ func (ctl *DaemonsetCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *DaemonsetCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *DaemonsetCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Apps().V1().DaemonSets().Lister()
informer := informerFactory.Apps().V1().DaemonSets().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -117,7 +140,7 @@ func (ctl *DaemonsetCtl) listAndWatch() {
},
})
ctl.informer = informer
}
func (ctl *DaemonsetCtl) CountWithConditions(conditions string) int {
@@ -126,25 +149,21 @@ func (ctl *DaemonsetCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *DaemonsetCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Daemonset
var object Daemonset
var total int
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *DaemonsetCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -21,16 +21,19 @@ import (
"github.com/golang/glog"
"k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
)
func (ctl *DeploymentCtl) generateObject(item v1.Deployment) *Deployment {
var app, status, displayName string
var updateTime time.Time
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
namespace := item.Namespace
availablePodNum := item.Status.AvailableReplicas
@@ -42,9 +45,13 @@ func (ctl *DeploymentCtl) generateObject(item v1.Deployment) *Deployment {
app = release + "/" + chart
}
for _, condition := range item.Status.Conditions {
if updateTime.IsZero() {
updateTime = condition.LastUpdateTime.Time
} else {
if updateTime.Before(condition.LastUpdateTime.Time) {
updateTime = condition.LastUpdateTime.Time
}
}
}
if updateTime.IsZero() {
@@ -61,19 +68,25 @@ func (ctl *DeploymentCtl) generateObject(item v1.Deployment) *Deployment {
}
}
return &Deployment{
Namespace: namespace,
Name: name,
Available: availablePodNum,
Desire: desirePodNum,
App: app,
UpdateTime: updateTime,
Status: status,
Annotation: MapString{item.Annotations},
Labels: MapString{item.Spec.Selector.MatchLabels},
DisplayName: displayName,
}
}
func (ctl *DeploymentCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *DeploymentCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Deployment{}) {
db.DropTable(&Deployment{})
@@ -81,12 +94,8 @@ func (ctl *DeploymentCtl) listAndWatch() {
db = db.CreateTable(&Deployment{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -95,9 +104,29 @@ func (ctl *DeploymentCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *DeploymentCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *DeploymentCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Apps().V1().Deployments().Lister()
informer := informerFactory.Apps().V1().Deployments().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -118,8 +147,7 @@ func (ctl *DeploymentCtl) listAndWatch() {
},
})
ctl.informer = informer
}
func (ctl *DeploymentCtl) CountWithConditions(conditions string) int {
@@ -128,25 +156,21 @@ func (ctl *DeploymentCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *DeploymentCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Deployment
var object Deployment
var total int
if len(order) == 0 {
order = "updateTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *DeploymentCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -20,6 +20,8 @@ import (
"strings"
"time"
"encoding/json"
"github.com/golang/glog"
"k8s.io/api/extensions/v1beta1"
"k8s.io/apimachinery/pkg/labels"
@@ -28,9 +30,16 @@ import (
)
func (ctl *IngressCtl) generateObject(item v1beta1.Ingress) *Ingress {
var ip, tls, displayName string
name := item.Name
namespace := item.Namespace
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
createTime := item.CreationTimestamp.Time
if createTime.IsZero() {
createTime = time.Now()
@@ -46,35 +55,51 @@ func (ctl *IngressCtl) generateObject(item v1beta1.Ingress) *Ingress {
ip = strings.Join(ipList, ",")
}
var ingRules []ingressRule
for _, rule := range item.Spec.Rules {
host := rule.Host
if rule.HTTP == nil {
continue
}
for _, path := range rule.HTTP.Paths {
var ingRule ingressRule
ingRule.Host = host
ingRule.Service = path.Backend.ServiceName
ingRule.Port = path.Backend.ServicePort.IntVal
ingRule.Path = path.Path
ingRules = append(ingRules, ingRule)
}
}
ruleStr, _ := json.Marshal(ingRules)
object := &Ingress{
Namespace: namespace,
Name: name,
DisplayName: displayName,
TlsTermination: tls,
Ip: ip,
CreateTime: createTime,
Annotation: MapString{item.Annotations},
Rules: string(ruleStr),
Labels: MapString{item.Labels},
}
return object
}
func (ctl *IngressCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *IngressCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Ingress{}) {
db.DropTable(&Ingress{})
}
db = db.CreateTable(&Ingress{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -83,9 +108,28 @@ func (ctl *IngressCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *IngressCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *IngressCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Extensions().V1beta1().Ingresses().Lister()
informer := informerFactory.Extensions().V1beta1().Ingresses().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -107,7 +151,7 @@ func (ctl *IngressCtl) listAndWatch() {
},
})
ctl.informer = informer
}
func (ctl *IngressCtl) CountWithConditions(conditions string) int {
@@ -116,25 +160,21 @@ func (ctl *IngressCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *IngressCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Ingress
var object Ingress
var total int
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *IngressCtl) Lister() interface{} {
return ctl.lister
}

View File

@@ -0,0 +1,318 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"encoding/json"
"fmt"
"time"
"github.com/golang/glog"
"k8s.io/api/batch/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"reflect"
"strings"
"kubesphere.io/kubesphere/pkg/client"
)
var k8sClient *kubernetes.Clientset
const retryTimes = 3
func (ctl *JobCtl) generateObject(item v1.Job) *Job {
var status, displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
namespace := item.Namespace
succeedPodNum := item.Status.Succeeded
desirePodNum := *item.Spec.Completions
createTime := item.CreationTimestamp.Time
updateTime := createTime
for _, condition := range item.Status.Conditions {
if condition.Type == "Failed" && condition.Status == "True" {
status = Failed
}
if condition.Type == "Complete" && condition.Status == "True" {
status = Completed
}
if updateTime.Before(condition.LastProbeTime.Time) {
updateTime = condition.LastProbeTime.Time
}
if updateTime.Before(condition.LastTransitionTime.Time) {
updateTime = condition.LastTransitionTime.Time
}
}
if desirePodNum > succeedPodNum && len(status) == 0 {
status = Running
}
object := &Job{
Namespace: namespace,
Name: name,
DisplayName: displayName,
Desire: desirePodNum,
Completed: succeedPodNum,
UpdateTime: updateTime,
CreateTime: createTime,
Status: status,
Annotation: MapString{item.Annotations},
Labels: MapString{item.Labels},
}
return object
}
func (ctl *JobCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *JobCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Job{}) {
db.DropTable(&Job{})
}
db = db.CreateTable(&Job{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
}
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *JobCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *JobCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Batch().V1().Jobs().Lister()
informer := informerFactory.Batch().V1().Jobs().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.Job)
mysqlObject := ctl.generateObject(*object)
ctl.makeRevision(object)
db.Create(mysqlObject)
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1.Job)
mysqlObject := ctl.generateObject(*object)
ctl.makeRevision(object)
db.Save(mysqlObject)
},
DeleteFunc: func(obj interface{}) {
var item Job
object := obj.(*v1.Job)
db.Where("name=? And namespace=?", object.Name, object.Namespace).Find(&item)
db.Delete(item)
},
})
ctl.informer = informer
}
func (ctl *JobCtl) CountWithConditions(conditions string) int {
var object Job
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *JobCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Job
var object Job
var total int
if len(order) == 0 {
order = "updateTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *JobCtl) Lister() interface{} {
return ctl.lister
}
func getRevisions(job v1.Job) (JobRevisions, error) {
revisions := make(JobRevisions)
if revisionsStr, exist := job.Annotations["revisions"]; exist {
err := json.Unmarshal([]byte(revisionsStr), &revisions)
if err != nil {
return nil, fmt.Errorf("failed to get job %s's revisions, reason: %s", job.Name, err)
}
}
return revisions, nil
}
func getCurrentRevision(item *v1.Job) JobRevision {
var revision JobRevision
for _, condition := range item.Status.Conditions {
if condition.Type == "Failed" && condition.Status == "True" {
revision.Status = Failed
revision.Reasons = append(revision.Reasons, condition.Reason)
revision.Messages = append(revision.Messages, condition.Message)
}
if condition.Type == "Complete" && condition.Status == "True" {
revision.Status = Completed
}
}
if len(revision.Status) == 0 {
revision.Status = Running
}
revision.DesirePodNum = *item.Spec.Completions
revision.Succeed = item.Status.Succeeded
revision.Failed = item.Status.Failed
revision.StartTime = item.CreationTimestamp.Time
revision.Uid = string(item.UID)
if item.Status.CompletionTime != nil {
revision.CompletionTime = item.Status.CompletionTime.Time
}
return revision
}
func deleteJob(namespace, job string) error {
deletePolicy := metav1.DeletePropagationBackground
err := k8sClient.BatchV1().Jobs(namespace).Delete(job, &metav1.DeleteOptions{PropagationPolicy: &deletePolicy})
return err
}
func (ctl *JobCtl) makeRevision(job *v1.Job) {
revisionIndex := -1
revisions, err := getRevisions(*job)
if err != nil {
glog.Error(err)
return
}
uid := job.UID
for index, revision := range revisions {
if revision.Uid == string(uid) {
currentRevision := getCurrentRevision(job)
if reflect.DeepEqual(currentRevision, revision) {
return
} else {
revisionIndex = index
break
}
}
}
if revisionIndex == -1 {
revisionIndex = len(revisions) + 1
}
revisions[revisionIndex] = getCurrentRevision(job)
revisionsByte, err := json.Marshal(revisions)
if err != nil {
glog.Error(err)
}
if job.Annotations == nil {
job.Annotations = make(map[string]string)
}
job.Annotations["revisions"] = string(revisionsByte)
ctl.K8sClient.BatchV1().Jobs(job.Namespace).Update(job)
}
func JobReRun(namespace, jobName string) (string, error) {
k8sClient = client.NewK8sClient()
job, err := k8sClient.BatchV1().Jobs(namespace).Get(jobName, metav1.GetOptions{})
if err != nil {
return "", err
}
newJob := *job
newJob.ResourceVersion = ""
newJob.Status = v1.JobStatus{}
newJob.ObjectMeta.UID = ""
newJob.Annotations["revisions"] = strings.Replace(job.Annotations["revisions"], Running, Unfinished, -1)
delete(newJob.Spec.Selector.MatchLabels, "controller-uid")
delete(newJob.Spec.Template.ObjectMeta.Labels, "controller-uid")
err = deleteJob(namespace, jobName)
if err != nil {
glog.Errorf("failed to rerun job %s, reason: %s", jobName, err)
return "", fmt.Errorf("failed to rerun job %s", jobName)
}
for i := 0; i < retryTimes; i++ {
_, err = k8sClient.BatchV1().Jobs(namespace).Create(&newJob)
if err != nil {
time.Sleep(time.Second)
continue
}
break
}
if err != nil {
glog.Errorf("failed to rerun job %s, reason: %s", jobName, err)
return "", fmt.Errorf("failed to rerun job %s", jobName)
}
return "succeed", nil
}
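JobReRun recreates the Job rather than patching it in place: server-populated fields (`resourceVersion`, `uid`, `status`) must be cleared and the `controller-uid` selector labels dropped, or the apiserver will reject the new object. The label/annotation preparation can be sketched on plain maps, independent of client-go (the `Running`/`Unfinished` constants are assumed to be their string names):

```go
package main

import (
	"fmt"
	"strings"
)

const (
	running    = "Running"
	unfinished = "Unfinished"
)

// prepareForRerun mutates copies of a Job's labels/annotations the way
// JobReRun does: drop the controller-uid selector label and mark any
// still-"Running" revision as "Unfinished" in the revisions annotation.
func prepareForRerun(labels, annotations map[string]string) (map[string]string, map[string]string) {
	newLabels := make(map[string]string, len(labels))
	for k, v := range labels {
		if k == "controller-uid" {
			continue // the new Job gets a fresh controller-uid from the apiserver
		}
		newLabels[k] = v
	}
	newAnnotations := make(map[string]string, len(annotations))
	for k, v := range annotations {
		newAnnotations[k] = v
	}
	newAnnotations["revisions"] = strings.Replace(annotations["revisions"], running, unfinished, -1)
	return newLabels, newAnnotations
}

func main() {
	labels := map[string]string{"controller-uid": "123", "app": "demo"}
	annotations := map[string]string{"revisions": `{"1":{"status":"Running"}}`}
	l, a := prepareForRerun(labels, annotations)
	fmt.Println(l, a["revisions"])
}
```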


@@ -19,9 +19,6 @@ package controllers
import (
"encoding/json"
"fmt"
"time"
"github.com/golang/glog"
@@ -30,12 +27,17 @@ import (
"k8s.io/apimachinery/pkg/api/resource"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/tools/cache"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/kubernetes/pkg/util/slice"
"k8s.io/client-go/informers"
"k8s.io/kubernetes/pkg/apis/core"
utilversion "k8s.io/kubernetes/pkg/util/version"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/options"
@@ -44,17 +46,19 @@ import (
const (
provider = "kubernetes"
admin = "admin"
editor = "editor"
operator = "operator"
viewer = "viewer"
kubectlNamespace = constants.KubeSphereControlNamespace
kubectlConfigKey = "config"
openPitrixRuntimeAnnotateKey = "openpitrix_runtime"
creatorAnnotateKey = "creator"
initTimeAnnotateKey = "kubesphere.io/init-time"
workspaceLabelKey = "kubesphere.io/workspace"
)
var adminRules = []rbac.PolicyRule{{Verbs: []string{"*"}, APIGroups: []string{"*"}, Resources: []string{"*"}}}
var editorRules = []rbac.PolicyRule{{Verbs: []string{"*"}, APIGroups: []string{"", "apps", "extensions", "batch", "kubesphere.io", "account.kubesphere.io"}, Resources: []string{"*"}}}
var viewerRules = []rbac.PolicyRule{{Verbs: []string{"list", "get", "watch"}, APIGroups: []string{"", "apps", "extensions", "batch", "kubesphere.io", "account.kubesphere.io"}, Resources: []string{"*"}}}
type runTime struct {
RuntimeId string `json:"runtime_id"`
@@ -69,25 +73,6 @@ type DeleteRunTime struct {
RuntimeId []string `json:"runtime_id"`
}
func (ctl *NamespaceCtl) getKubeConfig(user string) (string, error) {
k8sClient := client.NewK8sClient()
configmap, err := k8sClient.CoreV1().ConfigMaps(kubectlNamespace).Get(user, metaV1.GetOptions{})
@@ -148,106 +133,255 @@ func (ctl *NamespaceCtl) createOpRuntime(namespace string) ([]byte, error) {
return makeHttpRequest("POST", url, string(body))
}
func (ctl *NamespaceCtl) updateSystemRoleBindings(namespace *v1.Namespace) error {
workspace := ""
if namespace.Labels != nil {
workspace = namespace.Labels[workspaceLabelKey]
}
adminBinding, err := ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Get(admin, metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
adminBinding = new(rbac.RoleBinding)
adminBinding.Name = admin
adminBinding.Namespace = namespace.Name
adminBinding.RoleRef = rbac.RoleRef{Kind: "Role", Name: admin}
} else {
return err
}
}
adminBinding.Subjects = make([]rbac.Subject, 0)
if workspace != "" {
workspaceAdmin, err := ctl.K8sClient.RbacV1().ClusterRoleBindings().Get(fmt.Sprintf("system:%s:%s", workspace, constants.WorkspaceAdmin), metaV1.GetOptions{})
if err != nil {
return err
}
adminBinding.Subjects = append(adminBinding.Subjects, workspaceAdmin.Subjects...)
}
if adminBinding.ResourceVersion == "" {
_, err = ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Create(adminBinding)
} else {
_, err = ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Update(adminBinding)
}
if err != nil {
return err
}
viewerBinding, err := ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Get(viewer, metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
viewerBinding = new(rbac.RoleBinding)
viewerBinding.Name = viewer
viewerBinding.Namespace = namespace.Name
viewerBinding.RoleRef = rbac.RoleRef{Kind: "Role", Name: viewer}
} else {
return err
}
}
viewerBinding.Subjects = make([]rbac.Subject, 0)
if workspace != "" {
workspaceViewer, err := ctl.K8sClient.RbacV1().ClusterRoleBindings().Get(fmt.Sprintf("system:%s:%s", workspace, constants.WorkspaceViewer), metaV1.GetOptions{})
if err != nil {
return err
}
viewerBinding.Subjects = append(viewerBinding.Subjects, workspaceViewer.Subjects...)
}
if viewerBinding.ResourceVersion == "" {
_, err = ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Create(viewerBinding)
} else {
_, err = ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Update(viewerBinding)
}
if err != nil {
return err
}
return nil
}
func (ctl *NamespaceCtl) createDefaultRoleBinding(namespace *v1.Namespace) error {
creator := ""
if namespace.Annotations != nil {
creator = namespace.Annotations[creatorAnnotateKey]
}
// create once
if creator != "" {
creatorBindingName := fmt.Sprintf("%s-admin", creator)
creatorBinding, err := ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Get(creatorBindingName, metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
creatorBinding = new(rbac.RoleBinding)
creatorBinding.Name = creatorBindingName
creatorBinding.Namespace = namespace.Name
creatorBinding.RoleRef = rbac.RoleRef{Kind: "Role", Name: admin}
} else {
return err
}
}
creatorBinding.Subjects = []rbac.Subject{{Kind: rbac.UserKind, Name: creator}}
if creatorBinding.ResourceVersion == "" {
_, err = ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Create(creatorBinding)
} else {
_, err = ctl.K8sClient.RbacV1().RoleBindings(namespace.Name).Update(creatorBinding)
}
if err != nil {
return err
}
}
return nil
}
func (ctl *NamespaceCtl) CreateDefaultRoleAndRoleBinding(namespace *v1.Namespace) error {
adminRole := &rbac.Role{ObjectMeta: metaV1.ObjectMeta{Name: admin, Namespace: namespace.Name, Annotations: map[string]string{creatorAnnotateKey: "system"}}, Rules: adminRules}
operatorRole := &rbac.Role{ObjectMeta: metaV1.ObjectMeta{Name: operator, Namespace: namespace.Name, Annotations: map[string]string{creatorAnnotateKey: "system"}}, Rules: editorRules}
viewerRole := &rbac.Role{ObjectMeta: metaV1.ObjectMeta{Name: viewer, Namespace: namespace.Name, Annotations: map[string]string{creatorAnnotateKey: "system"}}, Rules: viewerRules}
_, err := ctl.K8sClient.RbacV1().Roles(namespace.Name).Create(adminRole)
if err != nil && !errors.IsAlreadyExists(err) {
return err
} else if err == nil {
if err := ctl.createDefaultRoleBinding(namespace); err != nil {
glog.Warning("default role binding create failed: ", namespace.Name)
}
}
_, err = ctl.K8sClient.RbacV1().Roles(namespace.Name).Create(operatorRole)
if err != nil && !errors.IsAlreadyExists(err) {
return err
}
_, err = ctl.K8sClient.RbacV1().Roles(namespace.Name).Create(viewerRole)
if err != nil && !errors.IsAlreadyExists(err) {
return err
}
return nil
}
func (ctl *NamespaceCtl) createRoleAndRuntime(namespace *v1.Namespace) {
runtime := ""
initTime := ""
if namespace.Annotations != nil {
runtime = namespace.Annotations[openPitrixRuntimeAnnotateKey]
initTime = namespace.Annotations[initTimeAnnotateKey]
}
componentsNamespaces := []string{constants.KubeSystemNamespace, constants.OpenPitrixNamespace, constants.IstioNamespace, constants.KubeSphereNamespace}
if runtime == "" && !slice.ContainsString(componentsNamespaces, namespace.Name, nil) {
_, runtimeCreateError := ctl.createOpRuntime(namespace.Name)
if runtimeCreateError != nil {
glog.Error("runtime create error:", runtimeCreateError)
}
}
if initTime == "" {
err := ctl.CreateDefaultRoleAndRoleBinding(namespace)
if err == nil {
err = ctl.updateSystemRoleBindings(namespace)
if err != nil {
glog.Error("role binding update error:", err)
}
} else {
glog.Error("default role create error:", err)
}
if err == nil {
patchJSON := fmt.Sprintf(`{"metadata":{"annotations":{"%s":"%s"}}}`, initTimeAnnotateKey, time.Now().UTC().Format("2006-01-02T15:04:05Z"))
_, err = ctl.K8sClient.CoreV1().Namespaces().Patch(namespace.Name, "application/strategic-merge-patch+json", []byte(patchJSON))
if err != nil {
glog.Error("failed to patch init-time annotation on namespace ", namespace.Name, ": ", err)
}
}
}
}
func (ctl *NamespaceCtl) createCephSecretAfterNewNs(item v1.Namespace) {
// Kubernetes version must < 1.11.0
verInfo, err := ctl.K8sClient.ServerVersion()
if err != nil {
glog.Error("failed to get kubernetes server version: ", err)
return
}
if !utilversion.MustParseSemantic(verInfo.String()).LessThan(utilversion.MustParseSemantic("v1.11.0")) {
glog.Infof("disable Ceph secret controller due to k8s version %s >= v1.11.0", verInfo.String())
return
}
// Create Ceph secret in the new namespace
newNsName := item.Name
scList, _ := ctl.K8sClient.StorageV1().StorageClasses().List(metaV1.ListOptions{})
if scList == nil {
return
}
for _, sc := range scList.Items {
if sc.Provisioner == rbdPluginName {
glog.Infof("creating Ceph user secret for storage class %s in namespace %s", sc.GetName(), newNsName)
if secretName, ok := sc.Parameters[rbdUserSecretNameKey]; ok {
secret, err := ctl.K8sClient.CoreV1().Secrets(core.NamespaceSystem).Get(secretName, metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
glog.Errorf("cannot find secret in namespace %s, error: %s", core.NamespaceSystem, err.Error())
continue
}
glog.Errorf("failed to find secret in namespace %s, error: %s", core.NamespaceSystem, err.Error())
continue
}
glog.Infof("found secret %s in namespace %s", secret.GetName(), secret.GetNamespace())
newSecret := &v1.Secret{
TypeMeta: metaV1.TypeMeta{
Kind: secret.Kind,
APIVersion: secret.APIVersion,
},
ObjectMeta: metaV1.ObjectMeta{
Name: secret.GetName(),
Namespace: newNsName,
Labels: secret.GetLabels(),
Annotations: secret.GetAnnotations(),
DeletionGracePeriodSeconds: secret.GetDeletionGracePeriodSeconds(),
ClusterName: secret.GetClusterName(),
},
Data: secret.Data,
StringData: secret.StringData,
Type: secret.Type,
}
glog.Infof("creating secret %s in namespace %s...", newSecret.GetName(), newSecret.GetNamespace())
_, err = ctl.K8sClient.CoreV1().Secrets(newSecret.GetNamespace()).Create(newSecret)
if err != nil {
glog.Errorf("failed to create secret in namespace %s, error: %v", newSecret.GetNamespace(), err)
continue
}
} else {
glog.Errorf("failed to find user secret name in storage class %s", sc.GetName())
}
}
}
}
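The Ceph secret controller above is enabled only when the server is older than v1.11.0, using `utilversion.MustParseSemantic(...).LessThan(...)`. A simplified, dependency-free sketch of that version gate (a hand-rolled compare of the numeric major.minor.patch components; unlike the real library it ignores pre-release suffixes):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// olderThan reports whether version a sorts before version b, comparing
// only the numeric major.minor.patch components. Simplified stand-in for
// k8s.io/kubernetes/pkg/util/version.
func olderThan(a, b string) bool {
	pa, pb := parse(a), parse(b)
	for i := 0; i < 3; i++ {
		if pa[i] != pb[i] {
			return pa[i] < pb[i]
		}
	}
	return false
}

func parse(v string) [3]int {
	v = strings.TrimPrefix(v, "v")
	var out [3]int
	for i, part := range strings.SplitN(v, ".", 3) {
		n, _ := strconv.Atoi(strings.SplitN(part, "-", 2)[0]) // drop "-alpha" etc.
		out[i] = n
	}
	return out
}

func main() {
	// The Ceph secret controller runs only when the server is < v1.11.0.
	fmt.Println(olderThan("v1.10.3", "v1.11.0")) // true
	fmt.Println(olderThan("v1.12.1", "v1.11.0")) // false
}
```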
func (ctl *NamespaceCtl) generateObject(item *v1.Namespace) *Namespace {
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
createTime := item.CreationTimestamp.Time
@@ -257,20 +391,22 @@ func (ctl *NamespaceCtl) generateObject(item v1.Namespace) *Namespace {
createTime = time.Now()
}
object := &Namespace{
Name: name,
DisplayName: displayName,
CreateTime: createTime,
Status: status,
Annotation: MapString{item.Annotations},
}
return object
}
func (ctl *NamespaceCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *NamespaceCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Namespace{}) {
@@ -279,37 +415,53 @@ func (ctl *NamespaceCtl) listAndWatch() {
db = db.CreateTable(&Namespace{})
ctl.initListerAndInformer()
//list, err := ctl.lister.List(labels.Everything())
//if err != nil {
// glog.Error(err)
// return
//}
//for _, item := range list {
// obj := ctl.generateObject(item)
// db.Create(obj)
// ctl.createRoleAndRuntime(item)
//}
ctl.informer.Run(stopChan)
}
func (ctl *NamespaceCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason: %s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *NamespaceCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Core().V1().Namespaces().Lister()
informer := informerFactory.Core().V1().Namespaces().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.Namespace)
mysqlObject := ctl.generateObject(*object)
mysqlObject := ctl.generateObject(object)
db.Create(mysqlObject)
ctl.createRoleAndRuntime(*object)
ctl.createRoleAndRuntime(object)
ctl.createCephSecretAfterNewNs(*object)
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1.Namespace)
mysqlObject := ctl.generateObject(*object)
mysqlObject := ctl.generateObject(object)
db.Save(mysqlObject)
ctl.createRoleAndRuntime(*object)
ctl.createRoleAndRuntime(object)
},
DeleteFunc: func(obj interface{}) {
var item Namespace
@@ -317,11 +469,10 @@ func (ctl *NamespaceCtl) listAndWatch() {
db.Where("name=?", object.Name).Find(&item)
db.Delete(item)
ctl.deleteOpRuntime(*object)
},
})
ctl.informer = informer
}
func (ctl *NamespaceCtl) CountWithConditions(conditions string) int {
@@ -330,42 +481,39 @@ func (ctl *NamespaceCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *NamespaceCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Namespace
var object Namespace
var total int
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
if paging != nil {
for index := range list {
usage, err := ctl.GetNamespaceQuota(list[index].Name)
if err == nil {
list[index].Usage = usage
}
}
}
return total, list, nil
}
func getUsage(namespace, resource string) int {
ctl := ResourceControllers.Controllers[resource]
return ctl.CountWithConditions(fmt.Sprintf("namespace = '%s' ", namespace))
}
func (ctl *NamespaceCtl) GetNamespaceQuota(namespace string) (v1.ResourceList, error) {
usage := make(v1.ResourceList)
resourceList := []string{Daemonsets, Deployments, Ingresses, Roles, Services, Statefulsets, PersistentVolumeClaim, Pods, Jobs, Cronjobs}
for _, resourceName := range resourceList {
used := getUsage(namespace, resourceName)
var quantity resource.Quantity
@@ -373,10 +521,15 @@ func (ctl *NamespaceCtl) GetNamespaceQuota(namespace string) (v1.ResourceList, e
usage[v1.ResourceName(resourceName)] = quantity
}
podCtl := ResourceControllers.Controllers[Pods]
var quantity resource.Quantity
used := podCtl.CountWithConditions(fmt.Sprintf("status=\"%s\" And namespace=\"%s\"", "Running", namespace))
quantity.Set(int64(used))
usage["runningPods"] = quantity
return usage, nil
}
func (ctl *NamespaceCtl) Lister() interface{} {
return ctl.lister
}


@@ -0,0 +1,188 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"time"
"strings"
"github.com/golang/glog"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
)
const NodeRoleLabel = "node-role.kubernetes.io/"
func (ctl *NodeCtl) generateObject(item v1.Node) *Node {
var status, ip, displayName, msgStr string
var msg, role []string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
createTime := item.ObjectMeta.CreationTimestamp.Time
annotation := item.Annotations
// in case of multiple roles
for label := range item.Labels {
if strings.HasPrefix(label, NodeRoleLabel) {
if parts := strings.Split(label, "/"); len(parts) == 2 {
role = append(role, parts[1])
}
}
}
for _, condition := range item.Status.Conditions {
if condition.Type == "Ready" {
if condition.Status == "True" {
status = Running
} else {
status = Error
}
} else {
if condition.Status == "True" {
msg = append(msg, condition.Reason)
}
}
}
if len(msg) > 0 {
msgStr = strings.Join(msg, ",")
if status == Running {
status = Warning
}
}
for _, address := range item.Status.Addresses {
if address.Type == "InternalIP" {
ip = address.Address
}
}
object := &Node{
Name: name,
DisplayName: displayName,
Ip: ip,
Status: status,
CreateTime: createTime,
Annotation: MapString{annotation},
Taints: Taints{item.Spec.Taints},
Msg: msgStr,
Role: strings.Join(role, ","),
Labels: MapString{item.Labels}}
return object
}
func (ctl *NodeCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *NodeCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Node{}) {
db.DropTable(&Node{})
}
db = db.CreateTable(&Node{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
}
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *NodeCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason: %s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *NodeCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Core().V1().Nodes().Lister()
informer := informerFactory.Core().V1().Nodes().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.Node)
mysqlObject := ctl.generateObject(*object)
db.Create(mysqlObject)
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1.Node)
mysqlObject := ctl.generateObject(*object)
db.Save(mysqlObject)
},
DeleteFunc: func(obj interface{}) {
var item Node
object := obj.(*v1.Node)
db.Where("name=?", object.Name).Find(&item)
db.Delete(item)
},
})
ctl.informer = informer
}
func (ctl *NodeCtl) CountWithConditions(conditions string) int {
var object Node
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *NodeCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Node
var object Node
var total int
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *NodeCtl) Lister() interface{} {
return ctl.lister
}


@@ -166,58 +166,51 @@ func getStatusAndRestartCount(pod v1.Pod) (string, int) {
}
func (ctl *PodCtl) generateObject(item v1.Pod) *Pod {
var ownerKind, ownerName string
// For Pods created by a ReplicaSet, ReplicationController, DaemonSet, StatefulSet,
// Job or CronJob, Kubernetes sets the ownerReference automatically, so it does not
// need to be set manually.
if len(item.OwnerReferences) > 0 {
ownerKind = item.OwnerReferences[0].Kind
ownerName = item.OwnerReferences[0].Name
}
object := &Pod{
Namespace: item.Namespace,
Name: item.Name,
Node: item.Spec.NodeName,
Status: item.Status,
CreateTime: item.CreationTimestamp.Time,
OwnerKind: ownerKind,
OwnerName: ownerName,
Spec: item.Spec,
Metadata: item.ObjectMeta,
Kind: item.TypeMeta.Kind,
APIVersion: item.TypeMeta.APIVersion,
}
return object
}
func (ctl *PodCtl) listAndWatch() {
func (ctl *PodCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *PodCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Pod{}) {
db.DropTable(&Pod{})
}
db = db.CreateTable(&Pod{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
}
for _, item := range list {
@@ -225,6 +218,26 @@ func (ctl *PodCtl) listAndWatch() {
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *PodCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason: %s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *PodCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Core().V1().Pods().Lister()
informer := informerFactory.Core().V1().Pods().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.Pod)
@@ -249,7 +262,7 @@ func (ctl *PodCtl) listAndWatch() {
},
})
ctl.informer = informer
}
func (ctl *PodCtl) CountWithConditions(conditions string) int {
@@ -258,25 +271,21 @@ func (ctl *PodCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *PodCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Pod
var object Pod
var total int
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *PodCtl) Lister() interface{} {
return ctl.lister
}


@@ -30,10 +30,20 @@ import (
)
func (ctl *PvcCtl) generateObject(item *v1.PersistentVolumeClaim) *Pvc {
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
namespace := item.Namespace
createTime := item.CreationTimestamp.Time
status := fmt.Sprintf("%s", item.Status.Phase)
if item.DeletionTimestamp != nil {
status = "Terminating"
}
var capacity, storageClass, accessModeStr string
if createTime.IsZero() {
@@ -58,36 +68,37 @@ func (ctl *PvcCtl) generateObject(item *v1.PersistentVolumeClaim) *Pvc {
accessModeStr = strings.Join(accessModeList, ",")
object := &Pvc{
Namespace: namespace,
Name: name,
DisplayName: displayName,
Status: status,
Capacity: capacity,
AccessMode: accessModeStr,
StorageClassName: storageClass,
CreateTime: createTime,
Annotation: MapString{item.Annotations},
Labels: MapString{item.Labels},
}
return object
}
func (ctl *PvcCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *PvcCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Pvc{}) {
db.DropTable(&Pvc{})
}
db = db.CreateTable(&Pvc{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -96,9 +107,28 @@ func (ctl *PvcCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *PvcCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason: %s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *PvcCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Core().V1().PersistentVolumeClaims().Lister()
informer := informerFactory.Core().V1().PersistentVolumeClaims().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -119,7 +149,7 @@ func (ctl *PvcCtl) listAndWatch() {
},
})
ctl.informer = informer
}
func (ctl *PvcCtl) CountWithConditions(conditions string) int {
@@ -128,12 +158,14 @@ func (ctl *PvcCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *PvcCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Pvc
var object Pvc
var total int
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
@@ -153,13 +185,7 @@ func (ctl *PvcCtl) ListWithConditions(conditions string, paging *Paging) (int, i
return total, list, nil
}
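The refactor threads an `order` parameter through every `ListWithConditions` implementation and falls back to sorting by creation time when the caller passes none. A minimal sketch of that fallback (the helper name is hypothetical; the diff inlines the check in each controller):

```go
package main

import "fmt"

// defaultOrder mirrors the fallback used by the refactored
// ListWithConditions methods: an empty order string defaults
// to sorting by creation time, newest first.
func defaultOrder(order string) string {
	if len(order) == 0 {
		return "createTime desc"
	}
	return order
}

func main() {
	fmt.Println(defaultOrder(""))         // createTime desc
	fmt.Println(defaultOrder("name asc")) // name asc
}
```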
func (ctl *PvcCtl) Count(namespace string) int {
var count int
db := ctl.DB
if len(namespace) == 0 {
db.Model(&Pvc{}).Count(&count)
} else {
db.Model(&Pvc{}).Where("namespace = ?", namespace).Count(&count)
}
return count
func (ctl *PvcCtl) Lister() interface{} {
return ctl.lister
}


@@ -0,0 +1,64 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"time"
"k8s.io/client-go/informers"
)
func (ctl *ReplicaSetCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *ReplicaSetCtl) sync(stopChan chan struct{}) {
ctl.initListerAndInformer()
ctl.informer.Run(stopChan)
}
func (ctl *ReplicaSetCtl) total() int {
return 0
}
func (ctl *ReplicaSetCtl) initListerAndInformer() {
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Apps().V1().ReplicaSets().Lister()
informer := informerFactory.Apps().V1().ReplicaSets().Informer()
ctl.informer = informer
}
func (ctl *ReplicaSetCtl) CountWithConditions(conditions string) int {
return 0
}
func (ctl *ReplicaSetCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
return 0, nil, nil
}
func (ctl *ReplicaSetCtl) Lister() interface{} {
return ctl.lister
}


@@ -0,0 +1,57 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"time"
"github.com/pkg/errors"
"k8s.io/client-go/informers"
)
func (ctl *RoleBindingCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *RoleBindingCtl) sync(stopChan chan struct{}) {
ctl.initListerAndInformer()
ctl.informer.Run(stopChan)
}
func (ctl *RoleBindingCtl) total() int {
return 0
}
func (ctl *RoleBindingCtl) initListerAndInformer() {
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Rbac().V1().RoleBindings().Lister()
ctl.informer = informerFactory.Rbac().V1().RoleBindings().Informer()
}
func (ctl *RoleBindingCtl) CountWithConditions(conditions string) int {
return 0
}
func (ctl *RoleBindingCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
return 0, nil, errors.New("not implemented")
}
func (ctl *RoleBindingCtl) Lister() interface{} {
return ctl.lister
}


@@ -28,8 +28,14 @@ import (
)
func (ctl *RoleCtl) generateObject(item v1.Role) *Role {
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
if strings.HasPrefix(name, "system:") {
if strings.HasPrefix(name, systemPrefix) || item.Annotations == nil || len(item.Annotations[creator]) == 0 {
return nil
}
namespace := item.Namespace
@@ -38,35 +44,32 @@ func (ctl *RoleCtl) generateObject(item v1.Role) *Role {
createTime = time.Now()
}
object := &Role{Namespace: namespace, Name: name, CreateTime: createTime, Annotation: Annotation{item.Annotations}}
object := &Role{
Namespace: namespace,
Name: name,
DisplayName: displayName,
CreateTime: createTime,
Annotation: MapString{item.Annotations},
}
return object
}
func (ctl *RoleCtl) listAndWatch() {
defer func() {
close(ctl.aliveChan)
if err := recover(); err != nil {
glog.Error(err)
return
}
}()
func (ctl *RoleCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *RoleCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Role{}) {
db.DropTable(&Role{})
}
db = db.CreateTable(&Role{})
k8sClient := ctl.K8sClient
kubeInformerFactory := informers.NewSharedInformerFactory(k8sClient, time.Second*resyncCircle)
informer := kubeInformerFactory.Rbac().V1().Roles().Informer()
lister := kubeInformerFactory.Rbac().V1().Roles().Lister()
list, err := lister.List(labels.Everything())
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -74,10 +77,39 @@ func (ctl *RoleCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
if obj != nil {
db.Create(obj)
}
}
ctl.informer.Run(stopChan)
}
func (ctl *RoleCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
count := 0
for _, item := range list {
if !strings.HasPrefix(item.Name, systemPrefix) && item.Annotations != nil && len(item.Annotations[creator]) > 0 {
count++
}
}
return count
}
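`RoleCtl.total()` above no longer counts every role: roles with the `system:` prefix or without a `creator` annotation are skipped, matching the filter in `generateObject`. A self-contained sketch of that filter (the helper and its inputs are hypothetical; the diff iterates lister results instead):

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed constants matching the diff's systemPrefix / creator keys.
const (
	systemPrefix = "system:"
	creator      = "creator"
)

// countUserRoles mirrors RoleCtl.total(): a role is counted only
// if it lacks the "system:" prefix and carries a creator annotation.
func countUserRoles(names []string, annotations []map[string]string) int {
	count := 0
	for i, name := range names {
		if strings.HasPrefix(name, systemPrefix) {
			continue
		}
		if annotations[i] == nil || len(annotations[i][creator]) == 0 {
			continue
		}
		count++
	}
	return count
}

func main() {
	names := []string{"system:controller", "admin", "viewer"}
	annos := []map[string]string{nil, {"creator": "alice"}, {}}
	fmt.Println(countUserRoles(names, annos)) // only "admin" qualifies
}
```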
func (ctl *RoleCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Rbac().V1().Roles().Lister()
informer := informerFactory.Rbac().V1().Roles().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -103,7 +135,7 @@ func (ctl *RoleCtl) listAndWatch() {
},
})
informer.Run(ctl.stopChan)
ctl.informer = informer
}
func (ctl *RoleCtl) CountWithConditions(conditions string) int {
@@ -112,21 +144,21 @@ func (ctl *RoleCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *RoleCtl) ListWithConditions(conditions string, paging *Paging) (int, interface{}, error) {
func (ctl *RoleCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Role
var object Role
var total int
order := "createTime desc"
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *RoleCtl) Count(namespace string) int {
var count int
db := ctl.DB
db.Model(&Role{}).Where("namespace = ?", namespace).Count(&count)
return count
func (ctl *RoleCtl) Lister() interface{} {
return ctl.lister
}


@@ -17,59 +17,86 @@ limitations under the License.
package controllers
import (
"errors"
"fmt"
"time"
"github.com/golang/glog"
"github.com/jinzhu/gorm"
"k8s.io/client-go/kubernetes"
"os"
"sync"
"syscall"
"kubesphere.io/kubesphere/pkg/client"
)
type resourceControllers struct {
controllers map[string]Controller
Controllers map[string]Controller
k8sClient *kubernetes.Clientset
}
var stopChan chan struct{}
var rec resourceControllers
var ResourceControllers resourceControllers
func (rec *resourceControllers) runContoller(name string) {
func (rec *resourceControllers) runController(name string, stopChan chan struct{}, wg *sync.WaitGroup) {
var ctl Controller
attr := CommonAttribute{DB: client.NewDBClient(), K8sClient: rec.k8sClient, stopChan: stopChan, aliveChan: make(chan struct{})}
attr := CommonAttribute{DB: client.NewDBClient(), K8sClient: rec.k8sClient, stopChan: stopChan,
aliveChan: make(chan struct{}), Name: name}
switch name {
case Deployments:
ctl = &DeploymentCtl{attr}
ctl = &DeploymentCtl{CommonAttribute: attr}
case Statefulsets:
ctl = &StatefulsetCtl{attr}
ctl = &StatefulsetCtl{CommonAttribute: attr}
case Daemonsets:
ctl = &DaemonsetCtl{attr}
ctl = &DaemonsetCtl{CommonAttribute: attr}
case Ingresses:
ctl = &IngressCtl{attr}
ctl = &IngressCtl{CommonAttribute: attr}
case PersistentVolumeClaim:
ctl = &PvcCtl{attr}
ctl = &PvcCtl{CommonAttribute: attr}
case Roles:
ctl = &RoleCtl{attr}
ctl = &RoleCtl{CommonAttribute: attr}
case ClusterRoles:
ctl = &ClusterRoleCtl{attr}
ctl = &ClusterRoleCtl{CommonAttribute: attr}
case Services:
ctl = &ServiceCtl{attr}
ctl = &ServiceCtl{CommonAttribute: attr}
case Pods:
ctl = &PodCtl{attr}
ctl = &PodCtl{CommonAttribute: attr}
case Namespaces:
ctl = &NamespaceCtl{attr}
ctl = &NamespaceCtl{CommonAttribute: attr}
case StorageClasses:
ctl = &StorageClassCtl{attr}
ctl = &StorageClassCtl{CommonAttribute: attr}
case Jobs:
ctl = &JobCtl{CommonAttribute: attr}
case Cronjobs:
ctl = &CronJobCtl{CommonAttribute: attr}
case Nodes:
ctl = &NodeCtl{CommonAttribute: attr}
case Replicasets:
ctl = &ReplicaSetCtl{CommonAttribute: attr}
case ControllerRevisions:
ctl = &ControllerRevisionCtl{CommonAttribute: attr}
case ConfigMaps:
ctl = &ConfigMapCtl{CommonAttribute: attr}
case Secrets:
ctl = &SecretCtl{CommonAttribute: attr}
case ClusterRoleBindings:
ctl = &ClusterRoleBindingCtl{CommonAttribute: attr}
case RoleBindings:
ctl = &RoleBindingCtl{CommonAttribute: attr}
default:
return
}
rec.controllers[name] = ctl
go ctl.listAndWatch()
rec.Controllers[name] = ctl
wg.Add(1)
go listAndWatch(ctl, wg)
}
func dbHealthCheck(db *gorm.DB) {
defer db.Close()
for {
count := 0
var err error
@@ -78,41 +105,52 @@ func dbHealthCheck(db *gorm.DB) {
if err != nil {
count++
}
time.Sleep(1 * time.Second)
time.Sleep(5 * time.Second)
}
if count > 3 {
panic(err)
syscall.Kill(os.Getpid(), syscall.SIGTERM)
}
}
}
func Run() {
func Run(stopChan chan struct{}, wg *sync.WaitGroup) {
defer wg.Done()
stopChan := make(chan struct{})
defer close(stopChan)
rec = resourceControllers{k8sClient: client.NewK8sClient(), controllers: make(map[string]Controller)}
k8sClient := client.NewK8sClient()
ResourceControllers = resourceControllers{k8sClient: k8sClient, Controllers: make(map[string]Controller)}
for _, item := range []string{Deployments, Statefulsets, Daemonsets, PersistentVolumeClaim, Pods, Services,
Ingresses, Roles, ClusterRoles, Namespaces, StorageClasses} {
rec.runContoller(item)
Ingresses, Roles, RoleBindings, ClusterRoles, ClusterRoleBindings, Namespaces, StorageClasses, Jobs, Cronjobs, Nodes, Replicasets,
ControllerRevisions, ConfigMaps, Secrets} {
ResourceControllers.runController(item, stopChan, wg)
}
go dbHealthCheck(client.NewDBClient())
for {
for ctlName, controller := range rec.controllers {
for ctlName, controller := range ResourceControllers.Controllers {
select {
case <-stopChan:
return
case _, isClose := <-controller.chanAlive():
if !isClose {
glog.Errorf("controller %s has stopped, restarting it", ctlName)
rec.runContoller(ctlName)
ResourceControllers.runController(ctlName, stopChan, wg)
}
default:
time.Sleep(5 * time.Second)
time.Sleep(3 * time.Second)
}
}
}
}
func GetLister(controller string) (interface{}, error) {
if ctl, ok := ResourceControllers.Controllers[controller]; ok {
if ctl.Lister() != nil {
return ctl.Lister(), nil
}
}
return nil, fmt.Errorf("lister of %s is not alive", controller)
}
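`GetLister` exposes a controller's lister only when the controller is registered and its lister has been initialized. A stripped-down sketch of that lookup against an assumed in-memory registry (the `controller` type here is hypothetical; the real map holds `Controller` interface values):

```go
package main

import "fmt"

// controller is a minimal stand-in for the Controller interface:
// its lister is nil until initListerAndInformer has run.
type controller struct{ lister interface{} }

var controllers = map[string]*controller{}

// getLister mirrors GetLister: a missing controller and an
// uninitialized lister are both reported as "not alive".
func getLister(name string) (interface{}, error) {
	if ctl, ok := controllers[name]; ok && ctl.lister != nil {
		return ctl.lister, nil
	}
	return nil, fmt.Errorf("lister of %s is not alive", name)
}

func main() {
	controllers["pods"] = &controller{lister: "podLister"}
	l, err := getLister("pods")
	fmt.Println(l, err) // podLister <nil>
	_, err = getLister("services")
	fmt.Println(err) // lister of services is not alive
}
```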


@@ -0,0 +1,157 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"strings"
"time"
"github.com/golang/glog"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
)
func (ctl *SecretCtl) generateObject(item v1.Secret) *Secret {
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
createTime := item.CreationTimestamp.Time
if createTime.IsZero() {
createTime = time.Now()
}
object := &Secret{
Name: item.Name,
Namespace: item.Namespace,
CreateTime: createTime,
Annotation: MapString{item.Annotations},
DisplayName: displayName,
Entries: len(item.Data),
Type: string(item.Type),
}
return object
}
func (ctl *SecretCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *SecretCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Secret{}) {
db.DropTable(&Secret{})
}
db = db.CreateTable(&Secret{})
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
}
for _, item := range list {
obj := ctl.generateObject(*item)
if obj != nil {
db.Create(obj)
}
}
ctl.informer.Run(stopChan)
}
func (ctl *SecretCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *SecretCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Core().V1().Secrets().Lister()
informer := informerFactory.Core().V1().Secrets().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.Secret)
mysqlObject := ctl.generateObject(*object)
if mysqlObject != nil {
db.Create(mysqlObject)
}
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1.Secret)
mysqlObject := ctl.generateObject(*object)
if mysqlObject != nil {
db.Save(mysqlObject)
}
},
DeleteFunc: func(obj interface{}) {
var item Secret
object := obj.(*v1.Secret)
db.Where("name=?", object.Name).Find(&item)
db.Delete(item)
},
})
ctl.informer = informer
}
func (ctl *SecretCtl) CountWithConditions(conditions string) int {
var object Secret
if strings.Contains(conditions, "namespace") {
conditions = ""
}
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *SecretCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var object Secret
var list []Secret
var total int
if len(order) == 0 {
order = "createTime desc"
}
db := ctl.DB
listWithConditions(db, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *SecretCtl) Lister() interface{} {
return ctl.lister
}


@@ -26,7 +26,7 @@ import (
"k8s.io/client-go/tools/cache"
)
func (ctl *ServiceCtl) loadBalancerStatusStringer(item v1.Service) string {
func loadBalancerStatusStringer(item v1.Service) string {
ingress := item.Status.LoadBalancer.Ingress
result := sets.NewString()
for i := range ingress {
@@ -41,7 +41,7 @@ func (ctl *ServiceCtl) loadBalancerStatusStringer(item v1.Service) string {
return r
}
func (ctl *ServiceCtl) getExternalIp(item v1.Service) string {
func getExternalIp(item v1.Service) string {
switch item.Spec.Type {
case "ClusterIP", "NodePort":
if len(item.Spec.ExternalIPs) > 0 {
@@ -51,7 +51,7 @@ func (ctl *ServiceCtl) getExternalIp(item v1.Service) string {
return item.Spec.ExternalName
case "LoadBalancer":
lbIps := ctl.loadBalancerStatusStringer(item)
lbIps := loadBalancerStatusStringer(item)
if len(item.Spec.ExternalIPs) > 0 {
results := []string{}
if len(lbIps) > 0 {
@@ -68,14 +68,26 @@ func (ctl *ServiceCtl) getExternalIp(item v1.Service) string {
return ""
}
func (ctl *ServiceCtl) generateObject(item v1.Service) *Service {
func generateSvcObject(item v1.Service) *Service {
var app string
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
namespace := item.Namespace
createTime := item.CreationTimestamp.Time
externalIp := ctl.getExternalIp(item)
externalIp := getExternalIp(item)
serviceType := item.Spec.Type
vip := item.Spec.ClusterIP
release := item.ObjectMeta.Labels["release"]
chart := item.ObjectMeta.Labels["chart"]
if len(release) > 0 && len(chart) > 0 {
app = release + "/" + chart
}
ports := ""
var nodePorts []string
@@ -83,8 +95,8 @@ func (ctl *ServiceCtl) generateObject(item v1.Service) *Service {
createTime = time.Now()
}
if len(item.Spec.ClusterIP) == 0 {
if len(item.Spec.Selector) == 0 {
if len(item.Spec.ClusterIP) == 0 || item.Spec.ClusterIP == "None" {
if len(item.Spec.Selector) != 0 {
serviceType = "Headless(Selector)"
}
@@ -119,27 +131,31 @@ func (ctl *ServiceCtl) generateObject(item v1.Service) *Service {
object := &Service{
Namespace: namespace,
Name: name,
DisplayName: displayName,
ServiceType: string(serviceType),
ExternalIp: externalIp,
VirtualIp: vip,
CreateTime: createTime,
Ports: ports,
NodePorts: strings.Join(nodePorts, ","),
Annotation: Annotation{item.Annotations},
Annotation: MapString{item.Annotations},
Labels: MapString{item.Labels},
App: app,
}
return object
}
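The corrected condition in `generateSvcObject` treats a service as headless when its cluster IP is empty or the literal `"None"` and it still has a selector — the old code inverted the selector check. A self-contained sketch of the fixed classification (function name and parameters are hypothetical; the diff works on a `v1.Service` directly):

```go
package main

import "fmt"

// classifyService mirrors the corrected headless check: no cluster
// IP (or ClusterIP "None") plus a non-empty selector means the
// service is headless with selector-based endpoints.
func classifyService(clusterIP string, selector map[string]string, declaredType string) string {
	if (len(clusterIP) == 0 || clusterIP == "None") && len(selector) != 0 {
		return "Headless(Selector)"
	}
	return declaredType
}

func main() {
	fmt.Println(classifyService("None", map[string]string{"app": "db"}, "ClusterIP")) // Headless(Selector)
	fmt.Println(classifyService("10.0.0.1", map[string]string{"app": "db"}, "ClusterIP")) // ClusterIP
}
```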
func (ctl *ServiceCtl) listAndWatch() {
defer func() {
close(ctl.aliveChan)
if err := recover(); err != nil {
glog.Error(err)
return
}
}()
func (ctl *ServiceCtl) generateObject(item v1.Service) *Service {
return generateSvcObject(item)
}
func (ctl *ServiceCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *ServiceCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Service{}) {
@@ -148,12 +164,8 @@ func (ctl *ServiceCtl) listAndWatch() {
db = db.CreateTable(&Service{})
k8sClient := ctl.K8sClient
kubeInformerFactory := informers.NewSharedInformerFactory(k8sClient, time.Second*resyncCircle)
informer := kubeInformerFactory.Core().V1().Services().Informer()
lister := kubeInformerFactory.Core().V1().Services().Lister()
list, err := lister.List(labels.Everything())
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -164,6 +176,25 @@ func (ctl *ServiceCtl) listAndWatch() {
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *ServiceCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *ServiceCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Core().V1().Services().Lister()
informer := informerFactory.Core().V1().Services().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -185,7 +216,7 @@ func (ctl *ServiceCtl) listAndWatch() {
},
})
informer.Run(ctl.stopChan)
ctl.informer = informer
}
func (ctl *ServiceCtl) CountWithConditions(conditions string) int {
@@ -194,25 +225,21 @@ func (ctl *ServiceCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *ServiceCtl) ListWithConditions(conditions string, paging *Paging) (int, interface{}, error) {
func (ctl *ServiceCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Service
var object Service
var total int
order := "createTime desc"
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *ServiceCtl) Count(namespace string) int {
var count int
db := ctl.DB
if len(namespace) == 0 {
db.Model(&Service{}).Count(&count)
} else {
db.Model(&Service{}).Where("namespace = ?", namespace).Count(&count)
}
return count
func (ctl *ServiceCtl) Lister() interface{} {
return ctl.lister
}


@@ -30,6 +30,11 @@ import (
func (ctl *StatefulsetCtl) generateObject(item v1.StatefulSet) *Statefulset {
var app string
var status string
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
namespace := item.Namespace
availablePodNum := item.Status.ReadyReplicas
@@ -56,33 +61,37 @@ func (ctl *StatefulsetCtl) generateObject(item v1.StatefulSet) *Statefulset {
}
}
statefulSetObject := &Statefulset{Namespace: namespace, Name: name, Available: availablePodNum, Desire: desirePodNum,
App: app, CreateTime: createTime, Status: status, Annotation: Annotation{item.Annotations}}
statefulSetObject := &Statefulset{
Namespace: namespace,
Name: name,
DisplayName: displayName,
Available: availablePodNum,
Desire: desirePodNum,
App: app,
CreateTime: createTime,
Status: status,
Annotation: MapString{item.Annotations},
Labels: MapString{item.Spec.Selector.MatchLabels},
}
return statefulSetObject
}
func (ctl *StatefulsetCtl) listAndWatch() {
defer func() {
close(ctl.aliveChan)
if err := recover(); err != nil {
glog.Error(err)
return
}
}()
func (ctl *StatefulsetCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *StatefulsetCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&Statefulset{}) {
db.DropTable(&Statefulset{})
}
db = db.CreateTable(&Statefulset{})
k8sClient := ctl.K8sClient
kubeInformerFactory := informers.NewSharedInformerFactory(k8sClient, time.Second*resyncCircle)
informer := kubeInformerFactory.Apps().V1().StatefulSets().Informer()
lister := kubeInformerFactory.Apps().V1().StatefulSets().Lister()
list, err := lister.List(labels.Everything())
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -91,9 +100,28 @@ func (ctl *StatefulsetCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *StatefulsetCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *StatefulsetCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Apps().V1().StatefulSets().Lister()
informer := informerFactory.Apps().V1().StatefulSets().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
@@ -115,7 +143,7 @@ func (ctl *StatefulsetCtl) listAndWatch() {
},
})
informer.Run(ctl.stopChan)
ctl.informer = informer
}
func (ctl *StatefulsetCtl) CountWithConditions(conditions string) int {
@@ -124,25 +152,21 @@ func (ctl *StatefulsetCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *StatefulsetCtl) ListWithConditions(conditions string, paging *Paging) (int, interface{}, error) {
func (ctl *StatefulsetCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []Statefulset
var object Statefulset
var total int
order := "createTime desc"
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
return total, list, nil
}
func (ctl *StatefulsetCtl) Count(namespace string) int {
var count int
db := ctl.DB
if len(namespace) == 0 {
db.Model(&Statefulset{}).Count(&count)
} else {
db.Model(&Statefulset{}).Where("namespace = ?", namespace).Count(&count)
}
return count
func (ctl *StatefulsetCtl) Lister() interface{} {
return ctl.lister
}


@@ -20,19 +20,36 @@ import (
"fmt"
"time"
"github.com/golang/glog"
"k8s.io/api/storage/v1"
utilversion "k8s.io/kubernetes/pkg/util/version"
"github.com/golang/glog"
coreV1 "k8s.io/api/core/v1"
"k8s.io/api/storage/v1"
"k8s.io/apimachinery/pkg/api/errors"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/informers"
"k8s.io/client-go/tools/cache"
"k8s.io/kubernetes/pkg/apis/core"
)
const (
rbdPluginName = "kubernetes.io/rbd"
rbdUserSecretNameKey = "userSecretName"
)
func (ctl *StorageClassCtl) generateObject(item v1.StorageClass) *StorageClass {
var displayName string
if item.Annotations != nil && len(item.Annotations[DisplayName]) > 0 {
displayName = item.Annotations[DisplayName]
}
name := item.Name
createTime := item.CreationTimestamp.Time
isDefault := false
provisioner := item.Provisioner
if item.Annotations["storageclass.beta.kubernetes.io/is-default-class"] == "true" {
isDefault = true
}
@@ -41,20 +58,23 @@ func (ctl *StorageClassCtl) generateObject(item v1.StorageClass) *StorageClass {
createTime = time.Now()
}
object := &StorageClass{Name: name, CreateTime: createTime, IsDefault: isDefault, Annotation: Annotation{item.Annotations}}
object := &StorageClass{
Name: name,
DisplayName: displayName,
CreateTime: createTime,
IsDefault: isDefault,
Annotation: MapString{item.Annotations},
Provisioner: provisioner,
}
return object
}
func (ctl *StorageClassCtl) listAndWatch() {
defer func() {
close(ctl.aliveChan)
if err := recover(); err != nil {
glog.Error(err)
return
}
}()
func (ctl *StorageClassCtl) Name() string {
return ctl.CommonAttribute.Name
}
func (ctl *StorageClassCtl) sync(stopChan chan struct{}) {
db := ctl.DB
if db.HasTable(&StorageClass{}) {
@@ -63,12 +83,8 @@ func (ctl *StorageClassCtl) listAndWatch() {
db = db.CreateTable(&StorageClass{})
k8sClient := ctl.K8sClient
kubeInformerFactory := informers.NewSharedInformerFactory(k8sClient, time.Second*resyncCircle)
informer := kubeInformerFactory.Storage().V1().StorageClasses().Informer()
lister := kubeInformerFactory.Storage().V1().StorageClasses().Lister()
list, err := lister.List(labels.Everything())
ctl.initListerAndInformer()
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Error(err)
return
@@ -77,15 +93,123 @@ func (ctl *StorageClassCtl) listAndWatch() {
for _, item := range list {
obj := ctl.generateObject(*item)
db.Create(obj)
}
ctl.informer.Run(stopChan)
}
func (ctl *StorageClassCtl) total() int {
list, err := ctl.lister.List(labels.Everything())
if err != nil {
glog.Errorf("count %s failed, reason:%s", ctl.Name(), err)
return 0
}
return len(list)
}
func (ctl *StorageClassCtl) createCephSecretAfterNewSc(item v1.StorageClass) {
// Kubernetes version must < 1.11.0
verInfo, err := ctl.K8sClient.ServerVersion()
if err != nil {
glog.Error("consult k8s server error: ", err)
return
}
if !utilversion.MustParseSemantic(verInfo.String()).LessThan(utilversion.MustParseSemantic("v1.11.0")) {
glog.Infof("disable Ceph secret controller due to k8s version %s >= v1.11.0", verInfo.String())
return
}
// Find Ceph secret in the new storage class
if item.Provisioner != rbdPluginName {
return
}
var secret *coreV1.Secret
if secretName, ok := item.Parameters[rbdUserSecretNameKey]; ok {
secret, err = ctl.K8sClient.CoreV1().Secrets(core.NamespaceSystem).Get(secretName, metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
glog.Errorf("cannot find secret %s in namespace %s", secretName, core.NamespaceSystem)
return
}
glog.Error("failed to find secret, error: ", err)
return
}
glog.Infof("succeed to find secret %s in namespace %s", secret.GetName(), secret.GetNamespace())
} else {
glog.Errorf("failed to find user secret name in storage class %s", item.GetName())
return
}
// Create or update Ceph secret in each namespace
nsList, err := ctl.K8sClient.CoreV1().Namespaces().List(metaV1.ListOptions{})
if err != nil {
glog.Error("failed to list namespace, error: ", err)
return
}
for _, ns := range nsList.Items {
if ns.GetName() == core.NamespaceSystem {
glog.Infof("skip creating Ceph secret in namespace %s", core.NamespaceSystem)
continue
}
newSecret := &coreV1.Secret{
TypeMeta: metaV1.TypeMeta{
Kind: secret.Kind,
APIVersion: secret.APIVersion,
},
ObjectMeta: metaV1.ObjectMeta{
Name: secret.GetName(),
Namespace: ns.GetName(),
Labels: secret.GetLabels(),
Annotations: secret.GetAnnotations(),
DeletionGracePeriodSeconds: secret.GetDeletionGracePeriodSeconds(),
ClusterName: secret.GetClusterName(),
},
Data: secret.Data,
StringData: secret.StringData,
Type: secret.Type,
}
_, err := ctl.K8sClient.CoreV1().Secrets(newSecret.GetNamespace()).Get(newSecret.GetName(), metaV1.GetOptions{})
if err != nil {
if errors.IsNotFound(err) {
// Create secret
_, err := ctl.K8sClient.CoreV1().Secrets(newSecret.GetNamespace()).Create(newSecret)
if err != nil {
glog.Errorf("failed to create secret in namespace %s, error: %v", newSecret.GetNamespace(), err)
} else {
glog.Infof("succeed to create secret %s in namespace %s", newSecret.GetName(),
newSecret.GetNamespace())
}
} else {
glog.Errorf("failed to find secret in namespace %s, error: %v", newSecret.GetNamespace(), err)
}
} else {
// Update secret
_, err = ctl.K8sClient.CoreV1().Secrets(newSecret.GetNamespace()).Update(newSecret)
if err != nil {
glog.Errorf("failed to update secret in namespace %s, error: %v", newSecret.GetNamespace(), err)
continue
} else {
glog.Infof("succeed to update secret %s in namespace %s", newSecret.GetName(), newSecret.GetNamespace())
}
}
}
}
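`createCephSecretAfterNewSc` only runs when the API server reports a version below v1.11.0, using `utilversion.MustParseSemantic` for the comparison. A dependency-free sketch of that gate, assuming plain `vMAJOR.MINOR.PATCH` strings (the real code parses full semantic versions, including suffixes this sketch rejects):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// olderThan111 is a minimal stand-in for the utilversion check:
// the Ceph secret replication is enabled only below v1.11.0.
// Versions that fail to parse are conservatively treated as new.
func olderThan111(version string) bool {
	parts := strings.SplitN(strings.TrimPrefix(version, "v"), ".", 3)
	if len(parts) < 2 {
		return false
	}
	major, err1 := strconv.Atoi(parts[0])
	minor, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return major < 1 || (major == 1 && minor < 11)
}

func main() {
	fmt.Println(olderThan111("v1.10.3")) // true: controller enabled
	fmt.Println(olderThan111("v1.11.0")) // false: controller disabled
}
```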
func (ctl *StorageClassCtl) initListerAndInformer() {
db := ctl.DB
informerFactory := informers.NewSharedInformerFactory(ctl.K8sClient, time.Second*resyncCircle)
ctl.lister = informerFactory.Storage().V1().StorageClasses().Lister()
informer := informerFactory.Storage().V1().StorageClasses().Informer()
informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
object := obj.(*v1.StorageClass)
mysqlObject := ctl.generateObject(*object)
db.Create(mysqlObject)
ctl.createCephSecretAfterNewSc(*object)
},
UpdateFunc: func(old, new interface{}) {
object := new.(*v1.StorageClass)
@@ -101,8 +225,7 @@ func (ctl *StorageClassCtl) listAndWatch() {
},
})
informer.Run(ctl.stopChan)
ctl.informer = informer
}
func (ctl *StorageClassCtl) CountWithConditions(conditions string) int {
@@ -111,18 +234,20 @@ func (ctl *StorageClassCtl) CountWithConditions(conditions string) int {
return countWithConditions(ctl.DB, conditions, &object)
}
func (ctl *StorageClassCtl) ListWithConditions(conditions string, paging *Paging) (int, interface{}, error) {
func (ctl *StorageClassCtl) ListWithConditions(conditions string, paging *Paging, order string) (int, interface{}, error) {
var list []StorageClass
var object StorageClass
var total int
order := "createTime desc"
if len(order) == 0 {
order = "createTime desc"
}
listWithConditions(ctl.DB, &total, &object, &list, conditions, paging, order)
for index, storageClass := range list {
name := storageClass.Name
pvcCtl := PvcCtl{CommonAttribute{K8sClient: ctl.K8sClient, DB: ctl.DB}}
pvcCtl := ResourceControllers.Controllers[PersistentVolumeClaim]
list[index].Count = pvcCtl.CountWithConditions(fmt.Sprintf("storage_class=\"%s\"", name))
}
@@ -130,9 +255,7 @@ func (ctl *StorageClassCtl) ListWithConditions(conditions string, paging *Paging
return total, list, nil
}
func (ctl *StorageClassCtl) Count(name string) int {
var count int
db := ctl.DB
db.Model(&StorageClass{}).Count(&count)
return count
func (ctl *StorageClassCtl) Lister() interface{} {
return ctl.lister
}


@@ -19,6 +19,9 @@ package controllers
import (
"time"
"github.com/golang/glog"
v12 "k8s.io/apimachinery/pkg/apis/meta/v1"
"database/sql/driver"
"encoding/json"
"errors"
@@ -26,25 +29,30 @@ import (
"github.com/jinzhu/gorm"
"k8s.io/api/core/v1"
"k8s.io/client-go/kubernetes"
appV1 "k8s.io/client-go/listers/apps/v1"
batchv1 "k8s.io/client-go/listers/batch/v1"
batchv1beta1 "k8s.io/client-go/listers/batch/v1beta1"
coreV1 "k8s.io/client-go/listers/core/v1"
"k8s.io/client-go/listers/extensions/v1beta1"
rbacV1 "k8s.io/client-go/listers/rbac/v1"
storageV1 "k8s.io/client-go/listers/storage/v1"
"k8s.io/client-go/tools/cache"
)
const (
resyncCircle = 180
Stopped = "stopped"
PvcPending = "Pending"
Running = "running"
Updating = "updating"
tablePods = "pods"
tableDeployments = "deployments"
tableDaemonsets = "daemonsets"
tableStatefulsets = "statefulsets"
tableNamespaces = "namespaces"
tableIngresses = "ingresses"
tablePersistentVolumeClaim = "pvcs"
tableRoles = "roles"
tableClusterRoles = "cluster_roles"
tableServices = "services"
tableStorageClasses = "storage_classes"
resyncCircle = 600
Stopped = "stopped"
PvcPending = "pending"
Running = "running"
Updating = "updating"
Failed = "failed"
Unfinished = "unfinished"
Completed = "completed"
Pause = "pause"
Warning = "warning"
Error = "error"
DisplayName = "displayName"
creator = "creator"
Pods = "pods"
Deployments = "deployments"
@@ -54,16 +62,26 @@ const (
Ingresses = "ingresses"
PersistentVolumeClaim = "persistent-volume-claims"
Roles = "roles"
RoleBindings = "role-bindings"
ClusterRoles = "cluster-roles"
ClusterRoleBindings = "cluster-role-bindings"
Services = "services"
StorageClasses = "storage-classes"
Applications = "applications"
Jobs = "jobs"
Cronjobs = "cronjobs"
Nodes = "nodes"
Replicasets = "replicasets"
ControllerRevisions = "controllerrevisions"
ConfigMaps = "configmaps"
Secrets = "secrets"
)
type Annotation struct {
Values map[string]string `gorm:"type:TEXT"`
type MapString struct {
Values map[string]string `json:"values" gorm:"type:TEXT"`
}
func (annotation *Annotation) Scan(val interface{}) error {
func (annotation *MapString) Scan(val interface{}) error {
switch val := val.(type) {
case string:
return json.Unmarshal([]byte(val), annotation)
@@ -75,118 +93,185 @@ func (annotation *Annotation) Scan(val interface{}) error {
return nil
}
func (annotation Annotation) Value() (driver.Value, error) {
func (annotation MapString) Value() (driver.Value, error) {
bytes, err := json.Marshal(annotation)
return string(bytes), err
}
type Deployment struct {
Name string `gorm:"primary_key" json:"name"`
Namespace string `gorm:"primary_key" json:"namespace"`
App string `json:"app,omitempty"`
Available int32 `json:"available"`
Desire int32 `json:"desire"`
Status string `json:"status"`
Annotation Annotation `json:"annotations"`
UpdateTime time.Time `gorm:"column:updateTime" json:"updateTime,omitempty"`
type Taints struct {
Values []v1.Taint `json:"values" gorm:"type:TEXT"`
}
func (Deployment) TableName() string {
return tableDeployments
func (taints *Taints) Scan(val interface{}) error {
switch val := val.(type) {
case string:
return json.Unmarshal([]byte(val), taints)
case []byte:
return json.Unmarshal(val, taints)
default:
return errors.New("not supported")
}
return nil
}
func (taints Taints) Value() (driver.Value, error) {
bytes, err := json.Marshal(taints)
return string(bytes), err
}
type Deployment struct {
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace"`
App string `json:"app,omitempty"`
Available int32 `json:"available"`
Desire int32 `json:"desire"`
Status string `json:"status"`
Labels MapString `json:"labels"`
Annotation MapString `json:"annotations"`
UpdateTime time.Time `gorm:"column:updateTime" json:"updateTime,omitempty"`
}
type Statefulset struct {
Name string `gorm:"primary_key" json:"name,omitempty"`
Namespace string `gorm:"primary_key" json:"namespace,omitempty"`
App string `json:"app,omitempty"`
Name string `gorm:"primary_key" json:"name,omitempty"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace,omitempty"`
App string `json:"app,omitempty"`
Available int32 `json:"available"`
Desire int32 `json:"desire"`
Status string `json:"status"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
func (Statefulset) TableName() string {
return tableStatefulsets
Available int32 `json:"available"`
Desire int32 `json:"desire"`
Status string `json:"status"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
type Daemonset struct {
Name string `gorm:"primary_key" json:"name,omitempty"`
Namespace string `gorm:"primary_key" json:"namespace,omitempty"`
App string `json:"app,omitempty"`
Name string `gorm:"primary_key" json:"name,omitempty"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace,omitempty"`
App string `json:"app,omitempty"`
Available int32 `json:"available"`
Desire int32 `json:"desire"`
Status string `json:"status"`
NodeSelector string `json:"nodeSelector, omitempty"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
func (Daemonset) TableName() string {
return tableDaemonsets
Available int32 `json:"available"`
Desire int32 `json:"desire"`
Status string `json:"status"`
NodeSelector string `json:"nodeSelector,omitempty"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
type Service struct {
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace"`
ServiceType string `json:"type,omitempty"`
ServiceType string `gorm:"column:type" json:"type,omitempty"`
VirtualIp string `json:"virtualIp,omitempty"`
ExternalIp string `json:"externalIp,omitempty"`
App string `json:"app,omitempty"`
VirtualIp string `gorm:"column:virtualIp" json:"virtualIp,omitempty"`
ExternalIp string `gorm:"column:externalIp" json:"externalIp,omitempty"`
Ports string `json:"ports,omitempty"`
NodePorts string `json:"nodePorts,omitempty"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
func (Service) TableName() string {
return tableServices
Ports string `json:"ports,omitempty"`
NodePorts string `gorm:"column:nodePorts" json:"nodePorts,omitempty"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
type Pvc struct {
Name string `gorm:"primary_key" json:"name"`
Namespace string `gorm:"primary_key" json:"namespace"`
Status string `json:"status,omitempty"`
Capacity string `json:"capacity,omitempty"`
AccessMode string `json:"accessMode,omitempty"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
StorageClassName string `gorm:"column:storage_class" json:"storage_class,omitempty"`
InUse bool `gorm:"-" json:"inUse"`
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace"`
Status string `json:"status,omitempty"`
Capacity string `json:"capacity,omitempty"`
AccessMode string `gorm:"column:accessMode" json:"accessMode,omitempty"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
StorageClassName string `gorm:"column:storage_class" json:"storage_class,omitempty"`
InUse bool `gorm:"column:inUse" json:"inUse"`
}
func (Pvc) TableName() string {
return tablePersistentVolumeClaim
type ingressRule struct {
Host string `json:"host"`
Path string `json:"path"`
Service string `json:"service"`
Port int32 `json:"port"`
}
type Ingress struct {
Name string `gorm:"primary_key" json:"name"`
Namespace string `gorm:"primary_key" json:"namespace"`
Ip string `json:"ip,omitempty"`
TlsTermination string `json:"tlsTermination,omitempty"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
func (Ingress) TableName() string {
return tableIngresses
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace"`
Ip string `json:"ip,omitempty"`
Rules string `gorm:"type:text" json:"rules,omitempty"`
TlsTermination string `gorm:"column:tlsTermination" json:"tlsTermination,omitempty"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
type Pod struct {
Name string `gorm:"primary_key" json:"name"`
Namespace string `gorm:"primary_key" json:"namespace"`
Status string `json:"status,omitempty"`
Node string `json:"node,omitempty"`
NodeIp string `json:"nodeIp,omitempty"`
PodIp string `json:"podIp,omitempty"`
Containers Containers `gorm:"type:text" json:"containers,omitempty"`
Annotation Annotation `json:"annotations"`
RestartCount int `json:"restartCount"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
// search and sort fields, not returned in the response
Name string `gorm:"primary_key" json:"-"`
Namespace string `gorm:"primary_key" json:"-"`
Node string `json:"-"`
OwnerKind string `gorm:"column:ownerKind" json:"-"`
OwnerName string `gorm:"column:ownerName" json:"-"`
CreateTime time.Time `gorm:"column:createTime" json:"-"`
// Kubernetes Standard Pod Specification
Kind string `json:"kind,omitempty"`
APIVersion string `gorm:"column:apiVersion" json:"apiVersion,omitempty"`
Spec v1.PodSpec `sql:"-" json:"spec,omitempty"`
Metadata v12.ObjectMeta `sql:"-" json:"metadata,omitempty"`
Status v1.PodStatus `sql:"-" json:"status,omitempty"`
// shadow fields, used only by the database layer
MetadataString string `gorm:"column:metadata;type:text" json:"-"`
SpecString string `gorm:"column:podSpec;type:text" json:"-"`
StatusString string `gorm:"column:status;type:text" json:"-"`
}
func (pod *Pod) AfterFind(scope *gorm.Scope) (err error) {
if err = json.Unmarshal([]byte(pod.SpecString), &pod.Spec); err != nil {
glog.Errorln(err)
}
if err = json.Unmarshal([]byte(pod.MetadataString), &pod.Metadata); err != nil {
glog.Errorln(err)
}
if err = json.Unmarshal([]byte(pod.StatusString), &pod.Status); err != nil {
glog.Errorln(err)
}
return nil
}
func (pod *Pod) BeforeSave(scope *gorm.Scope) (err error) {
if bytes, err := json.Marshal(pod.Spec); err == nil {
pod.SpecString = string(bytes)
} else {
glog.Errorln(err)
}
if bytes, err := json.Marshal(pod.Metadata); err == nil {
pod.MetadataString = string(bytes)
} else {
glog.Errorln(err)
}
if bytes, err := json.Marshal(pod.Status); err == nil {
pod.StatusString = string(bytes)
} else {
glog.Errorln(err)
}
return nil
}
type Container struct {
@@ -215,74 +300,137 @@ func (containers Containers) Value() (driver.Value, error) {
return string(bytes), err
}
func (Pod) TableName() string {
return tablePods
}
type Role struct {
Name string `gorm:"primary_key" json:"name"`
Namespace string `gorm:"primary_key" json:"namespace"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
func (Role) TableName() string {
return tableRoles
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace"`
Annotation MapString `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
type ClusterRole struct {
Name string `gorm:"primary_key" json:"name"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
func (ClusterRole) TableName() string {
return tableClusterRoles
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Annotation MapString `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
type Namespace struct {
Name string `gorm:"primary_key" json:"name"`
Creator string `json:"creator,omitempty"`
Status string `json:"status"`
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Creator string `json:"creator,omitempty"`
Status string `json:"status"`
Descrition string `json:"description,omitempty"`
Annotation Annotation `json:"annotations"`
Annotation MapString `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
Usaeg v1.ResourceList `gorm:"-" json:"usage,omitempty"`
}
func (Namespace) TableName() string {
return tableNamespaces
Usage v1.ResourceList `gorm:"-" json:"usage,omitempty"`
}
type StorageClass struct {
Name string `gorm:"primary_key" json:"name"`
Creator string `json:"creator,omitempty"`
Annotation Annotation `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
IsDefault bool `json:"default"`
Count int `json:"count"`
Name string `gorm:"primary_key" json:"name"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Creator string `json:"creator,omitempty"`
Annotation MapString `json:"annotations"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
IsDefault bool `json:"default"`
Count int `json:"count"`
Provisioner string `json:"provisioner"`
}
func (StorageClass) TableName() string {
return tableStorageClasses
type JobRevisions map[int]JobRevision
type JobRevision struct {
Status string `json:"status"`
Reasons []string `json:"reasons"`
Messages []string `json:"messages"`
Succeed int32 `json:"succeed"`
DesirePodNum int32 `json:"desire"`
Failed int32 `json:"failed"`
Uid string `json:"uid"`
StartTime time.Time `json:"start-time"`
CompletionTime time.Time `json:"completion-time"`
}
type Job struct {
Name string `gorm:"primary_key" json:"name,omitempty"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace,omitempty"`
Completed int32 `json:"completed"`
Desire int32 `json:"desire"`
Status string `json:"status"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
UpdateTime time.Time `gorm:"column:updateTime" json:"updateTime,omitempty"`
}
type CronJob struct {
Name string `gorm:"primary_key" json:"name,omitempty"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Namespace string `gorm:"primary_key" json:"namespace,omitempty"`
Active int `json:"active"`
Schedule string `json:"schedule"`
Status string `json:"status"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
LastScheduleTime *time.Time `gorm:"column:lastScheduleTime" json:"lastScheduleTime,omitempty"`
}
type Node struct {
Name string `gorm:"primary_key" json:"name,omitempty"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
Ip string `json:"ip"`
Status string `json:"status"`
Annotation MapString `json:"annotations"`
Labels MapString `json:"labels"`
Taints Taints `json:"taints"`
Msg string `json:"msg"`
Role string `json:"role"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
}
type ConfigMap struct {
Name string `gorm:"primary_key" json:"name"`
Namespace string `gorm:"primary_key" json:"namespace"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
Annotation MapString `json:"annotations"`
Entries string `gorm:"type:text" json:"entries"`
}
type Secret struct {
Name string `gorm:"primary_key" json:"name"`
Namespace string `gorm:"primary_key" json:"namespace"`
DisplayName string `json:"displayName,omitempty" gorm:"column:displayName"`
CreateTime time.Time `gorm:"column:createTime" json:"createTime,omitempty"`
Annotation MapString `json:"annotations"`
Entries int `json:"entries"`
Type string `json:"type"`
}
type Paging struct {
Limit, Offset int
Limit, Offset, Page int
}
type Controller interface {
listAndWatch()
chanStop() chan struct{}
chanAlive() chan struct{}
Count(namespace string) int
CountWithConditions(condition string) int
ListWithConditions(condition string, paging *Paging) (int, interface{}, error)
total() int
initListerAndInformer()
sync(stopChan chan struct{})
Name() string
CloseDB()
Lister() interface{}
ListWithConditions(condition string, paging *Paging, order string) (int, interface{}, error)
}
type CommonAttribute struct {
K8sClient *kubernetes.Clientset
Name string
DB *gorm.DB
stopChan chan struct{}
aliveChan chan struct{}
@@ -298,46 +446,126 @@ func (ca *CommonAttribute) chanAlive() chan struct{} {
return ca.aliveChan
}
func (ca *CommonAttribute) CloseDB() {
ca.DB.Close()
}
type DeploymentCtl struct {
CommonAttribute
lister appV1.DeploymentLister
informer cache.SharedIndexInformer
}
type StatefulsetCtl struct {
CommonAttribute
lister appV1.StatefulSetLister
informer cache.SharedIndexInformer
}
type DaemonsetCtl struct {
CommonAttribute
lister appV1.DaemonSetLister
informer cache.SharedIndexInformer
}
type ServiceCtl struct {
CommonAttribute
lister coreV1.ServiceLister
informer cache.SharedIndexInformer
}
type PvcCtl struct {
CommonAttribute
lister coreV1.PersistentVolumeClaimLister
informer cache.SharedIndexInformer
}
type PodCtl struct {
CommonAttribute
lister coreV1.PodLister
informer cache.SharedIndexInformer
}
type IngressCtl struct {
lister v1beta1.IngressLister
informer cache.SharedIndexInformer
CommonAttribute
}
type NamespaceCtl struct {
CommonAttribute
lister coreV1.NamespaceLister
informer cache.SharedIndexInformer
}
type StorageClassCtl struct {
lister storageV1.StorageClassLister
informer cache.SharedIndexInformer
CommonAttribute
}
type RoleCtl struct {
lister rbacV1.RoleLister
informer cache.SharedIndexInformer
CommonAttribute
}
type ClusterRoleCtl struct {
lister rbacV1.ClusterRoleLister
informer cache.SharedIndexInformer
CommonAttribute
}
type ClusterRoleBindingCtl struct {
lister rbacV1.ClusterRoleBindingLister
informer cache.SharedIndexInformer
CommonAttribute
}
type RoleBindingCtl struct {
lister rbacV1.RoleBindingLister
informer cache.SharedIndexInformer
CommonAttribute
}
type JobCtl struct {
lister batchv1.JobLister
informer cache.SharedIndexInformer
CommonAttribute
}
type CronJobCtl struct {
lister batchv1beta1.CronJobLister
informer cache.SharedIndexInformer
CommonAttribute
}
type NodeCtl struct {
lister coreV1.NodeLister
informer cache.SharedIndexInformer
CommonAttribute
}
type ReplicaSetCtl struct {
lister appV1.ReplicaSetLister
informer cache.SharedIndexInformer
CommonAttribute
}
type ControllerRevisionCtl struct {
lister appV1.ControllerRevisionLister
informer cache.SharedIndexInformer
CommonAttribute
}
type ConfigMapCtl struct {
lister coreV1.ConfigMapLister
informer cache.SharedIndexInformer
CommonAttribute
}
type SecretCtl struct {
lister coreV1.SecretLister
informer cache.SharedIndexInformer
CommonAttribute
}

View File

@@ -1,18 +1,162 @@
package iam
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"strings"
"github.com/golang/glog"
"k8s.io/api/rbac/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
v12 "k8s.io/client-go/listers/rbac/v1"
"k8s.io/kubernetes/pkg/util/slice"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/models/controllers"
ksErr "kubesphere.io/kubesphere/pkg/util/errors"
)
const ClusterRoleKind = "ClusterRole"
// Get user list based on workspace role
func WorkspaceRoleUsers(workspace string, roleName string) ([]User, error) {
lister, err := controllers.GetLister(controllers.ClusterRoleBindings)
if err != nil {
return nil, err
}
clusterRoleBindingLister := lister.(v12.ClusterRoleBindingLister)
workspaceRoleBinding, err := clusterRoleBindingLister.Get(fmt.Sprintf("system:%s:%s", workspace, roleName))
if err != nil {
return nil, err
}
names := make([]string, 0)
for _, subject := range workspaceRoleBinding.Subjects {
if subject.Kind == v1.UserKind {
names = append(names, subject.Name)
}
}
users, err := GetUsers(names)
if err != nil {
return nil, err
}
for i := 0; i < len(users); i++ {
users[i].WorkspaceRole = roleName
}
return users, nil
}
func GetUsers(names []string) ([]User, error) {
var users []User
if names == nil || len(names) == 0 {
return make([]User, 0), nil
}
result, err := http.Get(fmt.Sprintf("http://%s/apis/account.kubesphere.io/v1alpha1/users?name=%s", constants.AccountAPIServer, strings.Join(names, ",")))
if err != nil {
return nil, err
}
defer result.Body.Close()
data, err := ioutil.ReadAll(result.Body)
if err != nil {
return nil, err
}
if result.StatusCode > 200 {
return nil, ksErr.Wrap(data)
}
err = json.Unmarshal(data, &users)
if err != nil {
return nil, err
}
return users, nil
}
func GetUser(name string) (*User, error) {
result, err := http.Get(fmt.Sprintf("http://%s/apis/account.kubesphere.io/v1alpha1/users/%s", constants.AccountAPIServer, name))
if err != nil {
return nil, err
}
defer result.Body.Close()
data, err := ioutil.ReadAll(result.Body)
if err != nil {
return nil, err
}
if result.StatusCode > 200 {
return nil, ksErr.Wrap(data)
}
var user User
err = json.Unmarshal(data, &user)
if err != nil {
return nil, err
}
return &user, nil
}
// Get the rules of a workspace role
func WorkspaceRoleRules(workspace string, roleName string) (*v1.ClusterRole, []Rule, error) {
clusterRoleName := fmt.Sprintf("system:%s:%s", workspace, roleName)
workspaceRole, err := GetClusterRole(clusterRoleName)
if err != nil {
return nil, nil, err
}
for i := 0; i < len(workspaceRole.Rules); i++ {
workspaceRole.Rules[i].ResourceNames = nil
}
rules := make([]Rule, 0)
for i := 0; i < len(WorkspaceRoleRuleMapping); i++ {
rule := Rule{Name: WorkspaceRoleRuleMapping[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < len(WorkspaceRoleRuleMapping[i].Actions); j++ {
if rulesMatchesAction(workspaceRole.Rules, WorkspaceRoleRuleMapping[i].Actions[j]) {
rule.Actions = append(rule.Actions, WorkspaceRoleRuleMapping[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
workspaceRole.Name = roleName
return workspaceRole, rules, nil
}
func GetUserNamespaces(username string, requiredRule v1.PolicyRule) (allNamespace bool, namespaces []string, err error) {
clusterRoles, err := GetClusterRoles(username)
@@ -27,18 +171,22 @@ func GetUserNamespaces(username string, requiredRule v1.PolicyRule) (allNamespac
}
if requiredRule.Size() == 0 {
if ruleValidate(clusterRules, v1.PolicyRule{
Verbs: []string{"get", "list"},
APIGroups: []string{""},
Resources: []string{"namespaces"},
if RulesMatchesRequired(clusterRules, v1.PolicyRule{
Verbs: []string{"get"},
APIGroups: []string{"kubesphere.io"},
Resources: []string{"workspaces/namespaces"},
}) {
return true, nil, nil
}
} else if ruleValidate(clusterRules, requiredRule) {
return true, nil, nil
} else {
if RulesMatchesRequired(clusterRules, requiredRule) {
return true, nil, nil
}
}
roles, err := GetRoles(username)
roles, err := GetRoles("", username)
if err != nil {
return false, nil, err
@@ -58,7 +206,7 @@ func GetUserNamespaces(username string, requiredRule v1.PolicyRule) (allNamespac
namespaces = make([]string, 0)
for namespace, rules := range rulesMapping {
if requiredRule.Size() == 0 || ruleValidate(rules, requiredRule) {
if requiredRule.Size() == 0 || RulesMatchesRequired(rules, requiredRule) {
namespaces = append(namespaces, namespace)
}
}
@@ -66,70 +214,55 @@ func GetUserNamespaces(username string, requiredRule v1.PolicyRule) (allNamespac
return false, namespaces, nil
}
func DeleteRoleBindings(username string) error {
k8s := client.NewK8sClient()
roleBindings, err := k8s.RbacV1().RoleBindings("").List(meta_v1.ListOptions{})
if err != nil {
return err
}
for _, roleBinding := range roleBindings.Items {
length1 := len(roleBinding.Subjects)
for index, subject := range roleBinding.Subjects {
if subject.Kind == v1.UserKind && subject.Name == username {
roleBinding.Subjects = append(roleBinding.Subjects[:index], roleBinding.Subjects[index+1:]...)
index--
}
}
length2 := len(roleBinding.Subjects)
if length2 == 0 {
k8s.RbacV1().RoleBindings(roleBinding.Namespace).Delete(roleBinding.Name, &meta_v1.DeleteOptions{})
} else if length2 < length1 {
k8s.RbacV1().RoleBindings(roleBinding.Namespace).Update(&roleBinding)
}
}
clusterRoleBindingList, err := k8s.RbacV1().ClusterRoleBindings().List(meta_v1.ListOptions{})
for _, roleBinding := range clusterRoleBindingList.Items {
length1 := len(roleBinding.Subjects)
for index, subject := range roleBinding.Subjects {
if subject.Kind == v1.UserKind && subject.Name == username {
roleBinding.Subjects = append(roleBinding.Subjects[:index], roleBinding.Subjects[index+1:]...)
index--
}
}
length2 := len(roleBinding.Subjects)
if length2 == 0 {
k8s.RbacV1().ClusterRoleBindings().Delete(roleBinding.Name, &meta_v1.DeleteOptions{})
} else if length2 < length1 {
k8s.RbacV1().ClusterRoleBindings().Update(&roleBinding)
}
}
return nil
}
func GetRole(namespace string, name string) (*v1.Role, error) {
k8s := client.NewK8sClient()
role, err := k8s.RbacV1().Roles(namespace).Get(name, meta_v1.GetOptions{})
lister, err := controllers.GetLister(controllers.Roles)
if err != nil {
return nil, err
}
return role, nil
roleLister := lister.(v12.RoleLister)
role, err := roleLister.Roles(namespace).Get(name)
if err != nil {
return nil, err
}
return role.DeepCopy(), nil
}
func GetWorkspaceUsers(workspace string, workspaceRole string) ([]string, error) {
lister, err := controllers.GetLister(controllers.ClusterRoleBindings)
if err != nil {
return nil, err
}
clusterRoleBindingLister := lister.(v12.ClusterRoleBindingLister)
clusterRoleBinding, err := clusterRoleBindingLister.Get(fmt.Sprintf("system:%s:%s", workspace, workspaceRole))
if err != nil {
return nil, err
}
users := make([]string, 0)
for _, s := range clusterRoleBinding.Subjects {
if s.Kind == v1.UserKind && !slice.ContainsString(users, s.Name, nil) {
users = append(users, s.Name)
}
}
return users, nil
}
func GetClusterRoleBindings(name string) ([]v1.ClusterRoleBinding, error) {
k8s := client.NewK8sClient()
roleBindingList, err := k8s.RbacV1().ClusterRoleBindings().List(meta_v1.ListOptions{})
lister, err := controllers.GetLister(controllers.ClusterRoleBindings)
if err != nil {
return nil, err
}
clusterRoleBindingLister := lister.(v12.ClusterRoleBindingLister)
clusterRoleBindings, err := clusterRoleBindingLister.List(labels.Everything())
if err != nil {
return nil, err
@@ -137,9 +270,9 @@ func GetClusterRoleBindings(name string) ([]v1.ClusterRoleBinding, error) {
items := make([]v1.ClusterRoleBinding, 0)
for _, roleBinding := range roleBindingList.Items {
if roleBinding.RoleRef.Name == name {
items = append(items, roleBinding)
for _, clusterRoleBinding := range clusterRoleBindings {
if clusterRoleBinding.RoleRef.Name == name {
items = append(items, *clusterRoleBinding)
}
}
@@ -147,9 +280,15 @@ func GetClusterRoleBindings(name string) ([]v1.ClusterRoleBinding, error) {
}
func GetRoleBindings(namespace string, name string) ([]v1.RoleBinding, error) {
k8s := client.NewK8sClient()
lister, err := controllers.GetLister(controllers.RoleBindings)
roleBindingList, err := k8s.RbacV1().RoleBindings(namespace).List(meta_v1.ListOptions{})
if err != nil {
return nil, err
}
roleBindingLister := lister.(v12.RoleBindingLister)
roleBindings, err := roleBindingLister.RoleBindings(namespace).List(labels.Everything())
if err != nil {
return nil, err
@@ -157,9 +296,9 @@ func GetRoleBindings(namespace string, name string) ([]v1.RoleBinding, error) {
items := make([]v1.RoleBinding, 0)
for _, roleBinding := range roleBindingList.Items {
for _, roleBinding := range roleBindings {
if roleBinding.RoleRef.Name == name {
items = append(items, roleBinding)
items = append(items, *roleBinding)
}
}
@@ -167,30 +306,61 @@ func GetRoleBindings(namespace string, name string) ([]v1.RoleBinding, error) {
}
func GetClusterRole(name string) (*v1.ClusterRole, error) {
k8s := client.NewK8sClient()
role, err := k8s.RbacV1().ClusterRoles().Get(name, meta_v1.GetOptions{})
lister, err := controllers.GetLister(controllers.ClusterRoles)
if err != nil {
return nil, err
}
return role, nil
clusterRoleLister := lister.(v12.ClusterRoleLister)
role, err := clusterRoleLister.Get(name)
if err != nil {
return nil, err
}
return role.DeepCopy(), nil
}
func GetRoles(username string) ([]v1.Role, error) {
k8s := client.NewK8sClient()
roleBindings, err := k8s.RbacV1().RoleBindings("").List(meta_v1.ListOptions{})
func GetRoles(namespace string, username string) ([]v1.Role, error) {
lister, err := controllers.GetLister(controllers.RoleBindings)
if err != nil {
return nil, err
}
roleBindingLister := lister.(v12.RoleBindingLister)
lister, err = controllers.GetLister(controllers.Roles)
if err != nil {
return nil, err
}
roleLister := lister.(v12.RoleLister)
lister, err = controllers.GetLister(controllers.ClusterRoles)
if err != nil {
return nil, err
}
clusterRoleLister := lister.(v12.ClusterRoleLister)
roleBindings, err := roleBindingLister.RoleBindings(namespace).List(labels.Everything())
if err != nil {
return nil, err
}
roles := make([]v1.Role, 0)
for _, roleBinding := range roleBindings.Items {
for _, roleBinding := range roleBindings {
for _, subject := range roleBinding.Subjects {
if subject.Kind == v1.UserKind && subject.Name == username {
if roleBinding.RoleRef.Kind == ClusterRoleKind {
clusterRole, err := k8s.RbacV1().ClusterRoles().Get(roleBinding.RoleRef.Name, meta_v1.GetOptions{})
clusterRole, err := clusterRoleLister.Get(roleBinding.RoleRef.Name)
if err == nil {
var role = v1.Role{TypeMeta: (*clusterRole).TypeMeta, ObjectMeta: (*clusterRole).ObjectMeta, Rules: (*clusterRole).Rules}
role.Namespace = roleBinding.Namespace
@@ -205,9 +375,9 @@ func GetRoles(username string) ([]v1.Role, error) {
} else {
if subject.Kind == v1.UserKind && subject.Name == username {
rule, err := k8s.RbacV1().Roles(roleBinding.Namespace).Get(roleBinding.RoleRef.Name, meta_v1.GetOptions{})
role, err := roleLister.Roles(roleBinding.Namespace).Get(roleBinding.RoleRef.Name)
if err == nil {
roles = append(roles, *rule)
roles = append(roles, *role)
break
} else if apierrors.IsNotFound(err) {
glog.Infoln(err.Error())
@@ -227,10 +397,26 @@ func GetRoles(username string) ([]v1.Role, error) {
return roles, nil
}
// Get cluster roles by username
func GetClusterRoles(username string) ([]v1.ClusterRole, error) {
k8s := client.NewK8sClient()
clusterRoleBindings, err := k8s.RbacV1().ClusterRoleBindings().List(meta_v1.ListOptions{})
lister, err := controllers.GetLister(controllers.ClusterRoleBindings)
if err != nil {
return nil, err
}
clusterRoleBindingLister := lister.(v12.ClusterRoleBindingLister)
lister, err = controllers.GetLister(controllers.ClusterRoles)
if err != nil {
return nil, err
}
clusterRoleLister := lister.(v12.ClusterRoleLister)
clusterRoleBindings, err := clusterRoleBindingLister.List(labels.Everything())
if err != nil {
return nil, err
@@ -238,20 +424,25 @@ func GetClusterRoles(username string) ([]v1.ClusterRole, error) {
roles := make([]v1.ClusterRole, 0)
for _, roleBinding := range clusterRoleBindings.Items {
for _, roleBinding := range clusterRoleBindings {
for _, subject := range roleBinding.Subjects {
if subject.Kind == v1.UserKind && subject.Name == username {
if roleBinding.RoleRef.Kind == ClusterRoleKind {
role, err := k8s.RbacV1().ClusterRoles().Get(roleBinding.RoleRef.Name, meta_v1.GetOptions{})
role, err := clusterRoleLister.Get(roleBinding.RoleRef.Name)
if err == nil {
role = role.DeepCopy()
if role.Annotations == nil {
role.Annotations = make(map[string]string, 0)
}
role.Annotations["rbac.authorization.k8s.io/clusterrolebinding"] = roleBinding.Name
if roleBinding.Annotations != nil &&
roleBinding.Annotations["rbac.authorization.k8s.io/clusterrole"] == roleBinding.RoleRef.Name {
role.Annotations["rbac.authorization.k8s.io/clusterrole"] = "true"
}
roles = append(roles, *role)
break
} else if apierrors.IsNotFound(err) {
glog.Infoln(err.Error())
glog.Warning(err)
break
} else {
return nil, err
@@ -264,34 +455,189 @@ func GetClusterRoles(username string) ([]v1.ClusterRole, error) {
return roles, nil
}
func GetUserRules(username string) (map[string][]Rule, error) {
items := make(map[string][]Rule, 0)
userRoles, err := GetRoles("", username)
if err != nil {
return nil, err
}
rulesMapping := make(map[string][]v1.PolicyRule, 0)
for _, role := range userRoles {
rules := rulesMapping[role.Namespace]
if rules == nil {
rules = make([]v1.PolicyRule, 0)
}
rules = append(rules, role.Rules...)
rulesMapping[role.Namespace] = rules
}
for namespace, policyRules := range rulesMapping {
rules := convertToRules(policyRules)
if len(rules) > 0 {
items[namespace] = rules
}
}
return items, nil
}
func convertToRules(policyRules []v1.PolicyRule) []Rule {
rules := make([]Rule, 0)
for i := 0; i < (len(RoleRuleMapping)); i++ {
rule := Rule{Name: RoleRuleMapping[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < (len(RoleRuleMapping[i].Actions)); j++ {
if rulesMatchesAction(policyRules, RoleRuleMapping[i].Actions[j]) {
rule.Actions = append(rule.Actions, RoleRuleMapping[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules
}
func GetUserClusterRules(username string) ([]Rule, error) {
rules := make([]Rule, 0)
clusterRoles, err := GetClusterRoles(username)
if err != nil {
return nil, err
}
clusterRules := make([]v1.PolicyRule, 0)
for _, role := range clusterRoles {
clusterRules = append(clusterRules, role.Rules...)
}
for i := 0; i < (len(ClusterRoleRuleMapping)); i++ {
rule := Rule{Name: ClusterRoleRuleMapping[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < (len(ClusterRoleRuleMapping[i].Actions)); j++ {
if rulesMatchesAction(clusterRules, ClusterRoleRuleMapping[i].Actions[j]) {
rule.Actions = append(rule.Actions, ClusterRoleRuleMapping[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules, nil
}
func GetClusterRoleRules(name string) ([]Rule, error) {
clusterRole, err := GetClusterRole(name)
if err != nil {
return nil, err
}
rules := make([]Rule, 0)
for i := 0; i < len(ClusterRoleRuleMapping); i++ {
rule := Rule{Name: ClusterRoleRuleMapping[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < (len(ClusterRoleRuleMapping[i].Actions)); j++ {
if rulesMatchesAction(clusterRole.Rules, ClusterRoleRuleMapping[i].Actions[j]) {
rule.Actions = append(rule.Actions, ClusterRoleRuleMapping[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules, nil
}
func GetRoleRules(namespace string, name string) ([]Rule, error) {
role, err := GetRole(namespace, name)
if err != nil {
return nil, err
}
rules := make([]Rule, 0)
for i := 0; i < len(RoleRuleMapping); i++ {
rule := Rule{Name: RoleRuleMapping[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < len(RoleRuleMapping[i].Actions); j++ {
if rulesMatchesAction(role.Rules, RoleRuleMapping[i].Actions[j]) {
rule.Actions = append(rule.Actions, RoleRuleMapping[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules, nil
}
func rulesMatchesAction(rules []v1.PolicyRule, action Action) bool {
for _, rule := range action.Rules {
if !RulesMatchesRequired(rules, rule) {
return false
}
}
return true
}
func RulesMatchesRequired(rules []v1.PolicyRule, required v1.PolicyRule) bool {
for _, rule := range rules {
if ruleMatchesRequired(rule, required) {
return true
}
}
return false
}
func ruleMatchesRequired(rule v1.PolicyRule, required v1.PolicyRule) bool {
if len(required.NonResourceURLs) == 0 {
for _, apiGroup := range required.APIGroups {
for _, resource := range required.Resources {
resources := strings.Split(resource, "/")
resource = resources[0]
var subsource string
if len(resources) > 1 {
subsource = resources[1]
}
if len(required.ResourceNames) == 0 {
for _, verb := range required.Verbs {
if !ruleMatchesRequest(rule, apiGroup, "", resource, subsource, "", verb) {
return false
}
}
} else {
for _, resourceName := range required.ResourceNames {
for _, verb := range required.Verbs {
if !ruleMatchesRequest(rule, apiGroup, "", resource, subsource, resourceName, verb) {
return false
}
}
}
}
}
}
} else {
for _, apiGroup := range required.APIGroups {
for _, nonResourceURL := range required.NonResourceURLs {
for _, verb := range required.Verbs {
if !ruleMatchesRequest(rule, apiGroup, nonResourceURL, "", "", "", verb) {
return false
}
}
@@ -301,22 +647,94 @@ func ruleValidate(rules []v1.PolicyRule, rule v1.PolicyRule) bool {
return true
}
func verbValidate(rules []v1.PolicyRule, apiGroup string, nonResourceURL string, resource string, resourceName string, verb string) bool {
for _, rule := range rules {
if slice.ContainsString(rule.APIGroups, apiGroup, nil) || slice.ContainsString(rule.APIGroups, v1.APIGroupAll, nil) {
if slice.ContainsString(rule.Verbs, verb, nil) || slice.ContainsString(rule.Verbs, v1.VerbAll, nil) {
if nonResourceURL == "" {
if slice.ContainsString(rule.Resources, resource, nil) || slice.ContainsString(rule.Resources, v1.ResourceAll, nil) {
if resourceName == "" {
return true
} else if slice.ContainsString(rule.ResourceNames, resourceName, nil) || slice.ContainsString(rule.Resources, v1.ResourceAll, nil) {
return true
}
}
} else if slice.ContainsString(rule.NonResourceURLs, nonResourceURL, nil) || slice.ContainsString(rule.NonResourceURLs, v1.NonResourceAll, nil) {
return true
}
}
func ruleMatchesResources(rule v1.PolicyRule, apiGroup string, resource string, subresource string, resourceName string) bool {
if resource == "" {
return false
}
if !hasString(rule.APIGroups, apiGroup) && !hasString(rule.APIGroups, v1.APIGroupAll) {
return false
}
if len(rule.ResourceNames) > 0 && !hasString(rule.ResourceNames, resourceName) {
return false
}
combinedResource := resource
if subresource != "" {
combinedResource = combinedResource + "/" + subresource
}
for _, res := range rule.Resources {
// match "*"
if res == v1.ResourceAll || res == combinedResource {
return true
}
// match "*/subresource"
if len(subresource) > 0 && strings.HasPrefix(res, "*/") && subresource == strings.TrimPrefix(res, "*/") {
return true
}
// match "resource/*"
if strings.HasSuffix(res, "/*") && resource == strings.TrimSuffix(res, "/*") {
return true
}
}
return false
}
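The resource matcher above accepts a bare `"*"`, an exact `resource` or `resource/subresource`, a `*/subresource` pattern, or a `resource/*` pattern. A minimal standalone sketch of those wildcard semantics (`matchResource` is a hypothetical name, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// matchResource mirrors the resource matching above: a rule entry may be
// "*", an exact "resource" or "resource/subresource", "*/subresource",
// or "resource/*".
func matchResource(ruleResources []string, resource, subresource string) bool {
	combined := resource
	if subresource != "" {
		combined = resource + "/" + subresource
	}
	for _, res := range ruleResources {
		// match "*" or the exact combined form
		if res == "*" || res == combined {
			return true
		}
		// match "*/subresource"
		if subresource != "" && strings.HasPrefix(res, "*/") && subresource == strings.TrimPrefix(res, "*/") {
			return true
		}
		// match "resource/*"
		if strings.HasSuffix(res, "/*") && resource == strings.TrimSuffix(res, "/*") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(matchResource([]string{"deployments/*"}, "deployments", "scale")) // true
	fmt.Println(matchResource([]string{"*/status"}, "pods", "status"))            // true
	fmt.Println(matchResource([]string{"pods"}, "pods", "log"))                   // false
}
```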
func ruleMatchesRequest(rule v1.PolicyRule, apiGroup string, nonResourceURL string, resource string, subresource string, resourceName string, verb string) bool {
if !hasString(rule.Verbs, verb) && !hasString(rule.Verbs, v1.VerbAll) {
return false
}
if nonResourceURL == "" {
return ruleMatchesResources(rule, apiGroup, resource, subresource, resourceName)
} else {
return ruleMatchesNonResource(rule, nonResourceURL)
}
}
func ruleMatchesNonResource(rule v1.PolicyRule, nonResourceURL string) bool {
if nonResourceURL == "" {
return false
}
for _, spec := range rule.NonResourceURLs {
if pathMatches(nonResourceURL, spec) {
return true
}
}
return false
}
func pathMatches(path, spec string) bool {
// Allow wildcard match
if spec == "*" {
return true
}
// Allow exact match
if spec == path {
return true
}
// Allow a trailing * subpath match
if strings.HasSuffix(spec, "*") && strings.HasPrefix(path, strings.TrimSuffix(spec, "*")) {
return true
}
return false
}
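The non-resource URL matcher supports three cases: the full wildcard `"*"`, an exact match, and a trailing-`*` subpath prefix. A self-contained sketch reproducing that behavior outside the package:

```go
package main

import (
	"fmt"
	"strings"
)

// pathMatches mirrors the matcher above: "*" matches everything, an exact
// spec matches itself, and a spec with a trailing "*" matches any subpath.
func pathMatches(path, spec string) bool {
	if spec == "*" || spec == path {
		return true
	}
	if strings.HasSuffix(spec, "*") && strings.HasPrefix(path, strings.TrimSuffix(spec, "*")) {
		return true
	}
	return false
}

func main() {
	fmt.Println(pathMatches("/apis/iam/users", "/apis/*")) // true
	fmt.Println(pathMatches("/healthz", "/apis/*"))        // false
	fmt.Println(pathMatches("/metrics", "*"))              // true
}
```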
func hasString(slice []string, value string) bool {
for _, s := range slice {
if s == value {
return true
}
}
return false

File diff suppressed because it is too large

@@ -1,161 +0,0 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package iam
import (
"k8s.io/api/rbac/v1"
)
func GetUserRules(username string) (map[string][]Rule, error) {
items := make(map[string][]Rule, 0)
userRoles, err := GetRoles(username)
if err != nil {
return nil, err
}
rulesMapping := make(map[string][]v1.PolicyRule, 0)
for _, role := range userRoles {
rules := rulesMapping[role.Namespace]
if rules == nil {
rules = make([]v1.PolicyRule, 0)
}
rules = append(rules, role.Rules...)
rulesMapping[role.Namespace] = rules
}
for namespace, policyRules := range rulesMapping {
rules := convertToRules(policyRules)
if len(rules) > 0 {
items[namespace] = rules
}
}
return items, nil
}
func convertToRules(policyRules []v1.PolicyRule) []Rule {
rules := make([]Rule, 0)
for i := 0; i < (len(RoleRuleGroup)); i++ {
rule := Rule{Name: RoleRuleGroup[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < (len(RoleRuleGroup[i].Actions)); j++ {
if actionValidate(policyRules, RoleRuleGroup[i].Actions[j]) {
rule.Actions = append(rule.Actions, RoleRuleGroup[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules
}
func GetUserClusterRules(username string) ([]Rule, error) {
rules := make([]Rule, 0)
clusterRoles, err := GetClusterRoles(username)
if err != nil {
return nil, err
}
clusterRules := make([]v1.PolicyRule, 0)
for _, role := range clusterRoles {
clusterRules = append(clusterRules, role.Rules...)
}
for i := 0; i < (len(ClusterRoleRuleGroup)); i++ {
rule := Rule{Name: ClusterRoleRuleGroup[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < (len(ClusterRoleRuleGroup[i].Actions)); j++ {
if actionValidate(clusterRules, ClusterRoleRuleGroup[i].Actions[j]) {
rule.Actions = append(rule.Actions, ClusterRoleRuleGroup[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules, nil
}
func GetClusterRoleRules(name string) ([]Rule, error) {
clusterRole, err := GetClusterRole(name)
if err != nil {
return nil, err
}
rules := make([]Rule, 0)
for i := 0; i < len(ClusterRoleRuleGroup); i++ {
rule := Rule{Name: ClusterRoleRuleGroup[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < (len(ClusterRoleRuleGroup[i].Actions)); j++ {
if actionValidate(clusterRole.Rules, ClusterRoleRuleGroup[i].Actions[j]) {
rule.Actions = append(rule.Actions, ClusterRoleRuleGroup[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules, nil
}
func GetRoleRules(namespace string, name string) ([]Rule, error) {
role, err := GetRole(namespace, name)
if err != nil {
return nil, err
}
rules := make([]Rule, 0)
for i := 0; i < len(RoleRuleGroup); i++ {
rule := Rule{Name: RoleRuleGroup[i].Name}
rule.Actions = make([]Action, 0)
for j := 0; j < len(RoleRuleGroup[i].Actions); j++ {
if actionValidate(role.Rules, RoleRuleGroup[i].Actions[j]) {
rule.Actions = append(rule.Actions, RoleRuleGroup[i].Actions[j])
}
}
if len(rule.Actions) > 0 {
rules = append(rules, rule)
}
}
return rules, nil
}
func actionValidate(rules []v1.PolicyRule, action Action) bool {
for _, rule := range action.Rules {
if !ruleValidate(rules, rule) {
return false
}
}
return true
}

pkg/models/iam/types.go Normal file

@@ -0,0 +1,39 @@
package iam
import (
"k8s.io/api/rbac/v1"
)
type Action struct {
Name string `json:"name"`
Rules []v1.PolicyRule `json:"rules"`
}
type Rule struct {
Name string `json:"name"`
Actions []Action `json:"actions"`
}
type SimpleRule struct {
Name string `json:"name"`
Actions []string `json:"actions"`
}
type User struct {
Username string `json:"username"`
Groups []string `json:"groups"`
Password string `json:"password,omitempty"`
AvatarUrl string `json:"avatar_url"`
Description string `json:"description"`
Email string `json:"email"`
LastLoginTime string `json:"last_login_time"`
Status int `json:"status"`
ClusterRole string `json:"cluster_role"`
ClusterRules []SimpleRule `json:"cluster_rules,omitempty"`
Roles map[string]string `json:"roles,omitempty"`
Rules map[string][]SimpleRule `json:"rules,omitempty"`
Role string `json:"role,omitempty"`
WorkspaceRoles map[string]string `json:"workspace_roles,omitempty"`
WorkspaceRole string `json:"workspace_role,omitempty"`
WorkspaceRules map[string][]SimpleRule `json:"workspace_rules,omitempty"`
}

View File

@@ -29,15 +29,23 @@ import (
"k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"crypto/tls"
"io/ioutil"
"net/http"
"time"
kubeclient "kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/constants"
)
const (
TYPE = "kubernetes.io/dockerconfigjson"
SECRET = "Secret"
APIVERSION = "v1"
TYPEHARBOR = "harbor"
TYPEDOCKERHUB = "dockerhub"
TYPEDOCKERREGISTRY = "docker-registry"
)
type AuthInfo struct {
Username string `json:"username"`
@@ -45,6 +53,50 @@ type AuthInfo struct {
ServerHost string `json:"serverhost"`
}
type DockerConfigEntry struct {
Username string `json:"username"`
Password string `json:"password"`
Auth string `json:"auth"`
}
type RegistryInfo struct {
user, password, registryType, url string
}
type dockerConfig map[string]map[string]DockerConfigEntry
type harborRepo struct {
RepoName string `json:"repository_name"`
}
type harborRepos struct {
Repos []harborRepo `json:"repository"`
}
type registryRepos struct {
Repositories []string
}
type registryTags struct {
Name string `json:"name"`
Tags []string `json:"tags"`
}
type dockerhubRepo struct {
RepoName string `json:"repo_name"`
}
type dockerhubRepos struct {
Repositories []dockerhubRepo `json:"results"`
}
type dockerhubTag struct {
TagName string `json:"name"`
}
type dockerhubTags struct {
Tags []dockerhubTag `json:"results"`
}
func NewAuthInfo(para Registries) *AuthInfo {
return &AuthInfo{
@@ -440,3 +492,237 @@ func GetReisgtries(name string) (Registries, error) {
return reg, nil
}
// getRegistryInfo resolves a registry's info (username, password, registry URL, type) from an image pull secret
func getRegistryInfo(namespace, registryName string) *RegistryInfo {
var registry RegistryInfo
k8sClient := kubeclient.NewK8sClient()
secret, err := k8sClient.CoreV1().Secrets(namespace).Get(registryName, meta_v1.GetOptions{})
if err != nil {
glog.Error(err)
return nil
}
registry.registryType = secret.Annotations["type"]
data := secret.Data[v1.DockerConfigJsonKey]
authsMap := make(dockerConfig)
err = json.Unmarshal(data, &authsMap)
if err != nil {
glog.Error(err)
return nil
}
for url, config := range authsMap["auths"] {
registry.url = url
registry.user = config.Username
registry.password = config.Password
break
}
return &registry
}
func ImageSearch(namespace, registryName, searchWord string) []string {
registry := getRegistryInfo(namespace, registryName)
if registry == nil {
return nil
}
switch registry.registryType {
case TYPEDOCKERHUB:
return searchDockerHub(registry.url, searchWord)
case TYPEDOCKERREGISTRY:
return searchDockerRegistry(registry.url, searchWord)
case TYPEHARBOR:
return searchHarbor(registry.url, registry.user, registry.password, searchWord)
}
return nil
}
func GetImageTags(namespace, registryName, imageName string) []string {
registry := getRegistryInfo(namespace, registryName)
if registry == nil {
return nil
}
switch registry.registryType {
case TYPEDOCKERHUB:
return getTagInDockerHub(registry.url, imageName)
case TYPEDOCKERREGISTRY:
return getTagInDockerRegistry(registry.url, imageName)
case TYPEHARBOR:
return getTagInHarbor(registry.url, registry.user, registry.password, imageName)
}
return nil
}
func httpGet(url, username, password string, insecure bool) ([]byte, error) {
var httpClient *http.Client
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return nil, err
}
if insecure {
httpClient = &http.Client{}
} else {
req.SetBasicAuth(username, password)
// private registries often use self-signed certificates, so skip TLS verification
tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
httpClient = &http.Client{Timeout: 20 * time.Second, Transport: tr}
}
resp, err := httpClient.Do(req)
if err != nil {
err := fmt.Errorf("Request to %s failed reason: %s ", url, err)
return nil, err
}
body, err := ioutil.ReadAll(resp.Body)
defer resp.Body.Close()
if err != nil {
return nil, err
}
if resp.StatusCode >= http.StatusBadRequest {
return nil, fmt.Errorf("request to %s failed, status: %s", url, resp.Status)
}
return body, nil
}
func searchHarbor(url, username, password, searchWord string) []string {
url = strings.TrimSuffix(url, "/") + fmt.Sprintf("/api/search?q=%s", searchWord)
body, err := httpGet(url, username, password, false)
if err != nil || len(body) == 0 {
glog.Error(err)
return nil
}
var repos harborRepos
repoList := make([]string, 0, 100)
err = json.Unmarshal(body, &repos)
if err != nil {
glog.Error(err)
return nil
}
for _, repo := range repos.Repos {
repoList = append(repoList, repo.RepoName)
}
return repoList
}
func searchDockerRegistry(url, searchword string) []string {
url = strings.TrimSuffix(url, "/") + "/v2/_catalog"
body, err := httpGet(url, "", "", true)
if err != nil || len(body) == 0 {
glog.Error(err)
return nil
}
var repos registryRepos
err = json.Unmarshal(body, &repos)
if err != nil {
glog.Error(err)
return nil
}
repoList := make([]string, 0, 100)
for _, repo := range repos.Repositories {
if strings.HasPrefix(repo, searchword) {
repoList = append(repoList, repo)
}
}
return repoList
}
func searchDockerHub(url, searchWord string) []string {
url = fmt.Sprintf("https://hub.docker.com/v2/search/repositories/?page=1&query=%s&page_size=50", searchWord)
body, err := httpGet(url, "", "", true)
if err != nil || len(body) == 0 {
glog.Error(err)
return nil
}
var repos dockerhubRepos
err = json.Unmarshal(body, &repos)
if err != nil {
glog.Error(err)
return nil
}
repoList := make([]string, 0, 50)
for _, repo := range repos.Repositories {
repoList = append(repoList, repo.RepoName)
}
return repoList
}
func getTagInHarbor(url, username, password, imageName string) []string {
url = strings.TrimSuffix(url, "/") + fmt.Sprintf("/api/repositories/%s/tags", imageName)
body, err := httpGet(url, username, password, false)
if err != nil || len(body) == 0 {
glog.Error(err)
return nil
}
var tagList []string
err = json.Unmarshal(body, &tagList)
if err != nil {
glog.Error(err)
return nil
}
return tagList
}
func getTagInDockerRegistry(url, imageName string) []string {
url = strings.TrimSuffix(url, "/") + fmt.Sprintf("/v2/%s/tags/list", imageName)
body, err := httpGet(url, "", "", true)
if err != nil || len(body) == 0 {
glog.Error(err)
return nil
}
var tags registryTags
err = json.Unmarshal(body, &tags)
if err != nil {
glog.Error(err)
return nil
}
return tags.Tags
}
func getTagInDockerHub(url, imageName string) []string {
if !strings.Contains(imageName, "/") {
imageName = fmt.Sprintf("library/%s", imageName)
}
url = fmt.Sprintf("https://hub.docker.com/v2/repositories/%s/tags/?page=1&page_size=200", imageName)
body, err := httpGet(url, "", "", true)
if err != nil || len(body) == 0 {
glog.Error(err)
return nil
}
var tags dockerhubTags
err = json.Unmarshal(body, &tags)
if err != nil {
glog.Error(err)
return nil
}
tagList := make([]string, 0, 200)
for _, tag := range tags.Tags {
tagList = append(tagList, tag.TagName)
}
return tagList
}
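`getTagInDockerHub` first normalizes bare image names, since official images without a namespace live under `library/` on Docker Hub. That rule in isolation (`normalizeHubRepo` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeHubRepo applies the same rule as getTagInDockerHub: official
// images without a namespace are addressed as "library/<name>".
func normalizeHubRepo(imageName string) string {
	if !strings.Contains(imageName, "/") {
		return "library/" + imageName
	}
	return imageName
}

func main() {
	fmt.Println(normalizeHubRepo("nginx"))         // library/nginx
	fmt.Println(normalizeHubRepo("bitnami/redis")) // bitnami/redis
}
```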


@@ -31,11 +31,12 @@ import (
"github.com/golang/glog"
"gopkg.in/yaml.v2"
"k8s.io/api/core/v1"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/api/errors"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/constants"
"kubesphere.io/kubesphere/pkg/options"
@@ -46,6 +47,7 @@ const (
keyPath = "/etc/kubernetes/pki/ca.key"
clusterName = "kubernetes"
kubectlConfigKey = "config"
defaultNamespace = "default"
)
type clusterInfo struct {
@@ -59,8 +61,9 @@ type cluster struct {
}
type contextInfo struct {
Cluster string `yaml:"cluster"`
User string `yaml:"user"`
NameSpace string `yaml:"namespace"`
}
type contextObject struct {
@@ -186,14 +189,14 @@ func newCertificate(info CertInformation) *x509.Certificate {
}
func generateCaAndKey(user, caPath, keyPath string) (string, string, error) {
crtInfo := CertInformation{CommonName: user, IsCA: false}
crt, pri, err := Parse(caPath, keyPath)
if err != nil {
glog.Error(err)
return "", "", err
}
cert, key, err := createCRT(crt, pri, crtInfo)
if err != nil {
glog.Error(err)
return "", "", err
@@ -217,7 +220,7 @@ func createKubeConfig(userName string) (string, error) {
tmpKubeConfig.Clusters = append(tmpKubeConfig.Clusters, tmpCluster)
contextName := userName + "@" + clusterName
tmpContext := contextObject{Context: contextInfo{User: userName, Cluster: clusterName, NameSpace: defaultNamespace}, Name: contextName}
tmpKubeConfig.Contexts = append(tmpKubeConfig.Contexts, tmpContext)
cert, key, err := generateCaAndKey(userName, caPath, keyPath)
@@ -240,42 +243,48 @@ func createKubeConfig(userName string) (string, error) {
func CreateKubeConfig(user string) error {
k8sClient := client.NewK8sClient()
_, err := k8sClient.CoreV1().ConfigMaps(constants.KubeSphereControlNamespace).Get(user, metaV1.GetOptions{})
if errors.IsNotFound(err) {
config, err := createKubeConfig(user)
if err != nil {
glog.Errorln(err)
return err
}
data := map[string]string{"config": string(config)}
configMap := v1.ConfigMap{TypeMeta: metaV1.TypeMeta{Kind: "Configmap", APIVersion: "v1"}, ObjectMeta: metaV1.ObjectMeta{Name: user}, Data: data}
_, err = k8sClient.CoreV1().ConfigMaps(constants.KubeSphereControlNamespace).Create(&configMap)
if err != nil && !errors.IsAlreadyExists(err) {
glog.Errorf("create user %s's kubeConfig failed, reason: %v", user, err)
return err
}
}
return nil
}
func GetKubeConfig(user string) (string, error) {
k8sClient := client.NewK8sClient()
configMap, err := k8sClient.CoreV1().ConfigMaps(constants.KubeSphereControlNamespace).Get(user, metaV1.GetOptions{})
if err != nil {
glog.Errorf("cannot get user %s's kubeConfig, reason: %v", user, err)
return "", err
}
return configMap.Data[kubectlConfigKey], nil
}
func DelKubeConfig(user string) error {
k8sClient := client.NewK8sClient()
_, err := k8sClient.CoreV1().ConfigMaps(constants.KubeSphereControlNamespace).Get(user, metaV1.GetOptions{})
if errors.IsNotFound(err) {
return nil
}
deletePolicy := metaV1.DeletePropagationBackground
err = k8sClient.CoreV1().ConfigMaps(constants.KubeSphereControlNamespace).Delete(user, &metaV1.DeleteOptions{PropagationPolicy: &deletePolicy})
if err != nil {
glog.Errorf("delete user %s's kubeConfig failed, reason: %v", user, err)
return err


@@ -14,7 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
package models
package kubectl
import (
"fmt"
@@ -35,21 +35,20 @@ import (
const (
namespace = constants.KubeSphereControlNamespace
retry = 5
)
type KubectlPodInfo struct {
Namespace string `json:"namespace"`
Pod string `json:"pod"`
Container string `json:"container"`
}
func GetKubectlPod(user string) (KubectlPodInfo, error) {
k8sClient := client.NewK8sClient()
deploy, err := k8sClient.AppsV1beta2().Deployments(namespace).Get(user, metav1.GetOptions{})
if err != nil {
glog.Errorln(err)
return KubectlPodInfo{}, err
}
selectors := deploy.Spec.Selector.MatchLabels
@@ -57,16 +56,16 @@ func GetKubectlPod(user string) (kubectlPodInfo, error) {
podList, err := k8sClient.CoreV1().Pods(namespace).List(metav1.ListOptions{LabelSelector: labelSelector})
if err != nil {
glog.Errorln(err)
return KubectlPodInfo{}, err
}
pod, err := selectCorrectPod(namespace, podList.Items)
if err != nil {
glog.Errorln(err)
return KubectlPodInfo{}, err
}
info := KubectlPodInfo{Namespace: pod.Namespace, Pod: pod.Name, Container: pod.Status.ContainerStatuses[0].Name}
return info, nil
@@ -91,7 +90,12 @@ func selectCorrectPod(namespace string, pods []v1.Pod) (kubectlPod v1.Pod, err e
return kubectPodList[random], nil
}
func CreateKubectlDeploy(user string) error {
k8sClient := client.NewK8sClient()
_, err := k8sClient.AppsV1().Deployments(namespace).Get(user, metav1.GetOptions{})
if err == nil {
return nil
}
replica := int32(1)
selector := metav1.LabelSelector{MatchLabels: map[string]string{"user": user}}
@@ -122,17 +126,12 @@ func CreateKubectlPod(user string) error {
},
}
_, err = k8sClient.AppsV1beta2().Deployments(namespace).Create(&deployment)
return err
}
func DelKubectlDeploy(user string) error {
k8sClient := client.NewK8sClient()
_, err := k8sClient.AppsV1beta2().Deployments(namespace).Get(user, metav1.GetOptions{})
if errors.IsNotFound(err) {

pkg/models/metrics/containers.go Normal file → Executable file

File diff suppressed because it is too large

pkg/models/metrics/metricsrule.go Executable file

@@ -0,0 +1,200 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package metrics
import (
"strings"
)
func MakeWorkloadPromQL(metricName, nsName, wlFilter string) string {
if wlFilter == "" {
wlFilter = ".*"
}
var promql = RulePromQLTmplMap[metricName]
promql = strings.Replace(promql, "$2", nsName, -1)
promql = strings.Replace(promql, "$3", wlFilter, -1)
return promql
}
func MakeSpecificWorkloadRule(wkKind, wkName, namespace string) string {
var rule = PodInfoRule
if namespace == "" {
namespace = ".*"
}
// alternative values: Deployment, StatefulSet, ReplicaSet, DaemonSet
wkKind = strings.ToLower(wkKind)
switch wkKind {
case "deployment":
wkKind = ReplicaSet
if wkName != "" {
wkName = "~\"^" + wkName + `-(\\w)+$"`
} else {
wkName = "~\".*\""
}
rule = strings.Replace(rule, "$1", wkKind, -1)
rule = strings.Replace(rule, "$2", wkName, -1)
rule = strings.Replace(rule, "$3", namespace, -1)
return rule
case "replicaset":
wkKind = ReplicaSet
case "statefulset":
wkKind = StatefulSet
case "daemonset":
wkKind = DaemonSet
}
if wkName == "" {
wkName = "~\".*\""
} else {
wkName = "\"" + wkName + "\""
}
rule = strings.Replace(rule, "$1", wkKind, -1)
rule = strings.Replace(rule, "$2", wkName, -1)
rule = strings.Replace(rule, "$3", namespace, -1)
return rule
}
func MakeAllWorkspacesPromQL(metricsName, nsFilter string) string {
var promql = RulePromQLTmplMap[metricsName]
nsFilter = "!~\"" + nsFilter + "\""
promql = strings.Replace(promql, "$1", nsFilter, -1)
return promql
}
func MakeSpecificWorkspacePromQL(metricsName, nsFilter string) string {
var promql = RulePromQLTmplMap[metricsName]
nsFilter = "=~\"" + nsFilter + "\""
promql = strings.Replace(promql, "$1", nsFilter, -1)
return promql
}
func MakeContainerPromQL(nsName, nodeId, podName, containerName, metricName, containerFilter string) string {
var promql string
if nsName != "" {
// get container metrics from namespace-pod
promql = RulePromQLTmplMap[metricName]
promql = strings.Replace(promql, "$1", nsName, -1)
} else {
// get container metrics from node-pod
promql = RulePromQLTmplMap[metricName+"_node"]
promql = strings.Replace(promql, "$1", nodeId, -1)
}
promql = strings.Replace(promql, "$2", podName, -1)
if containerName == "" {
if containerFilter == "" {
containerFilter = ".*"
}
promql = strings.Replace(promql, "$3", containerFilter, -1)
} else {
promql = strings.Replace(promql, "$3", containerName, -1)
}
return promql
}
func MakePodPromQL(metricName, nsName, nodeID, podName, podFilter string) string {
if podFilter == "" {
podFilter = ".*"
}
var promql = ""
if nsName != "" {
// get pod metrics by namespace
if podName != "" {
// specific pod
promql = RulePromQLTmplMap[metricName]
promql = strings.Replace(promql, "$1", nsName, -1)
promql = strings.Replace(promql, "$2", podName, -1)
} else {
// all pods
metricName += "_all"
promql = RulePromQLTmplMap[metricName]
promql = strings.Replace(promql, "$1", nsName, -1)
promql = strings.Replace(promql, "$2", podFilter, -1)
}
} else if nodeID != "" {
// get pod metrics by nodeid
metricName += "_node"
promql = RulePromQLTmplMap[metricName]
promql = strings.Replace(promql, "$3", nodeID, -1)
if podName != "" {
// specific pod
promql = strings.Replace(promql, "$2", podName, -1)
} else {
promql = strings.Replace(promql, "$2", podFilter, -1)
}
}
return promql
}
func MakeNamespacePromQL(nsName string, nsFilter string, metricsName string) string {
var recordingRule = RulePromQLTmplMap[metricsName]
if nsName != "" {
nsFilter = nsName
} else {
if nsFilter == "" {
nsFilter = ".*"
}
}
recordingRule = strings.Replace(recordingRule, "$1", nsFilter, -1)
return recordingRule
}
// cluster rule
func MakeClusterRule(metricsName string) string {
var rule = RulePromQLTmplMap[metricsName]
return rule
}
// node rule
func MakeNodeRule(nodeID string, nodesFilter string, metricsName string) string {
var rule = RulePromQLTmplMap[metricsName]
if nodesFilter == "" {
nodesFilter = ".*"
}
if strings.Contains(metricsName, "disk_size") || strings.Contains(metricsName, "pod") || strings.Contains(metricsName, "usage") || strings.Contains(metricsName, "inode") || strings.Contains(metricsName, "load") {
// disk size promql
if nodeID != "" {
nodesFilter = "{" + "node" + "=" + "\"" + nodeID + "\"" + "}"
} else {
nodesFilter = "{" + "node" + "=~" + "\"" + nodesFilter + "\"" + "}"
}
rule = strings.Replace(rule, "$1", nodesFilter, -1)
} else {
// cpu, memory, network, disk_iops rules
if nodeID != "" {
// specific node
rule = rule + "{" + "node" + "=" + "\"" + nodeID + "\"" + "}"
} else {
// all nodes or specific nodes filted with re2 syntax
rule = rule + "{" + "node" + "=~" + "\"" + nodesFilter + "\"" + "}"
}
}
return rule
}
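`MakeNodeRule` appends (or substitutes) a `{node=...}` label selector: an exact match for one node, or a re2 filter for several. The string construction, extracted into a standalone sketch (`makeNodeSelector` is a hypothetical helper, and `node_load1` a sample metric):

```go
package main

import "fmt"

// makeNodeSelector mirrors the selector building above: a specific node gets
// an exact label match, otherwise a re2 filter (defaulting to ".*") applies.
func makeNodeSelector(nodeID, nodesFilter string) string {
	if nodesFilter == "" {
		nodesFilter = ".*"
	}
	if nodeID != "" {
		return "{node=\"" + nodeID + "\"}"
	}
	return "{node=~\"" + nodesFilter + "\"}"
}

func main() {
	fmt.Println("node_load1" + makeNodeSelector("node1", ""))      // specific node
	fmt.Println("node_load1" + makeNodeSelector("", "master.*"))   // filtered nodes
}
```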


@@ -0,0 +1,579 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package metrics
const (
ResultTypeVector = "vector"
ResultTypeMatrix = "matrix"
MetricStatus = "status"
MetricStatusError = "error"
MetricStatusSuccess = "success"
ResultItemMetric = "metric"
ResultItemMetricResource = "resource"
ResultItemValue = "value"
ResultItemValues = "values"
ResultSortTypeDesc = "desc"
ResultSortTypeAsce = "asce"
)
const (
MetricNameWorkloadCount = "workload_count"
MetricNameNamespacePodCount = "namespace_pod_count"
MetricNameWorkspaceAllOrganizationCount = "workspace_all_organization_count"
MetricNameWorkspaceAllAccountCount = "workspace_all_account_count"
MetricNameWorkspaceAllProjectCount = "workspace_all_project_count"
MetricNameWorkspaceAllDevopsCount = "workspace_all_devops_project_count"
MetricNameClusterAllProjectCount = "cluster_namespace_count"
MetricNameWorkspaceNamespaceCount = "workspace_namespace_count"
MetricNameWorkspaceDevopsCount = "workspace_devops_project_count"
MetricNameWorkspaceMemberCount = "workspace_member_count"
MetricNameWorkspaceRoleCount = "workspace_role_count"
MetricNameComponentOnLine = "component_online_count"
MetricNameComponentLine = "component_count"
)
const (
WorkspaceResourceKindOrganization = "organization"
WorkspaceResourceKindAccount = "account"
WorkspaceResourceKindNamespace = "namespace"
WorkspaceResourceKindDevops = "devops"
WorkspaceResourceKindMember = "member"
WorkspaceResourceKindRole = "role"
)
const (
MetricLevelCluster = "cluster"
MetricLevelClusterWorkspace = "cluster_workspace"
MetricLevelNode = "node"
MetricLevelWorkspace = "workspace"
MetricLevelNamespace = "namespace"
MetricLevelPod = "pod"
MetricLevelPodName = "pod_name"
MetricLevelContainer = "container"
MetricLevelContainerName = "container_name"
MetricLevelWorkload = "workload"
)
const (
ReplicaSet = "ReplicaSet"
StatefulSet = "StatefulSet"
DaemonSet = "DaemonSet"
Deployment = "Deployment"
)
const (
NodeStatusRule = `kube_node_status_condition{condition="Ready"} > 0`
PodInfoRule = `kube_pod_info{created_by_kind="$1",created_by_name=$2,namespace="$3"}`
NamespaceLabelRule = `kube_namespace_labels`
WorkloadReplicaSetOwnerRule = `kube_pod_owner{namespace="$1", owner_name!="<none>", owner_kind="ReplicaSet"}`
WorkspaceNamespaceLabelRule = `sum(kube_namespace_labels{label_kubesphere_io_workspace != ""}) by (label_kubesphere_io_workspace)`
ExcludedVirtualNetworkInterfaces = `interface!~"^(cali.+|tunl.+|dummy.+|kube.+|flannel.+|cni.+|docker.+|veth.+|lo.*)"`
)
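Note that in PodInfoRule the $1 and $3 placeholders sit inside quotes while $2 does not, so the caller supplies the quoting for the owner-name matcher — which is what allows an RE2 pattern to be passed there. A minimal sketch of the positional fill, using a hypothetical helper `fillPodInfoRule` and example values not taken from this file:

```go
package main

import (
	"fmt"
	"strings"
)

// Same template as PodInfoRule above: $1 = owner kind, $2 = already-quoted
// owner-name matcher, $3 = namespace.
const podInfoRule = `kube_pod_info{created_by_kind="$1",created_by_name=$2,namespace="$3"}`

// fillPodInfoRule is a hypothetical helper illustrating the substitution.
func fillPodInfoRule(kind, nameMatcher, namespace string) string {
	promql := strings.Replace(podInfoRule, "$1", kind, -1)
	promql = strings.Replace(promql, "$2", nameMatcher, -1)
	return strings.Replace(promql, "$3", namespace, -1)
}

func main() {
	// The caller quotes $2 itself, so a regex matcher can be injected.
	fmt.Println(fillPodInfoRule("ReplicaSet", `"web-.*"`, "default"))
	// → kube_pod_info{created_by_kind="ReplicaSet",created_by_name="web-.*",namespace="default"}
}
```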
const (
WorkspaceJoinedKey = "label_kubesphere_io_workspace"
)
type MetricMap map[string]string
var ClusterMetricsNames = []string{
"cluster_cpu_utilisation",
"cluster_cpu_usage",
"cluster_cpu_total",
"cluster_memory_utilisation",
"cluster_memory_available",
"cluster_memory_total",
"cluster_memory_usage_wo_cache",
"cluster_net_utilisation",
"cluster_net_bytes_transmitted",
"cluster_net_bytes_received",
"cluster_disk_read_iops",
"cluster_disk_write_iops",
"cluster_disk_read_throughput",
"cluster_disk_write_throughput",
"cluster_disk_size_usage",
"cluster_disk_size_utilisation",
"cluster_disk_size_capacity",
"cluster_disk_size_available",
"cluster_disk_inode_total",
"cluster_disk_inode_usage",
"cluster_disk_inode_utilisation",
"cluster_node_online",
"cluster_node_offline",
"cluster_node_total",
"cluster_pod_count",
"cluster_pod_quota",
"cluster_pod_utilisation",
"cluster_pod_running_count",
"cluster_pod_succeeded_count",
"cluster_pod_abnormal_count",
"cluster_ingresses_extensions_count",
"cluster_cronjob_count",
"cluster_pvc_count",
"cluster_daemonset_count",
"cluster_deployment_count",
"cluster_endpoint_count",
"cluster_hpa_count",
"cluster_job_count",
"cluster_statefulset_count",
"cluster_replicaset_count",
"cluster_service_count",
"cluster_secret_count",
"cluster_namespace_count",
"cluster_load1",
"cluster_load5",
"cluster_load15",
}
var NodeMetricsNames = []string{
"node_cpu_utilisation",
"node_cpu_total",
"node_cpu_usage",
"node_memory_utilisation",
"node_memory_usage_wo_cache",
"node_memory_available",
"node_memory_total",
"node_net_utilisation",
"node_net_bytes_transmitted",
"node_net_bytes_received",
"node_disk_read_iops",
"node_disk_write_iops",
"node_disk_read_throughput",
"node_disk_write_throughput",
"node_disk_size_capacity",
"node_disk_size_available",
"node_disk_size_usage",
"node_disk_size_utilisation",
"node_disk_inode_total",
"node_disk_inode_usage",
"node_disk_inode_utilisation",
"node_pod_count",
"node_pod_quota",
"node_pod_utilisation",
"node_pod_running_count",
"node_pod_succeeded_count",
"node_pod_abnormal_count",
"node_load1",
"node_load5",
"node_load15",
}
var WorkspaceMetricsNames = []string{
"workspace_cpu_usage",
"workspace_memory_usage",
"workspace_memory_usage_wo_cache",
"workspace_net_bytes_transmitted",
"workspace_net_bytes_received",
"workspace_pod_count",
"workspace_pod_running_count",
"workspace_pod_succeeded_count",
"workspace_pod_abnormal_count",
"workspace_ingresses_extensions_count",
"workspace_cronjob_count",
"workspace_pvc_count",
"workspace_daemonset_count",
"workspace_deployment_count",
"workspace_endpoint_count",
"workspace_hpa_count",
"workspace_job_count",
"workspace_statefulset_count",
"workspace_replicaset_count",
"workspace_service_count",
"workspace_secret_count",
"workspace_all_project_count",
}
var NamespaceMetricsNames = []string{
"namespace_cpu_usage",
"namespace_memory_usage",
"namespace_memory_usage_wo_cache",
"namespace_net_bytes_transmitted",
"namespace_net_bytes_received",
"namespace_pod_count",
"namespace_pod_running_count",
"namespace_pod_succeeded_count",
"namespace_pod_abnormal_count",
"namespace_configmap_count_used",
"namespace_jobs_batch_count_used",
"namespace_roles_count_used",
"namespace_memory_limit_used",
"namespace_pvc_used",
"namespace_memory_request_used",
"namespace_pvc_count_used",
"namespace_cronjobs_batch_count_used",
"namespace_ingresses_extensions_count_used",
"namespace_cpu_limit_used",
"namespace_storage_request_used",
"namespace_deployment_count_used",
"namespace_pod_count_used",
"namespace_statefulset_count_used",
"namespace_daemonset_count_used",
"namespace_secret_count_used",
"namespace_service_count_used",
"namespace_cpu_request_used",
"namespace_service_loadbalancer_used",
"namespace_configmap_count_hard",
"namespace_jobs_batch_count_hard",
"namespace_roles_count_hard",
"namespace_memory_limit_hard",
"namespace_pvc_hard",
"namespace_memory_request_hard",
"namespace_pvc_count_hard",
"namespace_cronjobs_batch_count_hard",
"namespace_ingresses_extensions_count_hard",
"namespace_cpu_limit_hard",
"namespace_storage_request_hard",
"namespace_deployment_count_hard",
"namespace_pod_count_hard",
"namespace_statefulset_count_hard",
"namespace_daemonset_count_hard",
"namespace_secret_count_hard",
"namespace_service_count_hard",
"namespace_cpu_request_hard",
"namespace_service_loadbalancer_hard",
"namespace_cronjob_count",
"namespace_pvc_count",
"namespace_daemonset_count",
"namespace_deployment_count",
"namespace_endpoint_count",
"namespace_hpa_count",
"namespace_job_count",
"namespace_statefulset_count",
"namespace_replicaset_count",
"namespace_service_count",
"namespace_secret_count",
"namespace_ingresses_extensions_count",
}
var PodMetricsNames = []string{
"pod_cpu_usage",
"pod_memory_usage",
"pod_memory_usage_wo_cache",
"pod_net_bytes_transmitted",
"pod_net_bytes_received",
}
var WorkloadMetricsNames = []string{
"workload_pod_cpu_usage",
"workload_pod_memory_usage",
"workload_pod_memory_usage_wo_cache",
"workload_pod_net_bytes_transmitted",
"workload_pod_net_bytes_received",
"workload_deployment_replica",
"workload_deployment_replica_available",
"workload_statefulset_replica",
"workload_statefulset_replica_available",
"workload_daemonset_replica",
"workload_daemonset_replica_available",
}
var ContainerMetricsNames = []string{
"container_cpu_usage",
"container_memory_usage",
"container_memory_usage_wo_cache",
//"container_net_bytes_transmitted",
//"container_net_bytes_received",
}
var RulePromQLTmplMap = MetricMap{
//cluster
"cluster_cpu_utilisation": ":node_cpu_utilisation:avg1m",
"cluster_cpu_usage": `:node_cpu_utilisation:avg1m * sum(node:node_num_cpu:sum)`,
"cluster_cpu_total": "sum(node:node_num_cpu:sum)",
"cluster_memory_utilisation": ":node_memory_utilisation:",
"cluster_memory_available": "sum(node:node_memory_bytes_available:sum)",
"cluster_memory_total": "sum(node:node_memory_bytes_total:sum)",
"cluster_memory_usage_wo_cache": "sum(node:node_memory_bytes_total:sum) - sum(node:node_memory_bytes_available:sum)",
"cluster_net_utilisation": ":node_net_utilisation:sum_irate",
"cluster_net_bytes_transmitted": "sum(node:node_net_bytes_transmitted:sum_irate)",
"cluster_net_bytes_received": "sum(node:node_net_bytes_received:sum_irate)",
"cluster_disk_read_iops": "sum(node:data_volume_iops_reads:sum)",
"cluster_disk_write_iops": "sum(node:data_volume_iops_writes:sum)",
"cluster_disk_read_throughput": "sum(node:data_volume_throughput_bytes_read:sum)",
"cluster_disk_write_throughput": "sum(node:data_volume_throughput_bytes_written:sum)",
"cluster_disk_size_usage": `sum(max((node_filesystem_size{device=~"/dev/.+", job="node-exporter"} - node_filesystem_avail{device=~"/dev/.+", job="node-exporter"}) * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:) by (node))`,
"cluster_disk_size_utilisation": `1 - sum(max(node_filesystem_avail{device=~"/dev/.+", job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:) by (node)) / sum(max(node_filesystem_size{device=~"/dev/.+", job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:) by (node))`,
"cluster_disk_size_capacity": `sum(max(node_filesystem_size{device=~"/dev/.+", job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:) by (node))`,
"cluster_disk_size_available": `sum(max(node_filesystem_avail{device=~"/dev/.+", job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:) by (node))`,
"cluster_disk_inode_total": `sum(node:disk_inodes_total:)`,
"cluster_disk_inode_usage": `sum(node:disk_inodes_total:) - sum(node:disk_inodes_free:)`,
"cluster_disk_inode_utilisation": `1 - sum(node:disk_inodes_free:) / sum(node:disk_inodes_total:)`,
"cluster_namespace_count": `count(kube_namespace_annotations)`,
// cluster_pod_count = cluster_pod_running_count + cluster_pod_succeeded_count + cluster_pod_abnormal_count
"cluster_pod_count": `sum((kube_pod_status_scheduled{condition="true"} > 0) * on (pod) group_left(node) (sum by (node, pod) (kube_pod_info)) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0))`,
"cluster_pod_quota": `sum(kube_node_status_capacity_pods unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0))`,
"cluster_pod_utilisation": `sum(kube_pod_info unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0)) / sum(kube_node_status_capacity_pods unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0))`,
"cluster_pod_running_count": `count(kube_pod_info unless on (pod) (kube_pod_status_phase{phase=~"Failed|Pending|Unknown|Succeeded"} > 0) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0))`,
"cluster_pod_succeeded_count": `count(kube_pod_info unless on (pod) (kube_pod_status_phase{phase=~"Failed|Pending|Unknown|Running"} > 0) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0))`,
"cluster_pod_abnormal_count": `count(kube_pod_info unless on (pod) (kube_pod_status_phase{phase=~"Succeeded|Running"} > 0) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0))`,
"cluster_node_online": `sum(kube_node_status_condition{condition="Ready",status="true"})`,
"cluster_node_offline": `sum(kube_node_status_condition{condition="Ready",status=~"unknown|false"})`,
"cluster_node_total": `sum(kube_node_status_condition{condition="Ready"})`,
"cluster_ingresses_extensions_count": `sum(kube_ingress_labels)`,
"cluster_configmap_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/configmaps"}) by (resource, type)`,
"cluster_jobs_batch_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/jobs.batch"}) by (resource, type)`,
"cluster_roles_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/roles.rbac.authorization.k8s.io"}) by (resource, type)`,
"cluster_memory_limit_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="limits.memory"}) by (resource, type)`,
"cluster_pvc_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="persistentvolumeclaims"}) by (resource, type)`,
"cluster_memory_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="requests.memory"}) by (resource, type)`,
"cluster_pvc_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/persistentvolumeclaims"}) by (resource, type)`,
"cluster_cronjobs_batch_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/cronjobs.batch"}) by (resource, type)`,
"cluster_ingresses_extensions_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/ingresses.extensions"}) by (resource, type)`,
"cluster_cpu_limit_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="limits.cpu"}) by (resource, type)`,
"cluster_storage_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="requests.storage"}) by (resource, type)`,
"cluster_deployment_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/deployments.apps"}) by (resource, type)`,
"cluster_pod_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/pods"}) by (resource, type)`,
"cluster_statefulset_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/statefulsets.apps"}) by (resource, type)`,
"cluster_daemonset_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/daemonsets.apps"}) by (resource, type)`,
"cluster_secret_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/secrets"}) by (resource, type)`,
"cluster_service_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="count/services"}) by (resource, type)`,
"cluster_cpu_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="requests.cpu"}) by (resource, type)`,
"cluster_service_loadbalancer_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", resource="services.loadbalancers"}) by (resource, type)`,
"cluster_cronjob_count": `sum(kube_cronjob_labels)`,
"cluster_pvc_count": `sum(kube_persistentvolumeclaim_info)`,
"cluster_daemonset_count": `sum(kube_daemonset_labels)`,
"cluster_deployment_count": `sum(kube_deployment_labels)`,
"cluster_endpoint_count": `sum(kube_endpoint_labels)`,
"cluster_hpa_count": `sum(kube_hpa_labels)`,
"cluster_job_count": `sum(kube_job_labels)`,
"cluster_statefulset_count": `sum(kube_statefulset_labels)`,
"cluster_replicaset_count": `count(kube_replicaset_created)`,
"cluster_service_count": `sum(kube_service_info)`,
"cluster_secret_count": `sum(kube_secret_info)`,
"cluster_pv_count": `sum(kube_persistentvolume_labels)`,
"cluster_load1": `sum(node_load1{job="node-exporter"}) / sum(node:node_num_cpu:sum)`,
"cluster_load5": `sum(node_load5{job="node-exporter"}) / sum(node:node_num_cpu:sum)`,
"cluster_load15": `sum(node_load15{job="node-exporter"}) / sum(node:node_num_cpu:sum)`,
//node
"node_cpu_utilisation": "node:node_cpu_utilisation:avg1m",
"node_cpu_total": "node:node_num_cpu:sum",
"node_memory_utilisation": "node:node_memory_utilisation:",
"node_memory_available": "node:node_memory_bytes_available:sum",
"node_memory_total": "node:node_memory_bytes_total:sum",
"node_memory_usage_wo_cache": "node:node_memory_bytes_total:sum$1 - node:node_memory_bytes_available:sum$1",
"node_net_utilisation": "node:node_net_utilisation:sum_irate",
"node_net_bytes_transmitted": "node:node_net_bytes_transmitted:sum_irate",
"node_net_bytes_received": "node:node_net_bytes_received:sum_irate",
"node_disk_read_iops": "node:data_volume_iops_reads:sum",
"node_disk_write_iops": "node:data_volume_iops_writes:sum",
"node_disk_read_throughput": "node:data_volume_throughput_bytes_read:sum",
"node_disk_write_throughput": "node:data_volume_throughput_bytes_written:sum",
"node_disk_size_capacity": `max(node_filesystem_size{device=~"/dev/.+", job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:$1) by (node)`,
"node_disk_size_available": `max(node_filesystem_avail{device=~"/dev/.+", job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:$1) by (node)`,
"node_disk_size_usage": `max((node_filesystem_size{device=~"/dev/.+", job="node-exporter"} - node_filesystem_avail{device=~"/dev/.+", job="node-exporter"}) * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:$1) by (node)`,
"node_disk_size_utilisation": `max(((node_filesystem_size{device=~"/dev/.+", job="node-exporter"} - node_filesystem_avail{device=~"/dev/.+", job="node-exporter"}) / node_filesystem_size{device=~"/dev/.+", job="node-exporter"}) * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:$1) by (node)`,
"node_disk_inode_total": `node:disk_inodes_total:$1`,
"node_disk_inode_usage": `node:disk_inodes_total:$1 - node:disk_inodes_free:$1`,
"node_disk_inode_utilisation": `(1 - (node:disk_inodes_free:$1 / node:disk_inodes_total:$1))`,
"node_pod_count": `sum by (node) ((kube_pod_status_scheduled{condition="true"} > 0) * on (pod) group_left(node) kube_pod_info$1 unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0))`,
"node_pod_quota": `sum(kube_node_status_capacity_pods$1) by (node) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0)`,
"node_pod_utilisation": `(sum(kube_pod_info$1) by (node) / sum(kube_node_status_capacity_pods$1) by (node)) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0)`,
"node_pod_running_count": `count(kube_pod_info$1 unless on (pod) (kube_pod_status_phase{phase=~"Failed|Pending|Unknown|Succeeded"} > 0)) by (node) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0)`,
"node_pod_succeeded_count": `count(kube_pod_info$1 unless on (pod) (kube_pod_status_phase{phase=~"Failed|Pending|Unknown|Running"} > 0)) by (node) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0)`,
"node_pod_abnormal_count": `count(kube_pod_info$1 unless on (pod) (kube_pod_status_phase{phase=~"Succeeded|Running"} > 0)) by (node) unless on (node) (kube_node_status_condition{condition="Ready",status=~"unknown|false"} > 0)`,
// without log node: unless on(node) kube_node_labels{label_role="log"}
"node_cpu_usage": `node:node_cpu_utilisation:avg1m$1 * node:node_num_cpu:sum$1`,
"node_load1": `sum by (node) (node_load1{job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:$1) / node:node_num_cpu:sum`,
"node_load5": `sum by (node) (node_load5{job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:$1) / node:node_num_cpu:sum`,
"node_load15": `sum by (node) (node_load15{job="node-exporter"} * on (namespace, pod) group_left(node) node_namespace_pod:kube_pod_info:$1) / node:node_num_cpu:sum`,
//namespace
"namespace_cpu_usage": `namespace:container_cpu_usage_seconds_total:sum_rate{namespace!="", namespace=~"$1"} * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_memory_usage": `namespace:container_memory_usage_bytes:sum{namespace!="", namespace=~"$1"} * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_memory_usage_wo_cache": `namespace:container_memory_usage_bytes_wo_cache:sum{namespace!="", namespace=~"$1"}* on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_net_bytes_transmitted": `sum by (namespace) (irate(container_network_transmit_bytes_total{namespace!="", namespace=~"$1", pod_name!="", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m]))* on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_net_bytes_received": `sum by (namespace) (irate(container_network_receive_bytes_total{namespace!="", namespace=~"$1", pod_name!="", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m])) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pod_count": `sum(kube_pod_status_phase{phase!~"Failed|Succeeded", namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pod_running_count": `sum(kube_pod_status_phase{phase="Running", namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pod_succeeded_count": `sum(kube_pod_status_phase{phase="Succeeded", namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pod_abnormal_count": `sum(kube_pod_status_phase{phase=~"Failed|Pending|Unknown", namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_configmap_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/configmaps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_jobs_batch_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/jobs.batch"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_roles_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/roles.rbac.authorization.k8s.io"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_memory_limit_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="limits.memory"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pvc_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="persistentvolumeclaims"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_memory_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="requests.memory"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pvc_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/persistentvolumeclaims"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_cronjobs_batch_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/cronjobs.batch"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_ingresses_extensions_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/ingresses.extensions"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_cpu_limit_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="limits.cpu"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_storage_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="requests.storage"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_deployment_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/deployments.apps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pod_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/pods"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_statefulset_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/statefulsets.apps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_daemonset_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/daemonsets.apps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_secret_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/secrets"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_service_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="count/services"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_cpu_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="requests.cpu"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_service_loadbalancer_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace=~"$1", resource="services.loadbalancers"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_configmap_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/configmaps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_jobs_batch_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/jobs.batch"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_roles_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/roles.rbac.authorization.k8s.io"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_memory_limit_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="limits.memory"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pvc_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="persistentvolumeclaims"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_memory_request_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="requests.memory"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pvc_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/persistentvolumeclaims"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_cronjobs_batch_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/cronjobs.batch"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_ingresses_extensions_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/ingresses.extensions"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_cpu_limit_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="limits.cpu"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_storage_request_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="requests.storage"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_deployment_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/deployments.apps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pod_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/pods"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_statefulset_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/statefulsets.apps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_daemonset_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/daemonsets.apps"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_secret_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/secrets"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_service_count_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="count/services"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_cpu_request_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="requests.cpu"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_service_loadbalancer_hard": `sum(kube_resourcequota{resourcequota!="quota", type="hard", namespace!="", namespace=~"$1", resource="services.loadbalancers"}) by (namespace, resource, type) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_cronjob_count": `sum(kube_cronjob_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_pvc_count": `sum(kube_persistentvolumeclaim_info{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_daemonset_count": `sum(kube_daemonset_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_deployment_count": `sum(kube_deployment_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_endpoint_count": `sum(kube_endpoint_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_hpa_count": `sum(kube_hpa_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_job_count": `sum(kube_job_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_statefulset_count": `sum(kube_statefulset_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_replicaset_count": `count(kube_replicaset_created{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_service_count": `sum(kube_service_info{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_secret_count": `sum(kube_secret_info{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
"namespace_ingresses_extensions_count": `sum(kube_ingress_labels{namespace!="", namespace=~"$1"}) by (namespace) * on (namespace) group_left(label_kubesphere_io_workspace)(kube_namespace_labels)`,
// pod
"pod_cpu_usage": `sum(irate(container_cpu_usage_seconds_total{job="kubelet", namespace="$1", pod_name!="", pod_name="$2", image!=""}[5m])) by (namespace, pod_name)`,
"pod_memory_usage": `sum(container_memory_usage_bytes{job="kubelet", namespace="$1", pod_name!="", pod_name="$2", image!=""}) by (namespace, pod_name)`,
"pod_memory_usage_wo_cache": `sum(container_memory_usage_bytes{job="kubelet", namespace="$1", pod_name!="", pod_name="$2", image!=""} - container_memory_cache{job="kubelet", namespace="$1", pod_name!="", pod_name="$2",image!=""}) by (namespace, pod_name)`,
"pod_net_bytes_transmitted": `sum by (namespace, pod_name) (irate(container_network_transmit_bytes_total{namespace="$1", pod_name!="", pod_name="$2", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m]))`,
"pod_net_bytes_received": `sum by (namespace, pod_name) (irate(container_network_receive_bytes_total{namespace="$1", pod_name!="", pod_name="$2", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m]))`,
"pod_cpu_usage_all": `sum(irate(container_cpu_usage_seconds_total{job="kubelet", namespace="$1", pod_name!="", pod_name=~"$2", image!=""}[5m])) by (namespace, pod_name)`,
"pod_memory_usage_all": `sum(container_memory_usage_bytes{job="kubelet", namespace="$1", pod_name!="", pod_name=~"$2", image!=""}) by (namespace, pod_name)`,
"pod_memory_usage_wo_cache_all": `sum(container_memory_usage_bytes{job="kubelet", namespace="$1", pod_name!="", pod_name=~"$2", image!=""} - container_memory_cache{job="kubelet", namespace="$1", pod_name!="", pod_name=~"$2", image!=""}) by (namespace, pod_name)`,
"pod_net_bytes_transmitted_all": `sum by (namespace, pod_name) (irate(container_network_transmit_bytes_total{namespace="$1", pod_name!="", pod_name=~"$2", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m]))`,
"pod_net_bytes_received_all": `sum by (namespace, pod_name) (irate(container_network_receive_bytes_total{namespace="$1", pod_name!="", pod_name=~"$2", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m]))`,
"pod_cpu_usage_node": `sum by (node, pod_name) (irate(container_cpu_usage_seconds_total{job="kubelet",pod_name!="", pod_name=~"$2", image!=""}[5m]) * on (namespace, pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$3"}, "pod_name", "", "pod", "_name"))`,
"pod_memory_usage_node": `sum by (node, pod_name) (container_memory_usage_bytes{job="kubelet",pod_name!="", pod_name=~"$2", image!=""} * on (namespace, pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$3"}, "pod_name", "", "pod", "_name"))`,
"pod_memory_usage_wo_cache_node": `sum by (node, pod_name) ((container_memory_usage_bytes{job="kubelet",pod_name!="", pod_name=~"$2", image!=""} - container_memory_cache{job="kubelet",pod_name!="", pod_name=~"$2", image!=""}) * on (namespace, pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$3"}, "pod_name", "", "pod", "_name"))`,
"pod_net_bytes_transmitted_node": `sum by (node, pod_name) (irate(container_network_transmit_bytes_total{pod_name!="", pod_name=~"$2", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m]) * on (pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$3"}, "pod_name", "", "pod", "_name"))`,
"pod_net_bytes_received_node": `sum by (node, pod_name) (irate(container_network_receive_bytes_total{pod_name!="", pod_name=~"$2", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m]) * on (pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$3"}, "pod_name", "", "pod", "_name"))`,
// workload
"workload_pod_cpu_usage": `label_join(sum(label_replace(label_replace(label_replace(label_join(label_join(label_replace(sum(irate(container_cpu_usage_seconds_total{job="kubelet", namespace="$2", pod_name!="", pod_name=~"$3", image!=""}[5m])) by (namespace, pod_name) * on (pod_name) group_left(owner_kind) label_join(label_replace(kube_pod_owner{namespace="$2", pod=~".*"}, "owner_kind", "POD", "owner_kind", "<none>"), "pod_name", "", "pod", "_name") , "postfix", "-POD", "owner_kind", "POD"), "pod_name", "", "pod_name", "postfix"), "dist", "-", "owner_kind", "pod_name"), "pod_name", "$1", "dist", "ReplicaSet-(.+)-(.+)"), "workload", "$1", "pod_name", "(.+)-(.+)"), "owner_kind", "Deployment", "owner_kind", "ReplicaSet.*")) by (namespace, workload, owner_kind), "workload", ":", "owner_kind", "workload")`,
"workload_pod_memory_usage": `label_join(sum(label_replace(label_replace(label_replace(label_join(label_join(label_replace(sum(container_memory_usage_bytes{job="kubelet", namespace="$2", pod_name!="", pod_name=~"$3", image!=""}) by (namespace, pod_name) * on (pod_name) group_left(owner_kind) label_join(label_replace(kube_pod_owner{namespace="$2", pod=~".*"}, "owner_kind", "POD", "owner_kind", "<none>"), "pod_name", "", "pod", "_name") , "postfix", "-POD", "owner_kind", "POD"), "pod_name", "", "pod_name", "postfix"), "dist", "-", "owner_kind", "pod_name"), "pod_name", "$1", "dist", "ReplicaSet-(.+)-(.+)"), "workload", "$1", "pod_name", "(.+)-(.+)"), "owner_kind", "Deployment", "owner_kind", "ReplicaSet.*")) by (namespace, workload, owner_kind), "workload", ":", "owner_kind", "workload")`,
"workload_pod_memory_usage_wo_cache": `label_join(sum(label_replace(label_replace(label_replace(label_join(label_join(label_replace(sum(container_memory_usage_bytes{job="kubelet", namespace="$2", pod_name!="", pod_name=~"$3", image!=""} - container_memory_cache{job="kubelet", namespace="$2", pod_name!="", pod_name=~"$3", image!=""}) by (namespace, pod_name) * on (pod_name) group_left(owner_kind) label_join(label_replace(kube_pod_owner{namespace="$2", pod=~".*"}, "owner_kind", "POD", "owner_kind", "<none>"), "pod_name", "", "pod", "_name") , "postfix", "-POD", "owner_kind", "POD"), "pod_name", "", "pod_name", "postfix"), "dist", "-", "owner_kind", "pod_name"), "pod_name", "$1", "dist", "ReplicaSet-(.+)-(.+)"), "workload", "$1", "pod_name", "(.+)-(.+)"), "owner_kind", "Deployment", "owner_kind", "ReplicaSet.*")) by (namespace, workload, owner_kind), "workload", ":", "owner_kind", "workload")`,
"workload_pod_net_bytes_transmitted": `label_join(sum(label_replace(label_replace(label_replace(label_join(label_join(label_replace(sum(irate(container_network_transmit_bytes_total{namespace="$2", pod_name!="", pod_name=~"$3", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m])) by (namespace, pod_name) * on (pod_name) group_left(owner_kind) label_join(label_replace(kube_pod_owner{namespace="$2", pod=~".*"}, "owner_kind", "POD", "owner_kind", "<none>"), "pod_name", "", "pod", "_name") , "postfix", "-POD", "owner_kind", "POD"), "pod_name", "", "pod_name", "postfix"), "dist", "-", "owner_kind", "pod_name"), "pod_name", "$1", "dist", "ReplicaSet-(.+)-(.+)"), "workload", "$1", "pod_name", "(.+)-(.+)"), "owner_kind", "Deployment", "owner_kind", "ReplicaSet.*")) by (namespace, workload, owner_kind), "workload", ":", "owner_kind", "workload")`,
"workload_pod_net_bytes_received": `label_join(sum(label_replace(label_replace(label_replace(label_join(label_join(label_replace(sum(irate(container_network_receive_bytes_total{namespace="$2", pod_name!="", pod_name=~"$3", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m])) by (namespace, pod_name) * on (pod_name) group_left(owner_kind) label_join(label_replace(kube_pod_owner{namespace="$2", pod=~".*"}, "owner_kind", "POD", "owner_kind", "<none>"), "pod_name", "", "pod", "_name") , "postfix", "-POD", "owner_kind", "POD"), "pod_name", "", "pod_name", "postfix"), "dist", "-", "owner_kind", "pod_name"), "pod_name", "$1", "dist", "ReplicaSet-(.+)-(.+)"), "workload", "$1", "pod_name", "(.+)-(.+)"), "owner_kind", "Deployment", "owner_kind", "ReplicaSet.*")) by (namespace, workload, owner_kind), "workload", ":", "owner_kind", "workload")`,
"workload_deployment_replica": `label_join(sum (label_join(label_replace(kube_deployment_spec_replicas{namespace="$2", deployment=~"$3"}, "owner_kind", "Deployment", "", ""), "workload", "", "deployment")) by (namespace, owner_kind, workload), "workload", ":", "owner_kind", "workload")`,
"workload_deployment_replica_available": `label_join(sum (label_join(label_replace(kube_deployment_status_replicas_available{namespace="$2", deployment=~"$3"}, "owner_kind", "Deployment", "", ""), "workload", "", "deployment")) by (namespace, owner_kind, workload), "workload", ":", "owner_kind", "workload")`,
"workload_statefulset_replica": `label_join(sum (label_join(label_replace(kube_statefulset_replicas{namespace="$2", statefulset=~"$3"}, "owner_kind", "StatefulSet", "", ""), "workload", "", "statefulset")) by (namespace, owner_kind, workload), "workload", ":", "owner_kind", "workload")`,
"workload_statefulset_replica_available": `label_join(sum (label_join(label_replace(kube_statefulset_status_replicas_current{namespace="$2", statefulset=~"$3"}, "owner_kind", "StatefulSet", "", ""), "workload", "", "statefulset")) by (namespace, owner_kind, workload), "workload", ":", "owner_kind", "workload")`,
"workload_daemonset_replica": `label_join(sum (label_join(label_replace(kube_daemonset_status_desired_number_scheduled{namespace="$2", daemonset=~"$3"}, "owner_kind", "DaemonSet", "", ""), "workload", "", "daemonset")) by (namespace, owner_kind, workload), "workload", ":", "owner_kind", "workload")`,
"workload_daemonset_replica_available": `label_join(sum (label_join(label_replace(kube_daemonset_status_number_available{namespace="$2", daemonset=~"$3"}, "owner_kind", "DaemonSet", "", ""), "workload", "", "daemonset")) by (namespace, owner_kind, workload), "workload", ":", "owner_kind", "workload")`,
// container
"container_cpu_usage": `sum(irate(container_cpu_usage_seconds_total{namespace="$1", pod_name="$2", container_name!="POD", container_name=~"$3"}[5m])) by (namespace, pod_name, container_name)`,
"container_memory_usage": `sum(container_memory_usage_bytes{namespace="$1", pod_name="$2", container_name!="POD", container_name=~"$3"}) by (namespace, pod_name, container_name)`,
"container_memory_usage_wo_cache": `container_memory_usage_bytes{namespace="$1", pod_name="$2", container_name!="POD", container_name=~"$3"} - ignoring(id, image, endpoint, instance, job, name, service) container_memory_cache{namespace="$1", pod_name="$2", container_name!="POD", container_name=~"$3"}`,
"container_net_bytes_transmitted": `sum(irate(container_network_transmit_bytes_total{job="kubelet", namespace="$1", pod_name="$2", container_name="POD", ` + ExcludedVirtualNetworkInterfaces + `}[5m])) by (namespace, pod_name, container_name)`,
"container_net_bytes_received": `sum(irate(container_network_receive_bytes_total{job="kubelet", namespace="$1", pod_name="$2", container_name="POD", ` + ExcludedVirtualNetworkInterfaces + `}[5m])) by (namespace, pod_name, container_name)`,
"container_cpu_usage_node": `sum by (node, pod_name, container_name) (irate(container_cpu_usage_seconds_total{job="kubelet", pod_name="$2", container_name!="POD", container_name!="", container_name=~"$3", image!=""}[5m]) * on (pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$1"}, "pod_name", "", "pod", "_name"))`,
"container_memory_usage_node": `sum by (node, pod_name, container_name) (container_memory_usage_bytes{job="kubelet", pod_name="$2", container_name!="POD", container_name!="", container_name=~"$3", image!=""} * on (pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$1"}, "pod_name", "", "pod", "_name"))`,
"container_memory_usage_wo_cache_node": `sum by (node, pod_name, container_name) ((container_memory_usage_bytes{job="kubelet", pod_name="$2", container_name!="POD", container_name!="", container_name=~"$3", image!=""} - container_memory_cache{job="kubelet", pod_name="$2", container_name!="POD", container_name!="", container_name=~"$3", image!=""}) * on (pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$1"}, "pod_name", "", "pod", "_name"))`,
"container_net_bytes_transmitted_node": `sum by (node, pod_name, container_name) (irate(container_network_transmit_bytes_total{job="kubelet", ` + ExcludedVirtualNetworkInterfaces + `, pod_name="$2", container_name="POD", container_name!="", image!=""}[5m]) * on (pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$1"}, "pod_name", "", "pod", "_name"))`,
"container_net_bytes_received_node": `sum by (node, pod_name, container_name) (irate(container_network_receive_bytes_total{job="kubelet", ` + ExcludedVirtualNetworkInterfaces + `, pod_name="$2", container_name="POD", container_name!="", image!=""}[5m]) * on (pod_name) group_left(node) label_join(node_namespace_pod:kube_pod_info:{node="$1"}, "pod_name", "", "pod", "_name"))`,
// workspace
"workspace_cpu_usage": `sum(namespace:container_cpu_usage_seconds_total:sum_rate{namespace!="", namespace$1})`,
"workspace_memory_usage": `sum(namespace:container_memory_usage_bytes:sum{namespace!="", namespace$1})`,
"workspace_memory_usage_wo_cache": `sum(namespace:container_memory_usage_bytes_wo_cache:sum{namespace!="", namespace$1})`,
"workspace_net_bytes_transmitted": `sum(sum by (namespace) (irate(container_network_transmit_bytes_total{namespace!="", namespace$1, pod_name!="", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m])))`,
"workspace_net_bytes_received": `sum(sum by (namespace) (irate(container_network_receive_bytes_total{namespace!="", namespace$1, pod_name!="", ` + ExcludedVirtualNetworkInterfaces + `, job="kubelet"}[5m])))`,
"workspace_pod_count": `sum(kube_pod_status_phase{phase!~"Failed|Succeeded", namespace!="", namespace$1})`,
"workspace_pod_running_count": `sum(kube_pod_status_phase{phase="Running", namespace!="", namespace$1})`,
"workspace_pod_succeeded_count": `sum(kube_pod_status_phase{phase="Succeeded", namespace!="", namespace$1})`,
"workspace_pod_abnormal_count": `sum(kube_pod_status_phase{phase=~"Failed|Pending|Unknown", namespace!="", namespace$1})`,
"workspace_configmap_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/configmaps"}) by (resource, type)`,
"workspace_jobs_batch_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/jobs.batch"}) by (resource, type)`,
"workspace_roles_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/roles.rbac.authorization.k8s.io"}) by (resource, type)`,
"workspace_memory_limit_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="limits.memory"}) by (resource, type)`,
"workspace_pvc_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="persistentvolumeclaims"}) by (resource, type)`,
"workspace_memory_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="requests.memory"}) by (resource, type)`,
"workspace_pvc_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/persistentvolumeclaims"}) by (resource, type)`,
"workspace_cronjobs_batch_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/cronjobs.batch"}) by (resource, type)`,
"workspace_ingresses_extensions_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/ingresses.extensions"}) by (resource, type)`,
"workspace_cpu_limit_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="limits.cpu"}) by (resource, type)`,
"workspace_storage_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="requests.storage"}) by (resource, type)`,
"workspace_deployment_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/deployments.apps"}) by (resource, type)`,
"workspace_pod_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/pods"}) by (resource, type)`,
"workspace_statefulset_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/statefulsets.apps"}) by (resource, type)`,
"workspace_daemonset_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/daemonsets.apps"}) by (resource, type)`,
"workspace_secret_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/secrets"}) by (resource, type)`,
"workspace_service_count_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="count/services"}) by (resource, type)`,
"workspace_cpu_request_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="requests.cpu"}) by (resource, type)`,
"workspace_service_loadbalancer_used": `sum(kube_resourcequota{resourcequota!="quota", type="used", namespace!="", namespace$1, resource="services.loadbalancers"}) by (resource, type)`,
"workspace_ingresses_extensions_count": `sum(kube_ingress_labels{namespace!="", namespace$1})`,
"workspace_cronjob_count": `sum(kube_cronjob_labels{namespace!="", namespace$1})`,
"workspace_pvc_count": `sum(kube_persistentvolumeclaim_info{namespace!="", namespace$1})`,
"workspace_daemonset_count": `sum(kube_daemonset_labels{namespace!="", namespace$1})`,
"workspace_deployment_count": `sum(kube_deployment_labels{namespace!="", namespace$1})`,
"workspace_endpoint_count": `sum(kube_endpoint_labels{namespace!="", namespace$1})`,
"workspace_hpa_count": `sum(kube_hpa_labels{namespace!="", namespace$1})`,
"workspace_job_count": `sum(kube_job_labels{namespace!="", namespace$1})`,
"workspace_statefulset_count": `sum(kube_statefulset_labels{namespace!="", namespace$1})`,
"workspace_replicaset_count": `count(kube_replicaset_created{namespace!="", namespace$1})`,
"workspace_service_count": `sum(kube_service_info{namespace!="", namespace$1})`,
"workspace_secret_count": `sum(kube_secret_info{namespace!="", namespace$1})`,
"workspace_all_project_count": `count(kube_namespace_annotations)`,
}
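The `$1`, `$2`, `$3` tokens in the rule templates above are positional placeholders for caller-supplied filters (a namespace regex, a pod name, a node, and so on). A minimal sketch of how such a template could be instantiated, assuming simple positional string substitution; `makeExpr` is a hypothetical helper for illustration, not the repository's actual substitution code:

```go
package main

import (
	"fmt"
	"strings"
)

// makeExpr fills positional $1, $2, ... placeholders in a rule template.
// Hypothetical helper; the real substitution logic elsewhere in the
// repository may differ in detail.
func makeExpr(tmpl string, params ...string) string {
	// replace higher-numbered placeholders first so "$1" never matches
	// the prefix of a later "$10"
	for i := len(params) - 1; i >= 0; i-- {
		tmpl = strings.Replace(tmpl, fmt.Sprintf("$%d", i+1), params[i], -1)
	}
	return tmpl
}

func main() {
	tmpl := `sum(kube_pod_status_phase{phase="Running", namespace=~"$1"})`
	fmt.Println(makeExpr(tmpl, "kube-system|default"))
	// sum(kube_pod_status_phase{phase="Running", namespace=~"kube-system|default"})
}
```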


@@ -0,0 +1,51 @@
package metrics

import (
	"net/url"
	"strings"

	"k8s.io/api/core/v1"

	"kubesphere.io/kubesphere/pkg/client"
)
func GetNamespacesWithMetrics(namespaces []*v1.Namespace) []*v1.Namespace {
	var nsNameList []string
	for i := range namespaces {
		nsNameList = append(nsNameList, namespaces[i].Name)
	}
	nsFilter := "^(" + strings.Join(nsNameList, "|") + ")$"
	var timeRelateParams = make(url.Values)

	params := client.MonitoringRequestParams{
		NsFilter:      nsFilter,
		Params:        timeRelateParams,
		QueryType:     client.DefaultQueryType,
		MetricsFilter: "namespace_cpu_usage|namespace_memory_usage_wo_cache|namespace_pod_count",
	}

	rawMetrics := MonitorAllMetrics(&params, MetricLevelNamespace)

	for _, result := range rawMetrics.Results {
		for _, data := range result.Data.Result {
			metricDescMap, ok := data["metric"].(map[string]interface{})
			if !ok {
				continue
			}
			ns, exist := metricDescMap["namespace"]
			if !exist {
				continue
			}
			timeAndValue, ok := data["value"].([]interface{})
			if !ok || len(timeAndValue) != 2 {
				continue
			}
			// guard the assertion so a malformed sample cannot panic
			value, ok := timeAndValue[1].(string)
			if !ok {
				continue
			}
			for i := 0; i < len(namespaces); i++ {
				if namespaces[i].Name == ns {
					if namespaces[i].Annotations == nil {
						namespaces[i].Annotations = make(map[string]string)
					}
					namespaces[i].Annotations[result.MetricName] = value
				}
			}
		}
	}
	return namespaces
}

pkg/models/metrics/nodes.go: 0 additions (Normal file → Executable file)

pkg/models/metrics/pods.go: 0 additions (Normal file → Executable file)

pkg/models/metrics/util.go: 298 additions (Normal file)

@@ -0,0 +1,298 @@
/*
Copyright 2018 The KubeSphere Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package metrics

import (
	"math"
	"runtime/debug"
	"sort"
	"strconv"
	"unicode"

	"github.com/golang/glog"
)

const (
	DefaultPageLimit = 5
	DefaultPage      = 1
)
// FormatedMetricDataWrapper adapts FormatedMetricData to sort.Interface
// using a caller-supplied comparison function.
type FormatedMetricDataWrapper struct {
	fmtMetricData FormatedMetricData
	by            func(p, q *map[string]interface{}) bool
}

func (wrapper FormatedMetricDataWrapper) Len() int {
	return len(wrapper.fmtMetricData.Result)
}

func (wrapper FormatedMetricDataWrapper) Less(i, j int) bool {
	return wrapper.by(&wrapper.fmtMetricData.Result[i], &wrapper.fmtMetricData.Result[j])
}

func (wrapper FormatedMetricDataWrapper) Swap(i, j int) {
	wrapper.fmtMetricData.Result[i], wrapper.fmtMetricData.Result[j] = wrapper.fmtMetricData.Result[j], wrapper.fmtMetricData.Result[i]
}
// Sort orders metrics ascending or descending by the given metric name.
// It returns the sorted metrics and the number of distinct resources found,
// or -1 when no sort metric is specified.
func Sort(sortMetricName string, sortType string, fmtLevelMetric *FormatedLevelMetric, resourceType string) (*FormatedLevelMetric, int) {
	defer func() {
		if err := recover(); err != nil {
			glog.Errorln(err)
			debug.PrintStack()
		}
	}()

	if sortMetricName == "" {
		return fmtLevelMetric, -1
	}

	// the default sort type is descending order
	if sortType == "" {
		sortType = ResultSortTypeDesc
	}
	asc := sortType == ResultSortTypeAsce

	var currentResourceMap = make(map[string]int)
	// indexMap stores the sorted index for each node/namespace/pod
	var indexMap = make(map[string]int)
	i := 0

	for _, metricItem := range fmtLevelMetric.Results {
		if metricItem.Data.ResultType != ResultTypeVector || metricItem.Status != MetricStatusSuccess {
			continue
		}
		if metricItem.MetricName == sortMetricName {
			sort.Sort(FormatedMetricDataWrapper{metricItem.Data, func(p, q *map[string]interface{}) bool {
				value1 := (*p)[ResultItemValue].([]interface{})
				value2 := (*q)[ResultItemValue].([]interface{})
				v1, _ := strconv.ParseFloat(value1[len(value1)-1].(string), 64)
				v2, _ := strconv.ParseFloat(value2[len(value2)-1].(string), 64)
				if v1 == v2 {
					// break ties by resource name so the order is deterministic
					resourceName1 := (*p)[ResultItemMetric].(map[string]interface{})[resourceType].(string)
					resourceName2 := (*q)[ResultItemMetric].(map[string]interface{})[resourceType].(string)
					if asc {
						return resourceName1 < resourceName2
					}
					return resourceName1 > resourceName2
				}
				if asc {
					return v1 < v2
				}
				return v1 > v2
			}})

			for _, r := range metricItem.Data.Result {
				// for some reason 'metric' may not contain the resourceType field,
				// e.g. {"metric":{},"value":[1541142931.731,"3"]}
				k, exist := r[ResultItemMetric].(map[string]interface{})[resourceType]
				if exist {
					// assert only after the existence check, so a missing
					// field cannot panic
					key := k.(string)
					if _, found := indexMap[key]; !found {
						indexMap[key] = i
						i++
					}
				}
			}
		}

		// iterate over all metrics to collect the full set of resource names
		for _, r := range metricItem.Data.Result {
			k, ok := r[ResultItemMetric].(map[string]interface{})[resourceType]
			if ok {
				currentResourceMap[k.(string)] = 1
			}
		}
	}

	// resources that never appeared in the sort metric are appended in name order
	var keys []string
	for k := range currentResourceMap {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, resource := range keys {
		if _, exist := indexMap[resource]; !exist {
			indexMap[resource] = i
			i++
		}
	}

	// reorder every other metric by the index computed above
	for i := 0; i < len(fmtLevelMetric.Results); i++ {
		re := fmtLevelMetric.Results[i]
		if re.Data.ResultType != ResultTypeVector || re.Status != MetricStatusSuccess {
			continue
		}
		sortedMetric := make([]map[string]interface{}, len(indexMap))
		for j := 0; j < len(re.Data.Result); j++ {
			r := re.Data.Result[j]
			k, exist := r[ResultItemMetric].(map[string]interface{})[resourceType]
			if exist {
				index, exist := indexMap[k.(string)]
				if exist {
					sortedMetric[index] = r
				}
			}
		}
		fmtLevelMetric.Results[i].Data.Result = sortedMetric
	}

	return fmtLevelMetric, len(indexMap)
}
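The wrapper-plus-closure pattern used by Sort can be reduced to a compact, self-contained sketch: adapt a slice of Prometheus-style result items to `sort.Interface` and order it descending by the sample value. The item shape mirrors the `{"metric": {...}, "value": [ts, "v"]}` maps above; names like `byLastValue` and `sortDesc` are illustrative, not from the repository:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// byLastValue adapts a slice of Prometheus-style result items to
// sort.Interface via a caller-supplied comparison, mirroring
// FormatedMetricDataWrapper.
type byLastValue struct {
	items []map[string]interface{}
	less  func(p, q map[string]interface{}) bool
}

func (b byLastValue) Len() int           { return len(b.items) }
func (b byLastValue) Less(i, j int) bool { return b.less(b.items[i], b.items[j]) }
func (b byLastValue) Swap(i, j int)      { b.items[i], b.items[j] = b.items[j], b.items[i] }

// lastValue parses the sample value, e.g. [1541142931.731, "3"] -> 3.
func lastValue(item map[string]interface{}) float64 {
	v := item["value"].([]interface{})
	f, _ := strconv.ParseFloat(v[len(v)-1].(string), 64)
	return f
}

// sortDesc sorts result items descending by value, the default sort type.
func sortDesc(items []map[string]interface{}) {
	sort.Sort(byLastValue{items, func(p, q map[string]interface{}) bool {
		return lastValue(p) > lastValue(q)
	}})
}

func main() {
	items := []map[string]interface{}{
		{"metric": map[string]interface{}{"node": "node-b"}, "value": []interface{}{1541142931.731, "3"}},
		{"metric": map[string]interface{}{"node": "node-a"}, "value": []interface{}{1541142931.731, "7"}},
	}
	sortDesc(items)
	fmt.Println(items[0]["metric"].(map[string]interface{})["node"]) // node-a
}
```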
// Page slices each vector result down to the requested page.
func Page(pageNum string, limitNum string, fmtLevelMetric *FormatedLevelMetric, maxLength int) interface{} {
	if maxLength <= 0 {
		return fmtLevelMetric
	}

	// matrix results cannot be paged
	for _, metricItem := range fmtLevelMetric.Results {
		if metricItem.Data.ResultType != ResultTypeVector {
			return fmtLevelMetric
		}
	}

	var page = DefaultPage
	if pageNum != "" {
		p, err := strconv.Atoi(pageNum)
		if err != nil {
			glog.Errorln(err)
		} else if p > 0 {
			page = p
		}
	} else {
		// no page parameter means no paging at all
		return fmtLevelMetric
	}

	var limit = DefaultPageLimit
	if limitNum != "" {
		l, err := strconv.Atoi(limitNum)
		if err != nil {
			glog.Errorln(err)
		} else if l > 0 {
			limit = l
		}
	}

	// page i covers indices [(page-1)*limit, page*limit-1]
	start := (page - 1) * limit

	for i := 0; i < len(fmtLevelMetric.Results); i++ {
		// only page results whose type is `vector` and whose status is `success`
		if fmtLevelMetric.Results[i].Data.ResultType != ResultTypeVector || fmtLevelMetric.Results[i].Status != MetricStatusSuccess {
			continue
		}
		resultLen := len(fmtLevelMetric.Results[i].Data.Result)
		if start >= resultLen {
			fmtLevelMetric.Results[i].Data.Result = nil
			continue
		}
		// clamp per result, so a short result does not shrink the page
		// for the results that follow it
		end := page*limit - 1
		if end >= resultLen {
			end = resultLen - 1
		}
		fmtLevelMetric.Results[i].Data.Result = fmtLevelMetric.Results[i].Data.Result[start : end+1]
	}

	allPage := int(math.Ceil(float64(maxLength) / float64(limit)))

	return &PagedFormatedLevelMetric{
		Message:     "paged",
		TotalPage:   allPage,
		TotalItem:   maxLength,
		CurrentPage: page,
		Metric:      *fmtLevelMetric,
	}
}
// ReformatJson unmarshals a raw Prometheus response, fills in the metric name
// and strips unwanted labels. Note: this may be time consuming.
func ReformatJson(metric string, metricsName string, needDelParams ...string) *FormatedMetric {
	var formatMetric FormatedMetric
	err := jsonIter.Unmarshal([]byte(metric), &formatMetric)
	if err != nil {
		glog.Errorln("Unmarshal metric json failed", err.Error(), metric)
	}
	if formatMetric.MetricName == "" && metricsName != "" {
		formatMetric.MetricName = metricsName
	}
	// metrics were retrieved successfully
	if formatMetric.Status == MetricStatusSuccess {
		for _, res := range formatMetric.Data.Result {
			metric, exist := res[ResultItemMetric]
			metricMap, sure := metric.(map[string]interface{})
			if exist && sure {
				delete(metricMap, "__name__")
				for _, p := range needDelParams {
					delete(metricMap, p)
				}
			}
		}
	}
	return &formatMetric
}
// ReformatNodeStatusField capitalizes the node status value, e.g. "true" -> "True".
func ReformatNodeStatusField(nodeMetric *FormatedMetric) *FormatedMetric {
	metricCount := len(nodeMetric.Data.Result)
	for i := 0; i < metricCount; i++ {
		metric, exist := nodeMetric.Data.Result[i][ResultItemMetric]
		if exist {
			status, exist := metric.(map[string]interface{})[MetricStatus]
			if exist {
				metric.(map[string]interface{})[MetricStatus] = UpperFirstLetter(status.(string))
			}
		}
	}
	return nodeMetric
}

// UpperFirstLetter upper-cases the first rune of str. Slicing by the first
// rune's byte width (rather than a fixed 1) keeps multi-byte first
// characters intact.
func UpperFirstLetter(str string) string {
	for _, ch := range str {
		return string(unicode.ToUpper(ch)) + str[len(string(ch)):]
	}
	return ""
}
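First-letter capitalization is the kind of string operation where byte slicing and rune slicing diverge. A standalone sketch of the rune-safe version of this idea, kept separate from the code above; `capitalize` is an illustrative name, not the repository's function:

```go
package main

import (
	"fmt"
	"unicode"
	"unicode/utf8"
)

// capitalize upper-cases the first rune of s. Decoding the rune and slicing
// by its byte width keeps multi-byte first characters (e.g. "é") intact,
// where a fixed s[1:] would split the rune's UTF-8 encoding.
func capitalize(s string) string {
	if s == "" {
		return ""
	}
	r, size := utf8.DecodeRuneInString(s)
	return string(unicode.ToUpper(r)) + s[size:]
}

func main() {
	fmt.Println(capitalize("true"))   // True
	fmt.Println(capitalize("éclair")) // Éclair
}
```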


@@ -22,6 +22,8 @@ import (
"k8s.io/apimachinery/pkg/api/resource"
metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"fmt"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/models/controllers"
)
@@ -38,14 +40,17 @@ const (
persistentvolumeclaimsKey = "persistentvolumeclaims"
storageClassesKey = "count/storageClass"
namespaceKey = "count/namespace"
jobsKey = "count/jobs.batch"
cronJobsKey = "count/cronjobs.batch"
)
 var resourceMap = map[string]string{daemonsetsKey: controllers.Daemonsets, deploymentsKey: controllers.Deployments,
 	ingressKey: controllers.Ingresses, rolesKey: controllers.Roles, servicesKey: controllers.Services,
 	statefulsetsKey: controllers.Statefulsets, persistentvolumeclaimsKey: controllers.PersistentVolumeClaim, podsKey: controllers.Pods,
-	namespaceKey: controllers.Namespaces, storageClassesKey: controllers.StorageClasses, clusterRolesKey: controllers.ClusterRoles}
+	namespaceKey: controllers.Namespaces, storageClassesKey: controllers.StorageClasses, clusterRolesKey: controllers.ClusterRoles,
+	jobsKey: controllers.Jobs, cronJobsKey: controllers.Cronjobs}
-type resourceQuota struct {
+type ResourceQuota struct {
 	NameSpace string                 `json:"namespace"`
 	Data      v1.ResourceQuotaStatus `json:"data"`
 }
@@ -55,10 +60,15 @@ func getUsage(namespace, resource string) int {
 	if err != nil {
 		return 0
 	}
-	return ctl.Count(namespace)
+	if len(namespace) == 0 {
+		return ctl.CountWithConditions("")
+	}
+
+	return ctl.CountWithConditions(fmt.Sprintf("namespace = '%s' ", namespace))
 }
-func GetClusterQuota() (*resourceQuota, error) {
+func GetClusterQuota() (*ResourceQuota, error) {
quota := v1.ResourceQuotaStatus{Hard: make(v1.ResourceList), Used: make(v1.ResourceList)}
for k, v := range resourceMap {
@@ -68,11 +78,11 @@ func GetClusterQuota() (*resourceQuota, error) {
quota.Used[v1.ResourceName(k)] = quantity
}
-	return &resourceQuota{NameSpace: "\"\"", Data: quota}, nil
+	return &ResourceQuota{NameSpace: "\"\"", Data: quota}, nil
}
-func GetNamespaceQuota(namespace string) (*resourceQuota, error) {
+func GetNamespaceQuota(namespace string) (*ResourceQuota, error) {
quota, err := getNamespaceResourceQuota(namespace)
if err != nil {
glog.Error(err)
@@ -95,7 +105,7 @@ func GetNamespaceQuota(namespace string) (*resourceQuota, error) {
}
}
return &resourceQuota{NameSpace: namespace, Data: *quota}, nil
return &ResourceQuota{NameSpace: namespace, Data: *quota}, nil
}
func updateNamespaceQuota(tmpResourceList, resourceList v1.ResourceList) {


@@ -1,3 +1,19 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package models
import (
@@ -6,8 +22,15 @@ import (
"strconv"
"strings"
"kubesphere.io/kubesphere/pkg/client"
"github.com/golang/glog"
"kubesphere.io/kubesphere/pkg/models/controllers"
"kubesphere.io/kubesphere/pkg/options"
)
const (
limit = "limit"
page = "page"
)
type ResourceList struct {
@@ -17,53 +40,43 @@ type ResourceList struct {
Items interface{} `json:"items,omitempty"`
}
type searchConditions struct {
match map[string]string
fuzzy map[string]string
matchOr map[string]string
fuzzyOr map[string]string
}
func getController(resource string) (controllers.Controller, error) {
var ctl controllers.Controller
attr := controllers.CommonAttribute{DB: client.NewDBClient()}
switch resource {
case controllers.Deployments:
ctl = &controllers.DeploymentCtl{attr}
case controllers.Statefulsets:
ctl = &controllers.StatefulsetCtl{attr}
case controllers.Daemonsets:
ctl = &controllers.DaemonsetCtl{attr}
case controllers.Ingresses:
ctl = &controllers.IngressCtl{attr}
case controllers.PersistentVolumeClaim:
ctl = &controllers.PvcCtl{attr}
case controllers.Roles:
ctl = &controllers.RoleCtl{attr}
case controllers.ClusterRoles:
ctl = &controllers.ClusterRoleCtl{attr}
case controllers.Services:
ctl = &controllers.ServiceCtl{attr}
case controllers.Pods:
ctl = &controllers.PodCtl{attr}
case controllers.Namespaces:
ctl = &controllers.NamespaceCtl{attr}
case controllers.StorageClasses:
ctl = &controllers.StorageClassCtl{attr}
case controllers.Deployments, controllers.Statefulsets, controllers.Daemonsets, controllers.Ingresses,
controllers.PersistentVolumeClaim, controllers.Roles, controllers.ClusterRoles, controllers.Services,
controllers.Pods, controllers.Namespaces, controllers.StorageClasses, controllers.Jobs, controllers.Cronjobs,
controllers.Nodes, controllers.Secrets, controllers.ConfigMaps:
return controllers.ResourceControllers.Controllers[resource], nil
default:
return nil, errors.New("invalid resource type")
return nil, fmt.Errorf("invalid resource Name '%s'", resource)
}
return ctl, nil
return nil, nil
}
func getConditions(str string) (map[string]string, map[string]string, error) {
func getConditions(str string) (*searchConditions, map[string]string, error) {
match := make(map[string]string)
fuzzy := make(map[string]string)
matchOr := make(map[string]string)
fuzzyOr := make(map[string]string)
orderField := make(map[string]string)
if len(str) == 0 {
return nil, nil, nil
}
list := strings.Split(str, ",")
for _, item := range list {
if strings.Count(item, "=") >= 2 {
return nil, nil, errors.New("invalid condition input, invalid character \"=\"")
}
if strings.Count(item, "~") >= 2 {
return nil, nil, errors.New("invalid condition input, invalid character \"~\"")
conditions := strings.Split(str, ",")
for _, item := range conditions {
if strings.Count(item, "=") >= 2 || strings.Count(item, "~") >= 2 {
return nil, nil, errors.New("invalid condition input")
}
if strings.Count(item, "=") == 1 {
@@ -71,7 +84,17 @@ func getConditions(str string) (map[string]string, map[string]string, error) {
if len(kvs) < 2 || len(kvs[1]) == 0 {
return nil, nil, errors.New("invalid condition input")
}
match[kvs[0]] = kvs[1]
if !strings.Contains(kvs[0], "|") {
match[kvs[0]] = kvs[1]
} else {
multiFields := strings.Split(kvs[0], "|")
for _, field := range multiFields {
if len(field) > 0 {
matchOr[field] = kvs[1]
}
}
}
continue
}
@@ -80,22 +103,41 @@ func getConditions(str string) (map[string]string, map[string]string, error) {
if len(kvs) < 2 || len(kvs[1]) == 0 {
return nil, nil, errors.New("invalid condition input")
}
fuzzy[kvs[0]] = kvs[1]
if !strings.Contains(kvs[0], "|") {
fuzzy[kvs[0]] = kvs[1]
} else {
multiFields := strings.Split(kvs[0], "|")
if len(multiFields) > 1 && len(multiFields[1]) > 0 {
orderField[multiFields[0]] = kvs[1]
}
for _, field := range multiFields {
if len(field) > 0 {
fuzzyOr[field] = kvs[1]
}
}
}
continue
}
return nil, nil, errors.New("invalid condition input")
}
return match, fuzzy, nil
return &searchConditions{match: match, fuzzyOr: fuzzyOr, matchOr: matchOr, fuzzy: fuzzy}, orderField, nil
}
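`getConditions` splits the query string on commas and classifies each item: `=` is an exact match, `~` a fuzzy match, and a `|` in the key fans the value out to several fields that will later be OR'd together. A simplified, self-contained sketch of that classification (error handling omitted; field names in the example are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseConditions is a condensed sketch of getConditions: "k=v" is an
// exact match, "k~v" a fuzzy match, and "a|b=v" stores v under both a
// and b in the OR map.
func parseConditions(str string) (match, fuzzy, matchOr map[string]string) {
	match, fuzzy, matchOr = map[string]string{}, map[string]string{}, map[string]string{}
	for _, item := range strings.Split(str, ",") {
		if kvs := strings.SplitN(item, "=", 2); len(kvs) == 2 {
			if fields := strings.Split(kvs[0], "|"); len(fields) > 1 {
				for _, f := range fields {
					matchOr[f] = kvs[1]
				}
			} else {
				match[kvs[0]] = kvs[1]
			}
			continue
		}
		if kvs := strings.SplitN(item, "~", 2); len(kvs) == 2 {
			fuzzy[kvs[0]] = kvs[1]
		}
	}
	return
}

func main() {
	m, f, or := parseConditions("status=Running,name~web,name|displayName=demo")
	fmt.Println(m["status"], f["name"], or["displayName"]) // Running web demo
}
```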
func getPaging(str string) (map[string]int, error) {
paging := make(map[string]int)
if len(str) == 0 {
return paging, nil
func getPaging(resourceName, pagingStr string) (*controllers.Paging, error) {
defaultPaging := &controllers.Paging{Limit: 10, Offset: 0, Page: 1}
paging := controllers.Paging{}
if resourceName == controllers.Namespaces {
defaultPaging = nil
}
list := strings.Split(str, ",")
if len(pagingStr) == 0 {
return defaultPaging, nil
}
list := strings.Split(pagingStr, ",")
for _, item := range list {
kvs := strings.Split(item, "=")
if len(kvs) < 2 {
@@ -103,45 +145,90 @@ func getPaging(str string) (map[string]int, error) {
}
value, err := strconv.Atoi(kvs[1])
if err != nil {
return nil, err
if err != nil || value <= 0 {
return nil, errors.New("invalid Paging input")
}
paging[kvs[0]] = value
if kvs[0] == limit {
paging.Limit = value
}
if kvs[0] == page {
paging.Page = value
}
}
return paging, nil
if paging.Limit > 0 && paging.Page > 0 {
paging.Offset = (paging.Page - 1) * paging.Limit
return &paging, nil
}
return defaultPaging, nil
}
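`getPaging` parses a string like `limit=10,page=2`, rejects non-positive values, and derives the offset as `(page-1)*limit`. A sketch of that arithmetic under the same defaults (limit 10, page 1); the struct here stands in for `controllers.Paging`:

```go
package main

import (
	"errors"
	"strconv"
	"strings"
)

type paging struct{ Limit, Offset, Page int }

// parsePaging mirrors getPaging: parse "limit=N,page=M", reject
// non-positive values, and derive Offset = (Page-1)*Limit.
func parsePaging(s string) (paging, error) {
	p := paging{Limit: 10, Page: 1} // defaults, as in getPaging
	for _, item := range strings.Split(s, ",") {
		kvs := strings.SplitN(item, "=", 2)
		if len(kvs) != 2 {
			continue
		}
		v, err := strconv.Atoi(kvs[1])
		if err != nil || v <= 0 {
			return paging{}, errors.New("invalid paging input")
		}
		switch kvs[0] {
		case "limit":
			p.Limit = v
		case "page":
			p.Page = v
		}
	}
	p.Offset = (p.Page - 1) * p.Limit
	return p, nil
}

func main() {
	p, _ := parsePaging("limit=10,page=3")
	println(p.Limit, p.Offset) // 10 20
}
```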
func ListResource(resourceName, conditonSrt, pagingStr string) (*ResourceList, error) {
match, fuzzy, err := getConditions(conditonSrt)
func generateOrder(orderField map[string]string, order string) string {
if len(orderField) == 0 {
return order
}
var str string
for k, v := range orderField {
if len(str) > 0 {
str = fmt.Sprintf("%s, (%s like '%%%s%%')", str, k, v)
} else {
str = fmt.Sprintf("(%s like '%%%s%%')", k, v)
}
}
if len(order) == 0 {
return fmt.Sprintf("%s desc", str)
} else {
return fmt.Sprintf("%s, %s", str, order)
}
}
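`generateOrder` builds MySQL boolean expressions for sorting: `(field like '%value%')` evaluates to 1 for matching rows, so ordering on it descending floats search hits to the top, with any caller-supplied order appended as a tiebreaker. A single-field sketch of that clause construction (field and order names are illustrative):

```go
package main

import "fmt"

// orderClause sketches generateOrder for one field: rows matching the
// search term sort first (the LIKE predicate is 1 for matches); the
// caller's own order string, if any, applies as a tiebreaker.
func orderClause(field, value, order string) string {
	expr := fmt.Sprintf("(%s like '%%%s%%')", field, value)
	if order == "" {
		return expr + " desc"
	}
	return fmt.Sprintf("%s, %s", expr, order)
}

func main() {
	fmt.Println(orderClause("name", "web", "createTime desc"))
	// (name like '%web%'), createTime desc
}
```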
func ListResource(resourceName, conditonSrt, pagingStr, order string) (*ResourceList, error) {
conditions, OrderFields, err := getConditions(conditonSrt)
if err != nil {
return nil, err
}
pagingMap, err := getPaging(pagingStr)
order = generateOrder(OrderFields, order)
conditionStr := generateConditionStr(conditions)
paging, err := getPaging(resourceName, pagingStr)
if err != nil {
return nil, err
}
conditionStr, paging := generateConditionAndPaging(match, fuzzy, pagingMap)
ctl, err := getController(resourceName)
if err != nil {
return nil, err
}
total, items, err := ctl.ListWithConditions(conditionStr, paging)
total, items, err := ctl.ListWithConditions(conditionStr, paging, order)
if err != nil {
return nil, err
}
return &ResourceList{Total: total, Items: items, Page: pagingMap["page"], Limit: pagingMap["limit"]}, nil
if paging != nil {
return &ResourceList{Total: total, Items: items, Page: paging.Page, Limit: paging.Limit}, nil
} else {
return &ResourceList{Total: total, Items: items}, nil
}
}
func generateConditionAndPaging(match map[string]string, fuzzy map[string]string, paging map[string]int) (string, *controllers.Paging) {
func generateConditionStr(conditions *searchConditions) string {
shouldUseAnd := false
shouldUseBrackets := false
conditionStr := ""
for k, v := range match {
if conditions == nil {
return conditionStr
}
for k, v := range conditions.match {
if len(conditionStr) == 0 {
conditionStr = fmt.Sprintf("%s = \"%s\" ", k, v)
} else {
@@ -149,7 +236,7 @@ func generateConditionAndPaging(match map[string]string, fuzzy map[string]string
}
}
for k, v := range fuzzy {
for k, v := range conditions.fuzzy {
if len(conditionStr) == 0 {
conditionStr = fmt.Sprintf("%s like '%%%s%%' ", k, v)
} else {
@@ -157,12 +244,43 @@ func generateConditionAndPaging(match map[string]string, fuzzy map[string]string
}
}
if paging["limit"] > 0 && paging["page"] >= 0 {
offset := (paging["page"] - 1) * paging["limit"]
return conditionStr, &controllers.Paging{Limit: paging["limit"], Offset: offset}
if len(conditionStr) > 0 {
shouldUseAnd = true
}
return conditionStr, nil
for k, v := range conditions.matchOr {
if len(conditionStr) == 0 {
conditionStr = fmt.Sprintf("%s = \"%s\" ", k, v)
} else {
if shouldUseAnd {
conditionStr = fmt.Sprintf("%s And (%s = \"%s\" ", conditionStr, k, v)
shouldUseBrackets = true
shouldUseAnd = false
} else {
conditionStr = fmt.Sprintf("%s OR %s = \"%s\" ", conditionStr, k, v)
}
}
}
for k, v := range conditions.fuzzyOr {
if len(conditionStr) == 0 {
conditionStr = fmt.Sprintf("%s like '%%%s%%' ", k, v)
} else {
if shouldUseAnd {
conditionStr = fmt.Sprintf("%s And (%s like '%%%s%%' ", conditionStr, k, v)
shouldUseAnd = false
shouldUseBrackets = true
} else {
conditionStr = fmt.Sprintf("%s OR %s like '%%%s%%' ", conditionStr, k, v)
}
}
}
if shouldUseBrackets {
conditionStr = fmt.Sprintf("%s )", conditionStr)
}
return conditionStr
}
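`generateConditionStr` joins the required (exact and fuzzy) terms with AND, then wraps all the OR terms in a single bracketed group appended with AND, yielding clauses like `namespace = "demo" AND (name like '%web%' OR displayName like '%web%')`. A condensed sketch of that bracket bookkeeping, using deterministic slices instead of maps:

```go
package main

import (
	"fmt"
	"strings"
)

// whereClause sketches generateConditionStr: AND-joined required terms,
// plus one bracketed OR group when any or-terms exist.
func whereClause(and, or []string) string {
	str := strings.Join(and, " AND ")
	if len(or) == 0 {
		return str
	}
	group := strings.Join(or, " OR ")
	if str == "" {
		return group
	}
	return fmt.Sprintf("%s AND (%s)", str, group)
}

func main() {
	fmt.Println(whereClause(
		[]string{`namespace = "demo"`},
		[]string{`name like '%web%'`, `displayName like '%web%'`},
	))
	// namespace = "demo" AND (name like '%web%' OR displayName like '%web%')
}
```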
type workLoadStatus struct {
@@ -176,14 +294,14 @@ func GetNamespacesResourceStatus(namespace string) (*workLoadStatus, error) {
var status *ResourceList
var err error
for _, resource := range []string{controllers.Deployments, controllers.Statefulsets, controllers.Daemonsets, controllers.PersistentVolumeClaim} {
resourceStatus := controllers.Updating
notReadyStatus := controllers.Updating
if resource == controllers.PersistentVolumeClaim {
resourceStatus = controllers.PvcPending
notReadyStatus = controllers.PvcPending
}
if len(namespace) > 0 {
status, err = ListResource(resource, fmt.Sprintf("status=%s,namespace=%s", resourceStatus, namespace), "")
status, err = ListResource(resource, fmt.Sprintf("status=%s,namespace=%s", notReadyStatus, namespace), "", "")
} else {
status, err = ListResource(resource, fmt.Sprintf("status=%s", resourceStatus), "")
status, err = ListResource(resource, fmt.Sprintf("status=%s", notReadyStatus), "", "")
}
if err != nil {
@@ -191,9 +309,7 @@ func GetNamespacesResourceStatus(namespace string) (*workLoadStatus, error) {
}
count := status.Total
//items := status.Items
res.Count[resource] = count
//res.Items[resource] = items
}
return &res, nil
@@ -203,3 +319,36 @@ func GetClusterResourceStatus() (*workLoadStatus, error) {
return GetNamespacesResourceStatus("")
}
func GetApplication(clusterId string) (interface{}, error) {
ctl := &controllers.ApplicationCtl{OpenpitrixAddr: options.ServerOptions.GetOpAddress()}
return ctl.GetApp(clusterId)
}
func ListApplication(runtimeId, conditionStr, pagingStr string) (*ResourceList, error) {
paging, err := getPaging(controllers.Applications, pagingStr)
if err != nil {
return nil, err
}
conditions, _, err := getConditions(conditionStr)
if err != nil {
glog.Error(err)
return nil, err
}
if conditions == nil {
conditions = &searchConditions{}
}
ctl := &controllers.ApplicationCtl{OpenpitrixAddr: options.ServerOptions.GetOpAddress()}
total, items, err := ctl.ListApplication(runtimeId, conditions.match, conditions.fuzzy, paging)
if err != nil {
glog.Errorf("get application list failed, reason: %s", err)
return nil, err
}
return &ResourceList{Total: total, Items: items, Page: paging.Page, Limit: paging.Limit}, nil
}

112
pkg/models/revisions.go Normal file
View File

@@ -0,0 +1,112 @@
/*
Copyright 2018 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package models
import (
"fmt"
"strconv"
"github.com/golang/glog"
"k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
appsV1 "k8s.io/client-go/listers/apps/v1"
"kubesphere.io/kubesphere/pkg/models/controllers"
)
func GetDeployRevision(namespace, name, revision string) (*v1.ReplicaSet, error) {
deployLister := controllers.ResourceControllers.Controllers[controllers.Deployments].Lister().(appsV1.DeploymentLister)
deploy, err := deployLister.Deployments(namespace).Get(name)
if err != nil {
glog.Errorf("get deployment %s failed, reason: %s", name, err)
return nil, err
}
labelMap := deploy.Spec.Template.Labels
labelSelector := labels.Set(labelMap).AsSelector()
rsLister := controllers.ResourceControllers.Controllers[controllers.Replicasets].Lister().(appsV1.ReplicaSetLister)
rsList, err := rsLister.ReplicaSets(namespace).List(labelSelector)
if err != nil {
return nil, err
}
for _, rs := range rsList {
if rs.Annotations["deployment.kubernetes.io/revision"] == revision {
return rs, nil
}
}
return nil, errors.NewNotFound(v1.Resource("deployment revision"), fmt.Sprintf("%s#%s", name, revision))
}
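`GetDeployRevision` selects the ReplicaSet whose `deployment.kubernetes.io/revision` annotation matches the requested revision. The matching step in isolation, with plain structs standing in for the client-go lister results (names here are hypothetical):

```go
package main

import "fmt"

type replicaSet struct {
	Name        string
	Annotations map[string]string
}

const revisionAnnotation = "deployment.kubernetes.io/revision"

// findByRevision mirrors the loop in GetDeployRevision: return the
// ReplicaSet whose revision annotation equals the requested revision.
func findByRevision(rsList []replicaSet, revision string) (*replicaSet, error) {
	for i := range rsList {
		if rsList[i].Annotations[revisionAnnotation] == revision {
			return &rsList[i], nil
		}
	}
	return nil, fmt.Errorf("revision %s not found", revision)
}

func main() {
	rs := []replicaSet{
		{Name: "web-7d4b9c", Annotations: map[string]string{revisionAnnotation: "1"}},
		{Name: "web-66fd8a", Annotations: map[string]string{revisionAnnotation: "2"}},
	}
	found, _ := findByRevision(rs, "2")
	fmt.Println(found.Name) // web-66fd8a
}
```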
func GetDaemonSetRevision(namespace, name, revision string) (*v1.ControllerRevision, error) {
revisionInt, err := strconv.Atoi(revision)
if err != nil {
return nil, err
}
dsLister := controllers.ResourceControllers.Controllers[controllers.Daemonsets].Lister().(appsV1.DaemonSetLister)
ds, err := dsLister.DaemonSets(namespace).Get(name)
if err != nil {
glog.Errorf("get Daemonset %s failed, reason: %s", name, err)
return nil, err
}
labels := ds.Spec.Template.Labels
return getControllerRevision(namespace, name, labels, revisionInt)
}
func GetStatefulSetRevision(namespace, name, revision string) (*v1.ControllerRevision, error) {
revisionInt, err := strconv.Atoi(revision)
if err != nil {
return nil, err
}
stLister := controllers.ResourceControllers.Controllers[controllers.Statefulsets].Lister().(appsV1.StatefulSetLister)
st, err := stLister.StatefulSets(namespace).Get(name)
if err != nil {
glog.Errorf("get Statefulset %s failed, reason: %s", name, err)
return nil, err
}
labels := st.Spec.Template.Labels
return getControllerRevision(namespace, name, labels, revisionInt)
}
func getControllerRevision(namespace, name string, labelMap map[string]string, revision int) (*v1.ControllerRevision, error) {
labelSelector := labels.Set(labelMap).AsSelector()
revisionLister := controllers.ResourceControllers.Controllers[controllers.ControllerRevisions].Lister().(appsV1.ControllerRevisionLister)
revisions, err := revisionLister.ControllerRevisions(namespace).List(labelSelector)
if err != nil {
return nil, err
}
for _, controllerRevision := range revisions {
if controllerRevision.Revision == int64(revision) {
return controllerRevision, nil
}
}
return nil, errors.NewNotFound(v1.Resource("revision"), fmt.Sprintf("%s#%s", name, revision))
}


@@ -28,6 +28,7 @@ import (
"k8s.io/api/rbac/v1"
"errors"
"strings"
"kubesphere.io/kubesphere/pkg/client"
"kubesphere.io/kubesphere/pkg/constants"
@@ -125,6 +126,9 @@ func LoadYamls() ([]string, error) {
}
for _, file := range files {
if file.IsDir() || !strings.HasSuffix(file.Name(), ".yaml") {
continue
}
content, err := ioutil.ReadFile(constants.IngressControllerFolder + "/" + file.Name())
if err != nil {


@@ -0,0 +1,46 @@
package workspaces
import "time"
type Workspace struct {
Group `json:",inline"`
Admin string `json:"admin,omitempty"`
Namespaces []string `json:"namespaces"`
DevopsProjects []string `json:"devops_projects"`
}
type UserInvite struct {
Username string `json:"username"`
Role string `json:"role"`
}
type Group struct {
Path string `json:"path"`
Name string `json:"name"`
Gid string `json:"gid"`
Members []string `json:"members"`
Logo string `json:"logo"`
Creator string `json:"creator"`
CreateTime string `json:"create_time"`
ChildGroups []string `json:"child_groups,omitempty"`
Description string `json:"description"`
}
func (g Group) GetCreateTime() (time.Time, error) {
return time.Parse("2006-01-02T15:04:05Z", g.CreateTime)
}
type WorkspaceDPBinding struct {
Workspace string `gorm:"primary_key"`
DevOpsProject string `gorm:"primary_key"`
}
type DevopsProject struct {
ProjectId *string `json:"project_id,omitempty"`
Name string `json:"name"`
Description string `json:"description"`
Creator string `json:"creator"`
CreateTime *time.Time `json:"create_time,omitempty"`
Status *string `json:"status"`
Visibility *string `json:"visibility,omitempty"`
}

File diff suppressed because it is too large

pkg/util/errors/errors.go Normal file

@@ -0,0 +1,20 @@
package errors
import (
"encoding/json"
"errors"
)
func Wrap(data []byte) error {
var j map[string]string
err := json.Unmarshal(data, &j)
if err != nil {
return errors.New(string(data))
} else if message := j["message"]; message != "" {
return errors.New(message)
} else if message := j["Error"]; message != "" {
return errors.New(message)
} else {
return errors.New(string(data))
}
}
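`Wrap` tries to pull a human-readable message out of a JSON error body, preferring `message`, then `Error`, and falling back to the raw bytes. A usage sketch with a local copy of the same logic:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// wrap mirrors pkg/util/errors.Wrap: prefer the "message" field, then
// "Error", then fall back to the raw response body.
func wrap(data []byte) error {
	var j map[string]string
	if err := json.Unmarshal(data, &j); err != nil {
		return errors.New(string(data))
	}
	if m := j["message"]; m != "" {
		return errors.New(m)
	}
	if m := j["Error"]; m != "" {
		return errors.New(m)
	}
	return errors.New(string(data))
}

func main() {
	fmt.Println(wrap([]byte(`{"message":"namespace not found"}`))) // namespace not found
	fmt.Println(wrap([]byte(`plain failure`)))                     // plain failure
}
```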

vendor/github.com/PuerkitoBio/purell/.gitignore generated vendored Normal file

@@ -0,0 +1,5 @@
*.sublime-*
.DS_Store
*.swp
*.swo
tags

vendor/github.com/PuerkitoBio/purell/.travis.yml generated vendored Normal file

@@ -0,0 +1,7 @@
language: go
go:
- 1.4
- 1.5
- 1.6
- tip

vendor/github.com/PuerkitoBio/purell/LICENSE generated vendored Normal file

@@ -0,0 +1,12 @@
Copyright (c) 2012, Martin Angers
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/PuerkitoBio/purell/README.md generated vendored Normal file

@@ -0,0 +1,187 @@
# Purell
Purell is a tiny Go library to normalize URLs. It returns a pure URL. Pure-ell. Sanitizer and all. Yeah, I know...
Based on the [wikipedia paper][wiki] and the [RFC 3986 document][rfc].
[![build status](https://secure.travis-ci.org/PuerkitoBio/purell.png)](http://travis-ci.org/PuerkitoBio/purell)
## Install
`go get github.com/PuerkitoBio/purell`
## Changelog
* **2016-11-14 (v1.1.0)** : IDN: Conform to RFC 5895: Fold character width (thanks to @beeker1121).
* **2016-07-27 (v1.0.0)** : Normalize IDN to ASCII (thanks to @zenovich).
* **2015-02-08** : Add fix for relative paths issue ([PR #5][pr5]) and add fix for unnecessary encoding of reserved characters ([see issue #7][iss7]).
* **v0.2.0** : Add benchmarks, Attempt IDN support.
* **v0.1.0** : Initial release.
## Examples
From `example_test.go` (note that in your code, you would import "github.com/PuerkitoBio/purell", and would prefix references to its methods and constants with "purell."):
```go
package purell
import (
"fmt"
"net/url"
)
func ExampleNormalizeURLString() {
if normalized, err := NormalizeURLString("hTTp://someWEBsite.com:80/Amazing%3f/url/",
FlagLowercaseScheme|FlagLowercaseHost|FlagUppercaseEscapes); err != nil {
panic(err)
} else {
fmt.Print(normalized)
}
// Output: http://somewebsite.com:80/Amazing%3F/url/
}
func ExampleMustNormalizeURLString() {
normalized := MustNormalizeURLString("hTTpS://someWEBsite.com:443/Amazing%fa/url/",
FlagsUnsafeGreedy)
fmt.Print(normalized)
// Output: http://somewebsite.com/Amazing%FA/url
}
func ExampleNormalizeURL() {
if u, err := url.Parse("Http://SomeUrl.com:8080/a/b/.././c///g?c=3&a=1&b=9&c=0#target"); err != nil {
panic(err)
} else {
normalized := NormalizeURL(u, FlagsUsuallySafeGreedy|FlagRemoveDuplicateSlashes|FlagRemoveFragment)
fmt.Print(normalized)
}
// Output: http://someurl.com:8080/a/c/g?c=3&a=1&b=9&c=0
}
```
## API
As seen in the examples above, purell offers three methods, `NormalizeURLString(string, NormalizationFlags) (string, error)`, `MustNormalizeURLString(string, NormalizationFlags) (string)` and `NormalizeURL(*url.URL, NormalizationFlags) (string)`. They all normalize the provided URL based on the specified flags. Here are the available flags:
```go
const (
// Safe normalizations
FlagLowercaseScheme NormalizationFlags = 1 << iota // HTTP://host -> http://host, applied by default in Go1.1
FlagLowercaseHost // http://HOST -> http://host
FlagUppercaseEscapes // http://host/t%ef -> http://host/t%EF
FlagDecodeUnnecessaryEscapes // http://host/t%41 -> http://host/tA
FlagEncodeNecessaryEscapes // http://host/!"#$ -> http://host/%21%22#$
FlagRemoveDefaultPort // http://host:80 -> http://host
FlagRemoveEmptyQuerySeparator // http://host/path? -> http://host/path
// Usually safe normalizations
FlagRemoveTrailingSlash // http://host/path/ -> http://host/path
FlagAddTrailingSlash // http://host/path -> http://host/path/ (should choose only one of these add/remove trailing slash flags)
FlagRemoveDotSegments // http://host/path/./a/b/../c -> http://host/path/a/c
// Unsafe normalizations
FlagRemoveDirectoryIndex // http://host/path/index.html -> http://host/path/
FlagRemoveFragment // http://host/path#fragment -> http://host/path
FlagForceHTTP // https://host -> http://host
FlagRemoveDuplicateSlashes // http://host/path//a///b -> http://host/path/a/b
FlagRemoveWWW // http://www.host/ -> http://host/
FlagAddWWW // http://host/ -> http://www.host/ (should choose only one of these add/remove WWW flags)
FlagSortQuery // http://host/path?c=3&b=2&a=1&b=1 -> http://host/path?a=1&b=1&b=2&c=3
// Normalizations not in the wikipedia article, required to cover tests cases
// submitted by jehiah
FlagDecodeDWORDHost // http://1113982867 -> http://66.102.7.147
FlagDecodeOctalHost // http://0102.0146.07.0223 -> http://66.102.7.147
FlagDecodeHexHost // http://0x42660793 -> http://66.102.7.147
FlagRemoveUnnecessaryHostDots // http://.host../path -> http://host/path
FlagRemoveEmptyPortSeparator // http://host:/path -> http://host/path
// Convenience set of safe normalizations
FlagsSafe NormalizationFlags = FlagLowercaseHost | FlagLowercaseScheme | FlagUppercaseEscapes | FlagDecodeUnnecessaryEscapes | FlagEncodeNecessaryEscapes | FlagRemoveDefaultPort | FlagRemoveEmptyQuerySeparator
// For convenience sets, "greedy" uses the "remove trailing slash" and "remove www. prefix" flags,
// while "non-greedy" uses the "add (or keep) the trailing slash" and "add www. prefix".
// Convenience set of usually safe normalizations (includes FlagsSafe)
FlagsUsuallySafeGreedy NormalizationFlags = FlagsSafe | FlagRemoveTrailingSlash | FlagRemoveDotSegments
FlagsUsuallySafeNonGreedy NormalizationFlags = FlagsSafe | FlagAddTrailingSlash | FlagRemoveDotSegments
// Convenience set of unsafe normalizations (includes FlagsUsuallySafe)
FlagsUnsafeGreedy NormalizationFlags = FlagsUsuallySafeGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagRemoveWWW | FlagSortQuery
FlagsUnsafeNonGreedy NormalizationFlags = FlagsUsuallySafeNonGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagAddWWW | FlagSortQuery
// Convenience set of all available flags
FlagsAllGreedy = FlagsUnsafeGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
FlagsAllNonGreedy = FlagsUnsafeNonGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
)
```
For convenience, the set of flags `FlagsSafe`, `FlagsUsuallySafe[Greedy|NonGreedy]`, `FlagsUnsafe[Greedy|NonGreedy]` and `FlagsAll[Greedy|NonGreedy]` are provided for the similarly grouped normalizations on [wikipedia's URL normalization page][wiki]. You can add (using the bitwise OR `|` operator) or remove (using the bitwise AND NOT `&^` operator) individual flags from the sets if required, to build your own custom set.
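Since the flag sets are plain bit masks, a custom set is one expression of `|` and `&^`. A minimal sketch with stand-in flag values (these constants are illustrative, not purell's real ones):

```go
package main

import "fmt"

type flags uint

const (
	flagLowercaseScheme flags = 1 << iota
	flagLowercaseHost
	flagRemoveDefaultPort
)

func main() {
	safe := flagLowercaseScheme | flagLowercaseHost | flagRemoveDefaultPort
	custom := safe &^ flagRemoveDefaultPort // drop one flag from the set
	fmt.Println(custom&flagLowercaseHost != 0, custom&flagRemoveDefaultPort != 0)
	// true false
}
```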
The [full godoc reference is available on gopkgdoc][godoc].
Some things to note:
* `FlagDecodeUnnecessaryEscapes`, `FlagEncodeNecessaryEscapes`, `FlagUppercaseEscapes` and `FlagRemoveEmptyQuerySeparator` are always implicitly set, because internally, the URL string is parsed as an URL object, which automatically decodes unnecessary escapes, uppercases and encodes necessary ones, and removes empty query separators (an unnecessary `?` at the end of the url). So this operation cannot **not** be done. For this reason, `FlagRemoveEmptyQuerySeparator` (as well as the other three) has been included in the `FlagsSafe` convenience set, instead of `FlagsUnsafe`, where Wikipedia puts it.
* The `FlagDecodeUnnecessaryEscapes` decodes the following escapes (*from -> to*):
- %24 -> $
- %26 -> &
- %2B-%3B -> +,-./0123456789:;
- %3D -> =
- %40-%5A -> @ABCDEFGHIJKLMNOPQRSTUVWXYZ
- %5F -> _
- %61-%7A -> abcdefghijklmnopqrstuvwxyz
- %7E -> ~
* When the `NormalizeURL` function is used (passing an URL object), this source URL object is modified (that is, after the call, the URL object will be modified to reflect the normalization).
* The *replace IP with domain name* normalization (`http://208.77.188.166/ → http://www.example.com/`) is obviously not possible for a library without making some network requests. This is not implemented in purell.
* The *remove unused query string parameters* and *remove default query parameters* are also not implemented, since this is a very case-specific normalization, and it is quite trivial to do with an URL object.
### Safe vs Usually Safe vs Unsafe
Purell allows you to control the level of risk you take while normalizing an URL. You can aggressively normalize, play it totally safe, or anything in between.
Consider the following URL:
`HTTPS://www.RooT.com/toto/t%45%1f///a/./b/../c/?z=3&w=2&a=4&w=1#invalid`
Normalizing with the `FlagsSafe` gives:
`https://www.root.com/toto/tE%1F///a/./b/../c/?z=3&w=2&a=4&w=1#invalid`
With the `FlagsUsuallySafeGreedy`:
`https://www.root.com/toto/tE%1F///a/c?z=3&w=2&a=4&w=1#invalid`
And with `FlagsUnsafeGreedy`:
`http://root.com/toto/tE%1F/a/c?a=4&w=1&w=2&z=3`
## TODOs
* Add a class/default instance to allow specifying custom directory index names? At the moment, removing directory index removes `(^|/)((?:default|index)\.\w{1,4})$`.
## Thanks / Contributions
@rogpeppe
@jehiah
@opennota
@pchristopher1275
@zenovich
@beeker1121
## License
The [BSD 3-Clause license][bsd].
[bsd]: http://opensource.org/licenses/BSD-3-Clause
[wiki]: http://en.wikipedia.org/wiki/URL_normalization
[rfc]: http://tools.ietf.org/html/rfc3986#section-6
[godoc]: http://go.pkgdoc.org/github.com/PuerkitoBio/purell
[pr5]: https://github.com/PuerkitoBio/purell/pull/5
[iss7]: https://github.com/PuerkitoBio/purell/issues/7

vendor/github.com/PuerkitoBio/purell/purell.go generated vendored Normal file

@@ -0,0 +1,379 @@
/*
Package purell offers URL normalization as described on the wikipedia page:
http://en.wikipedia.org/wiki/URL_normalization
*/
package purell
import (
"bytes"
"fmt"
"net/url"
"regexp"
"sort"
"strconv"
"strings"
"github.com/PuerkitoBio/urlesc"
"golang.org/x/net/idna"
"golang.org/x/text/unicode/norm"
"golang.org/x/text/width"
)
// A set of normalization flags determines how a URL will
// be normalized.
type NormalizationFlags uint
const (
// Safe normalizations
FlagLowercaseScheme NormalizationFlags = 1 << iota // HTTP://host -> http://host, applied by default in Go1.1
FlagLowercaseHost // http://HOST -> http://host
FlagUppercaseEscapes // http://host/t%ef -> http://host/t%EF
FlagDecodeUnnecessaryEscapes // http://host/t%41 -> http://host/tA
FlagEncodeNecessaryEscapes // http://host/!"#$ -> http://host/%21%22#$
FlagRemoveDefaultPort // http://host:80 -> http://host
FlagRemoveEmptyQuerySeparator // http://host/path? -> http://host/path
// Usually safe normalizations
FlagRemoveTrailingSlash // http://host/path/ -> http://host/path
FlagAddTrailingSlash // http://host/path -> http://host/path/ (should choose only one of these add/remove trailing slash flags)
FlagRemoveDotSegments // http://host/path/./a/b/../c -> http://host/path/a/c
// Unsafe normalizations
FlagRemoveDirectoryIndex // http://host/path/index.html -> http://host/path/
FlagRemoveFragment // http://host/path#fragment -> http://host/path
FlagForceHTTP // https://host -> http://host
FlagRemoveDuplicateSlashes // http://host/path//a///b -> http://host/path/a/b
FlagRemoveWWW // http://www.host/ -> http://host/
FlagAddWWW // http://host/ -> http://www.host/ (should choose only one of these add/remove WWW flags)
FlagSortQuery // http://host/path?c=3&b=2&a=1&b=1 -> http://host/path?a=1&b=1&b=2&c=3
// Normalizations not in the wikipedia article, required to cover tests cases
// submitted by jehiah
FlagDecodeDWORDHost // http://1113982867 -> http://66.102.7.147
FlagDecodeOctalHost // http://0102.0146.07.0223 -> http://66.102.7.147
FlagDecodeHexHost // http://0x42660793 -> http://66.102.7.147
FlagRemoveUnnecessaryHostDots // http://.host../path -> http://host/path
FlagRemoveEmptyPortSeparator // http://host:/path -> http://host/path
// Convenience set of safe normalizations
FlagsSafe NormalizationFlags = FlagLowercaseHost | FlagLowercaseScheme | FlagUppercaseEscapes | FlagDecodeUnnecessaryEscapes | FlagEncodeNecessaryEscapes | FlagRemoveDefaultPort | FlagRemoveEmptyQuerySeparator
// For convenience sets, "greedy" uses the "remove trailing slash" and "remove www. prefix" flags,
// while "non-greedy" uses the "add (or keep) the trailing slash" and "add www. prefix".
// Convenience set of usually safe normalizations (includes FlagsSafe)
FlagsUsuallySafeGreedy NormalizationFlags = FlagsSafe | FlagRemoveTrailingSlash | FlagRemoveDotSegments
FlagsUsuallySafeNonGreedy NormalizationFlags = FlagsSafe | FlagAddTrailingSlash | FlagRemoveDotSegments
// Convenience set of unsafe normalizations (includes FlagsUsuallySafe)
FlagsUnsafeGreedy NormalizationFlags = FlagsUsuallySafeGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagRemoveWWW | FlagSortQuery
FlagsUnsafeNonGreedy NormalizationFlags = FlagsUsuallySafeNonGreedy | FlagRemoveDirectoryIndex | FlagRemoveFragment | FlagForceHTTP | FlagRemoveDuplicateSlashes | FlagAddWWW | FlagSortQuery
// Convenience set of all available flags
FlagsAllGreedy = FlagsUnsafeGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
FlagsAllNonGreedy = FlagsUnsafeNonGreedy | FlagDecodeDWORDHost | FlagDecodeOctalHost | FlagDecodeHexHost | FlagRemoveUnnecessaryHostDots | FlagRemoveEmptyPortSeparator
)
const (
defaultHttpPort = ":80"
defaultHttpsPort = ":443"
)
// Regular expressions used by the normalizations
var rxPort = regexp.MustCompile(`(:\d+)/?$`)
var rxDirIndex = regexp.MustCompile(`(^|/)((?:default|index)\.\w{1,4})$`)
var rxDupSlashes = regexp.MustCompile(`/{2,}`)
var rxDWORDHost = regexp.MustCompile(`^(\d+)((?:\.+)?(?:\:\d*)?)$`)
var rxOctalHost = regexp.MustCompile(`^(0\d*)\.(0\d*)\.(0\d*)\.(0\d*)((?:\.+)?(?:\:\d*)?)$`)
var rxHexHost = regexp.MustCompile(`^0x([0-9A-Fa-f]+)((?:\.+)?(?:\:\d*)?)$`)
var rxHostDots = regexp.MustCompile(`^(.+?)(:\d+)?$`)
var rxEmptyPort = regexp.MustCompile(`:+$`)
// Map of flags to implementation function.
// FlagDecodeUnnecessaryEscapes has no action, since it is done automatically
// by parsing the string as a URL. Same for FlagUppercaseEscapes and FlagRemoveEmptyQuerySeparator.
// Since maps have undefined traversing order, make a slice of ordered keys
var flagsOrder = []NormalizationFlags{
FlagLowercaseScheme,
FlagLowercaseHost,
FlagRemoveDefaultPort,
FlagRemoveDirectoryIndex,
FlagRemoveDotSegments,
FlagRemoveFragment,
FlagForceHTTP, // Must be after remove default port (because https=443/http=80)
FlagRemoveDuplicateSlashes,
FlagRemoveWWW,
FlagAddWWW,
FlagSortQuery,
FlagDecodeDWORDHost,
FlagDecodeOctalHost,
FlagDecodeHexHost,
FlagRemoveUnnecessaryHostDots,
FlagRemoveEmptyPortSeparator,
FlagRemoveTrailingSlash, // These two (add/remove trailing slash) must be last
FlagAddTrailingSlash,
}
// ... and then the map, where order is unimportant
var flags = map[NormalizationFlags]func(*url.URL){
FlagLowercaseScheme: lowercaseScheme,
FlagLowercaseHost: lowercaseHost,
FlagRemoveDefaultPort: removeDefaultPort,
FlagRemoveDirectoryIndex: removeDirectoryIndex,
FlagRemoveDotSegments: removeDotSegments,
FlagRemoveFragment: removeFragment,
FlagForceHTTP: forceHTTP,
FlagRemoveDuplicateSlashes: removeDuplicateSlashes,
FlagRemoveWWW: removeWWW,
FlagAddWWW: addWWW,
FlagSortQuery: sortQuery,
FlagDecodeDWORDHost: decodeDWORDHost,
FlagDecodeOctalHost: decodeOctalHost,
FlagDecodeHexHost: decodeHexHost,
FlagRemoveUnnecessaryHostDots: removeUnncessaryHostDots,
FlagRemoveEmptyPortSeparator: removeEmptyPortSeparator,
FlagRemoveTrailingSlash: removeTrailingSlash,
FlagAddTrailingSlash: addTrailingSlash,
}
// MustNormalizeURLString returns the normalized string, and panics if an error occurs.
// It takes a URL string as input, as well as the normalization flags.
func MustNormalizeURLString(u string, f NormalizationFlags) string {
result, e := NormalizeURLString(u, f)
if e != nil {
panic(e)
}
return result
}
// NormalizeURLString returns the normalized string, or an error if it can't be parsed into a URL object.
// It takes a URL string as input, as well as the normalization flags.
func NormalizeURLString(u string, f NormalizationFlags) (string, error) {
parsed, err := url.Parse(u)
if err != nil {
return "", err
}
if f&FlagLowercaseHost == FlagLowercaseHost {
parsed.Host = strings.ToLower(parsed.Host)
}
// The idna package doesn't fully conform to RFC 5895
// (https://tools.ietf.org/html/rfc5895), so we do it here.
// Taken from Go 1.8 cycle source, courtesy of bradfitz.
// TODO: Remove when (if?) idna package conforms to RFC 5895.
parsed.Host = width.Fold.String(parsed.Host)
parsed.Host = norm.NFC.String(parsed.Host)
if parsed.Host, err = idna.ToASCII(parsed.Host); err != nil {
return "", err
}
return NormalizeURL(parsed, f), nil
}
// NormalizeURL returns the normalized string.
// It takes a parsed URL object as input, as well as the normalization flags.
func NormalizeURL(u *url.URL, f NormalizationFlags) string {
for _, k := range flagsOrder {
if f&k == k {
flags[k](u)
}
}
return urlesc.Escape(u)
}
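The ordered-slice-plus-map dispatch used by NormalizeURL can be modeled in isolation. This is a minimal standalone sketch of the same pattern (the Flags type, flag names, and normalize helper here are illustrative, not the package's): each set bit selects one transformation, and an explicit slice fixes the application order, since Go map iteration order is undefined.

```go
package main

import (
	"fmt"
	"strings"
)

// Flags combine as a bitmask; each bit selects one transformation.
type Flags uint

const (
	Lowercase Flags = 1 << iota
	TrimSpace
)

// Apply transformations in a fixed order, because iterating the map
// directly would visit keys in an undefined order.
var order = []Flags{Lowercase, TrimSpace}

var ops = map[Flags]func(string) string{
	Lowercase: strings.ToLower,
	TrimSpace: strings.TrimSpace,
}

// normalize applies every transformation whose flag bit is set,
// using the same f&k == k test as NormalizeURL.
func normalize(s string, f Flags) string {
	for _, k := range order {
		if f&k == k { // flag k is set in f
			s = ops[k](s)
		}
	}
	return s
}

func main() {
	fmt.Println(normalize("  HeLLo  ", Lowercase|TrimSpace)) // hello
}
```

The `f&k == k` test (rather than `f&k != 0`) matters for composite flag sets like FlagsSafe: it only fires when every bit of `k` is present in `f`.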
func lowercaseScheme(u *url.URL) {
if len(u.Scheme) > 0 {
u.Scheme = strings.ToLower(u.Scheme)
}
}
func lowercaseHost(u *url.URL) {
if len(u.Host) > 0 {
u.Host = strings.ToLower(u.Host)
}
}
func removeDefaultPort(u *url.URL) {
if len(u.Host) > 0 {
scheme := strings.ToLower(u.Scheme)
u.Host = rxPort.ReplaceAllStringFunc(u.Host, func(val string) string {
if (scheme == "http" && val == defaultHttpPort) || (scheme == "https" && val == defaultHttpsPort) {
return ""
}
return val
})
}
}
func removeTrailingSlash(u *url.URL) {
if l := len(u.Path); l > 0 {
if strings.HasSuffix(u.Path, "/") {
u.Path = u.Path[:l-1]
}
} else if l = len(u.Host); l > 0 {
if strings.HasSuffix(u.Host, "/") {
u.Host = u.Host[:l-1]
}
}
}
func addTrailingSlash(u *url.URL) {
if l := len(u.Path); l > 0 {
if !strings.HasSuffix(u.Path, "/") {
u.Path += "/"
}
} else if l = len(u.Host); l > 0 {
if !strings.HasSuffix(u.Host, "/") {
u.Host += "/"
}
}
}
func removeDotSegments(u *url.URL) {
if len(u.Path) > 0 {
var dotFree []string
var lastIsDot bool
sections := strings.Split(u.Path, "/")
for _, s := range sections {
if s == ".." {
if len(dotFree) > 0 {
dotFree = dotFree[:len(dotFree)-1]
}
} else if s != "." {
dotFree = append(dotFree, s)
}
lastIsDot = (s == "." || s == "..")
}
// Special case if host does not end with / and new path does not begin with /
u.Path = strings.Join(dotFree, "/")
if u.Host != "" && !strings.HasSuffix(u.Host, "/") && !strings.HasPrefix(u.Path, "/") {
u.Path = "/" + u.Path
}
// Special case if the last segment was a dot, make sure the path ends with a slash
if lastIsDot && !strings.HasSuffix(u.Path, "/") {
u.Path += "/"
}
}
}
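The split/pop/rebuild loop in removeDotSegments can be exercised on its own. Below is a sketch of the same algorithm on a bare path (the helper name resolveDots is illustrative; the host-prefix special case is omitted since there is no URL here): ".." pops the previous segment, "." is dropped, and a trailing dot segment forces a trailing slash.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveDots collapses "." and ".." segments the same way
// removeDotSegments does for u.Path.
func resolveDots(path string) string {
	var dotFree []string
	var lastIsDot bool
	for _, s := range strings.Split(path, "/") {
		if s == ".." {
			if len(dotFree) > 0 {
				dotFree = dotFree[:len(dotFree)-1] // ".." pops the previous segment
			}
		} else if s != "." {
			dotFree = append(dotFree, s) // "." is simply skipped
		}
		lastIsDot = (s == "." || s == "..")
	}
	out := strings.Join(dotFree, "/")
	// A trailing "." or ".." means the path named a directory.
	if lastIsDot && !strings.HasSuffix(out, "/") {
		out += "/"
	}
	return out
}

func main() {
	fmt.Println(resolveDots("/a/b/../c/./d")) // /a/c/d
}
```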
func removeDirectoryIndex(u *url.URL) {
if len(u.Path) > 0 {
u.Path = rxDirIndex.ReplaceAllString(u.Path, "$1")
}
}
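The rxDirIndex replacement above can be tried directly: the pattern captures the preceding `/` (or start of string) as group 1, and the `$1` template keeps that separator while dropping the `default.*`/`index.*` file name.

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as rxDirIndex in the listing above.
var rxDirIndex = regexp.MustCompile(`(^|/)((?:default|index)\.\w{1,4})$`)

func main() {
	// Group 1 is the slash; group 2 (the index file) is discarded.
	fmt.Println(rxDirIndex.ReplaceAllString("/docs/index.html", "$1")) // /docs/
}
```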
func removeFragment(u *url.URL) {
u.Fragment = ""
}
func forceHTTP(u *url.URL) {
if strings.ToLower(u.Scheme) == "https" {
u.Scheme = "http"
}
}
func removeDuplicateSlashes(u *url.URL) {
if len(u.Path) > 0 {
u.Path = rxDupSlashes.ReplaceAllString(u.Path, "/")
}
}
func removeWWW(u *url.URL) {
if len(u.Host) > 0 && strings.HasPrefix(strings.ToLower(u.Host), "www.") {
u.Host = u.Host[4:]
}
}
func addWWW(u *url.URL) {
if len(u.Host) > 0 && !strings.HasPrefix(strings.ToLower(u.Host), "www.") {
u.Host = "www." + u.Host
}
}
func sortQuery(u *url.URL) {
q := u.Query()
if len(q) > 0 {
arKeys := make([]string, len(q))
i := 0
for k := range q {
arKeys[i] = k
i++
}
sort.Strings(arKeys)
buf := new(bytes.Buffer)
for _, k := range arKeys {
sort.Strings(q[k])
for _, v := range q[k] {
if buf.Len() > 0 {
buf.WriteRune('&')
}
buf.WriteString(fmt.Sprintf("%s=%s", k, urlesc.QueryEscape(v)))
}
}
// Rebuild the raw query string
u.RawQuery = buf.String()
}
}
func decodeDWORDHost(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxDWORDHost.FindStringSubmatch(u.Host); len(matches) > 2 {
var parts [4]int64
dword, _ := strconv.ParseInt(matches[1], 10, 0)
for i, shift := range []uint{24, 16, 8, 0} {
parts[i] = dword >> shift & 0xFF
}
u.Host = fmt.Sprintf("%d.%d.%d.%d%s", parts[0], parts[1], parts[2], parts[3], matches[2])
}
}
}
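The shift-and-mask conversion in decodeDWORDHost can be checked standalone. Using the value from the flag comment earlier in the file, 1113982867 decodes back to 66.102.7.147, since 66·2²⁴ + 102·2¹⁶ + 7·2⁸ + 147 = 1113982867 (the helper name dwordToIP is illustrative):

```go
package main

import (
	"fmt"
	"strconv"
)

// dwordToIP splits a base-10 32-bit integer host into dotted-quad
// form, high byte first, mirroring the loop in decodeDWORDHost.
func dwordToIP(s string) string {
	dword, _ := strconv.ParseInt(s, 10, 64)
	var parts [4]int64
	for i, shift := range []uint{24, 16, 8, 0} {
		parts[i] = dword >> shift & 0xFF
	}
	return fmt.Sprintf("%d.%d.%d.%d", parts[0], parts[1], parts[2], parts[3])
}

func main() {
	fmt.Println(dwordToIP("1113982867")) // 66.102.7.147
}
```

This also explains why decodeHexHost can delegate to decodeDWORDHost: it first rewrites the hex host (e.g. 0x42660793) as its base-10 equivalent, which is exactly this input form.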
func decodeOctalHost(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxOctalHost.FindStringSubmatch(u.Host); len(matches) > 5 {
var parts [4]int64
for i := 1; i <= 4; i++ {
parts[i-1], _ = strconv.ParseInt(matches[i], 8, 0)
}
u.Host = fmt.Sprintf("%d.%d.%d.%d%s", parts[0], parts[1], parts[2], parts[3], matches[5])
}
}
}
func decodeHexHost(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxHexHost.FindStringSubmatch(u.Host); len(matches) > 2 {
// Conversion is safe because of regex validation
parsed, _ := strconv.ParseInt(matches[1], 16, 0)
// Set host as DWORD (base 10) encoded host
u.Host = fmt.Sprintf("%d%s", parsed, matches[2])
// The rest is the same as decoding a DWORD host
decodeDWORDHost(u)
}
}
}
func removeUnncessaryHostDots(u *url.URL) {
if len(u.Host) > 0 {
if matches := rxHostDots.FindStringSubmatch(u.Host); len(matches) > 1 {
// Trim the leading and trailing dots
u.Host = strings.Trim(matches[1], ".")
if len(matches) > 2 {
u.Host += matches[2]
}
}
}
}
func removeEmptyPortSeparator(u *url.URL) {
if len(u.Host) > 0 {
u.Host = rxEmptyPort.ReplaceAllString(u.Host, "")
}
}

vendor/github.com/PuerkitoBio/urlesc/.travis.yml generated vendored Normal file

@@ -0,0 +1,15 @@
language: go
go:
- 1.4.x
- 1.5.x
- 1.6.x
- 1.7.x
- 1.8.x
- tip
install:
- go build .
script:
- go test -v

vendor/github.com/PuerkitoBio/urlesc/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Some files were not shown because too many files have changed in this diff.