add service mesh controller

add service mesh metrics

remove unused circle yaml

fix travis misconfiguration

This commit is contained in:
jeff
2019-03-08 18:22:30 +08:00
committed by Jeff
parent 858facd4b2
commit 4ac20ffc2b
1709 changed files with 344390 additions and 60749 deletions

vendor/github.com/kiali/kiali/LICENSE

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/kiali/kiali/business/apps.go

@@ -0,0 +1,216 @@
package business
import (
"fmt"
"sync"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
"github.com/kiali/kiali/prometheus/internalmetrics"
)
// AppService deals with fetching Workloads grouped by the "app" label, which identifies an "application"
type AppService struct {
prom prometheus.ClientInterface
k8s kubernetes.IstioClientInterface
}
// Temporary map of Workloads grouped by app label
type appsWorkload map[string][]*models.Workload
// Helper method to build a map of workloads for a given labelSelector
func (in *AppService) fetchWorkloadsPerApp(namespace, labelSelector string) (appsWorkload, error) {
cfg := config.Get()
ws, err := fetchWorkloads(in.k8s, namespace, labelSelector)
if err != nil {
return nil, err
}
apps := make(appsWorkload)
for _, w := range ws {
if appLabel, ok := w.Labels[cfg.IstioLabels.AppLabelName]; ok {
apps[appLabel] = append(apps[appLabel], w)
}
}
return apps, nil
}
// GetAppList is the API handler to fetch the list of applications in a given namespace
func (in *AppService) GetAppList(namespace string) (models.AppList, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "AppService", "GetAppList")
defer promtimer.ObserveNow(&err)
appList := &models.AppList{
Namespace: models.Namespace{Name: namespace},
Apps: []models.AppListItem{},
}
apps, err := fetchNamespaceApps(in.k8s, namespace, "")
if err != nil {
return *appList, err
}
for keyApp, valueApp := range apps {
appItem := &models.AppListItem{Name: keyApp}
appItem.IstioSidecar = false
if len(valueApp.Workloads) > 0 {
appItem.IstioSidecar = true
}
for _, w := range valueApp.Workloads {
appItem.IstioSidecar = appItem.IstioSidecar && w.Pods.HasIstioSideCar()
}
(*appList).Apps = append((*appList).Apps, *appItem)
}
return *appList, nil
}
// GetApp is the API handler to fetch the details for a given namespace and app name
func (in *AppService) GetApp(namespace string, appName string) (models.App, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "AppService", "GetApp")
defer promtimer.ObserveNow(&err)
appInstance := &models.App{Namespace: models.Namespace{Name: namespace}, Name: appName}
namespaceApps, err := fetchNamespaceApps(in.k8s, namespace, appName)
if err != nil {
return *appInstance, err
}
var appDetails *appDetails
var ok bool
// Return a NewNotFound error if the app is not found in the deployment list, instead of sending an empty result
if appDetails, ok = namespaceApps[appName]; !ok {
return *appInstance, kubernetes.NewNotFound(appName, "Kiali", "App")
}
(*appInstance).Workloads = make([]models.WorkloadItem, len(appDetails.Workloads))
for i, wkd := range appDetails.Workloads {
wkdSvc := &models.WorkloadItem{WorkloadName: wkd.Name}
wkdSvc.IstioSidecar = wkd.Pods.HasIstioSideCar()
(*appInstance).Workloads[i] = *wkdSvc
}
(*appInstance).ServiceNames = make([]string, len(appDetails.Services))
for i, svc := range appDetails.Services {
(*appInstance).ServiceNames[i] = svc.Name
}
in.fillCustomDashboardRefs(namespace, appInstance, appDetails)
return *appInstance, nil
}
// AppDetails holds Services and Workloads having the same "app" label
type appDetails struct {
app string
Services []v1.Service
Workloads models.Workloads
}
// namespaceApps maps an app name to its appDetails
type namespaceApps = map[string]*appDetails
func castAppDetails(services []v1.Service, ws models.Workloads) namespaceApps {
allEntities := make(namespaceApps)
appLabel := config.Get().IstioLabels.AppLabelName
for _, service := range services {
if app, ok := service.Spec.Selector[appLabel]; ok {
if appEntities, ok := allEntities[app]; ok {
appEntities.Services = append(appEntities.Services, service)
} else {
allEntities[app] = &appDetails{
app: app,
Services: []v1.Service{service},
}
}
}
}
for _, w := range ws {
if app, ok := w.Labels[appLabel]; ok {
if appEntities, ok := allEntities[app]; ok {
appEntities.Workloads = append(appEntities.Workloads, w)
} else {
allEntities[app] = &appDetails{
app: app,
Workloads: models.Workloads{w},
}
}
}
}
return allEntities
}
// Helper method to fetch all applications for a given namespace.
// If the appName parameter is provided, it filters apps by that name.
// Return an error on any problem.
func fetchNamespaceApps(k8s kubernetes.IstioClientInterface, namespace string, appName string) (namespaceApps, error) {
var services []v1.Service
var ws models.Workloads
cfg := config.Get()
labelSelector := cfg.IstioLabels.AppLabelName
if appName != "" {
labelSelector = fmt.Sprintf("%s=%s", cfg.IstioLabels.AppLabelName, appName)
}
wg := sync.WaitGroup{}
wg.Add(2)
errChan := make(chan error, 2)
go func() {
defer wg.Done()
var err error
services, err = k8s.GetServices(namespace, nil)
if appName != "" {
selector := labels.Set(map[string]string{cfg.IstioLabels.AppLabelName: appName}).AsSelector()
services = kubernetes.FilterServicesForSelector(selector, services)
}
if err != nil {
log.Errorf("Error fetching Services per namespace %s: %s", namespace, err)
errChan <- err
}
}()
go func() {
defer wg.Done()
var err error
ws, err = fetchWorkloads(k8s, namespace, labelSelector)
if err != nil {
log.Errorf("Error fetching Workload per namespace %s: %s", namespace, err)
errChan <- err
}
}()
wg.Wait()
if len(errChan) != 0 {
err := <-errChan
return nil, err
}
return castAppDetails(services, ws), nil
}
// fillCustomDashboardRefs finds all dashboard IDs and titles associated with this app and adds them to the model
func (in *AppService) fillCustomDashboardRefs(namespace string, app *models.App, details *appDetails) {
allPods := models.Pods{}
for _, workload := range details.Workloads {
allPods = append(allPods, workload.Pods...)
}
uniqueRefsList := getUniqueRuntimes(allPods)
mon, err := kubernetes.NewKialiMonitoringClient()
if err != nil {
// Do not fail the whole query, just log & return
log.Error("Cannot initialize Kiali Monitoring Client")
return
}
dash := NewDashboardsService(mon, in.prom)
app.Runtimes = dash.buildRuntimesList(namespace, uniqueRefsList)
}


@@ -0,0 +1,11 @@
package checkers
import "github.com/kiali/kiali/models"
type Checker interface {
Check() ([]*models.IstioCheck, bool)
}
type GroupChecker interface {
Check() models.IstioValidations
}


@@ -0,0 +1,27 @@
package checkers
import (
"github.com/kiali/kiali/business/checkers/destinationrules"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type DestinationRulesChecker struct {
DestinationRules []kubernetes.IstioObject
MTLSDetails kubernetes.MTLSDetails
}
func (in DestinationRulesChecker) Check() models.IstioValidations {
validations := models.IstioValidations{}
enabledDRCheckers := []GroupChecker{
destinationrules.MultiMatchChecker{DestinationRules: in.DestinationRules},
destinationrules.TrafficPolicyChecker{DestinationRules: in.DestinationRules, MTLSDetails: in.MTLSDetails},
}
for _, checker := range enabledDRCheckers {
validations = validations.MergeValidations(checker.Check())
}
return validations
}


@@ -0,0 +1,150 @@
package destinationrules
import (
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
const DestinationRulesCheckerType = "destinationrule"
type MultiMatchChecker struct {
DestinationRules []kubernetes.IstioObject
}
type subset struct {
Name string
RuleName string
}
// Check validates that no two destinationRules target the same host+subset combination
func (m MultiMatchChecker) Check() models.IstioValidations {
validations := models.IstioValidations{}
// Equality search is: [fqdn][subset]
seenHostSubsets := make(map[string]map[string]string)
for _, dr := range m.DestinationRules {
if host, ok := dr.GetSpec()["host"]; ok {
destinationRulesName := dr.GetObjectMeta().Name
if dHost, ok := host.(string); ok {
fqdn := kubernetes.ParseHost(dHost, dr.GetObjectMeta().Namespace, dr.GetObjectMeta().ClusterName)
// Skip DR validation if it enables mTLS either namespace-wide or mesh-wide
if isNonLocalmTLSForServiceEnabled(dr, fqdn.Service) {
continue
}
foundSubsets := extractSubsets(dr, destinationRulesName)
if fqdn.Service == "*" {
// We need to check the matching subsets from all hosts now
for _, h := range seenHostSubsets {
checkCollisions(validations, destinationRulesName, foundSubsets, h)
}
// We add * later
}
// Search "*" first and then exact name
if previous, found := seenHostSubsets["*"]; found {
// Need to check subsets of "*"
checkCollisions(validations, destinationRulesName, foundSubsets, previous)
}
if previous, found := seenHostSubsets[fqdn.Service]; found {
// Host found, need to check underlying subsets
checkCollisions(validations, destinationRulesName, foundSubsets, previous)
}
// Nothing threw an error, so add these
if _, found := seenHostSubsets[fqdn.Service]; !found {
seenHostSubsets[fqdn.Service] = make(map[string]string)
}
for _, s := range foundSubsets {
seenHostSubsets[fqdn.Service][s.Name] = destinationRulesName
}
}
}
}
return validations
}
func isNonLocalmTLSForServiceEnabled(dr kubernetes.IstioObject, service string) bool {
return service == "*" && ismTLSEnabled(dr)
}
func ismTLSEnabled(dr kubernetes.IstioObject) bool {
if trafficPolicy, trafficPresent := dr.GetSpec()["trafficPolicy"]; trafficPresent {
if trafficCasted, ok := trafficPolicy.(map[string]interface{}); ok {
if tls, found := trafficCasted["tls"]; found {
if tlsCasted, ok := tls.(map[string]interface{}); ok {
if mode, found := tlsCasted["mode"]; found {
if modeCasted, ok := mode.(string); ok {
return modeCasted == "ISTIO_MUTUAL"
}
}
}
}
}
}
return false
}
func extractSubsets(dr kubernetes.IstioObject, destinationRulesName string) []subset {
if subsets, found := dr.GetSpec()["subsets"]; found {
if subsetSlice, ok := subsets.([]interface{}); ok {
foundSubsets := make([]subset, 0, len(subsetSlice))
for _, se := range subsetSlice {
if element, ok := se.(map[string]interface{}); ok {
if name, found := element["name"]; found {
if n, ok := name.(string); ok {
foundSubsets = append(foundSubsets, subset{n, destinationRulesName})
}
}
}
}
return foundSubsets
}
}
// No named subsets were found; "~" stands for "matches all subsets"
return []subset{{"~", destinationRulesName}}
}
func checkCollisions(validations models.IstioValidations, destinationRulesName string, foundSubsets []subset, existing map[string]string) {
// If current subset is ~
if len(foundSubsets) == 1 && foundSubsets[0].Name == "~" {
// This should match any subset in the same hostname
for _, v := range existing {
addError(validations, []string{destinationRulesName, v})
}
}
// If we have existing subset with ~
if ruleName, found := existing["~"]; found {
addError(validations, []string{destinationRulesName, ruleName})
}
for _, s := range foundSubsets {
if ruleName, found := existing[s.Name]; found {
addError(validations, []string{destinationRulesName, ruleName})
}
}
}
func addError(validations models.IstioValidations, destinationRuleNames []string) models.IstioValidations {
for _, destinationRuleName := range destinationRuleNames {
key := models.IstioValidationKey{Name: destinationRuleName, ObjectType: DestinationRulesCheckerType}
checks := models.Build("destinationrules.multimatch", "spec/host")
rrValidation := &models.IstioValidation{
Name: destinationRuleName,
ObjectType: DestinationRulesCheckerType,
Valid: true,
Checks: []*models.IstioCheck{
&checks,
},
}
if _, exists := validations[key]; !exists {
validations.MergeValidations(models.IstioValidations{key: rrValidation})
}
}
return validations
}
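The ismTLSEnabled helper above digs spec["trafficPolicy"] → "tls" → "mode" out of an unstructured spec through nested type assertions. A minimal sketch of that traversal, flattened with early returns (hypothetical tlsMode helper, not Kiali's API):

```go
package main

import "fmt"

// tlsMode extracts the TLS mode from an unstructured DestinationRule-like
// spec, returning "" when any level is missing or has the wrong type.
func tlsMode(spec map[string]interface{}) string {
	tp, ok := spec["trafficPolicy"].(map[string]interface{})
	if !ok {
		return ""
	}
	tls, ok := tp["tls"].(map[string]interface{})
	if !ok {
		return ""
	}
	// A failed assertion yields the zero value "", so this never panics.
	mode, _ := tls["mode"].(string)
	return mode
}

func main() {
	spec := map[string]interface{}{
		"trafficPolicy": map[string]interface{}{
			"tls": map[string]interface{}{"mode": "ISTIO_MUTUAL"},
		},
	}
	fmt.Println(tlsMode(spec)) // ISTIO_MUTUAL
	fmt.Println(tlsMode(map[string]interface{}{}) == "")
}
```

Early returns keep the zero value as the "not found" signal and avoid the deeply nested if-pyramids seen in the vendored code.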


@@ -0,0 +1,116 @@
package destinationrules
import (
"strconv"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type NoDestinationChecker struct {
Namespace string
WorkloadList models.WorkloadList
DestinationRule kubernetes.IstioObject
ServiceEntries map[string][]string
ServiceNames []string
}
// Check parses the DestinationRule definitions and verifies that they point to an existing service, including any subset definitions
func (n NoDestinationChecker) Check() ([]*models.IstioCheck, bool) {
valid := true
validations := make([]*models.IstioCheck, 0)
if host, ok := n.DestinationRule.GetSpec()["host"]; ok {
if dHost, ok := host.(string); ok {
fqdn := kubernetes.ParseHost(dHost, n.DestinationRule.GetObjectMeta().Namespace, n.DestinationRule.GetObjectMeta().ClusterName)
if !n.hasMatchingService(fqdn.Service, dHost) {
validation := models.Build("destinationrules.nodest.matchingworkload", "spec/host")
validations = append(validations, &validation)
valid = false
}
if subsets, ok := n.DestinationRule.GetSpec()["subsets"]; ok {
if dSubsets, ok := subsets.([]interface{}); ok {
// Check that each subset has a matching workload somewhere
for i, subset := range dSubsets {
if innerSubset, ok := subset.(map[string]interface{}); ok {
if labels, ok := innerSubset["labels"]; ok {
if dLabels, ok := labels.(map[string]interface{}); ok {
stringLabels := make(map[string]string, len(dLabels))
for k, v := range dLabels {
if s, ok := v.(string); ok {
stringLabels[k] = s
}
}
if !n.hasMatchingWorkload(fqdn.Service, stringLabels) {
validation := models.Build("destinationrules.nodest.subsetlabels",
"spec/subsets["+strconv.Itoa(i)+"]")
validations = append(validations, &validation)
valid = false
}
}
}
}
}
}
}
}
}
return validations, valid
}
func (n NoDestinationChecker) hasMatchingWorkload(service string, labels map[string]string) bool {
appLabel := config.Get().IstioLabels.AppLabelName
// Check wildcard hosts
if service == "*" {
return true
}
// Check workloads
for _, wl := range n.WorkloadList.Workloads {
if service == wl.Labels[appLabel] {
valid := true
for k, v := range labels {
wlv, found := wl.Labels[k]
if !found || wlv != v {
valid = false
break
}
}
if valid {
return true
}
}
}
return false
}
func (n NoDestinationChecker) hasMatchingService(service, origHost string) bool {
appLabel := config.Get().IstioLabels.AppLabelName
// Check wildcard hosts
if service == "*" {
return true
}
// Check Workloads
for _, wl := range n.WorkloadList.Workloads {
if service == wl.Labels[appLabel] {
return true
}
}
// Check ServiceNames
for _, s := range n.ServiceNames {
if service == s {
return true
}
}
// Check ServiceEntries
if _, found := n.ServiceEntries[origHost]; found {
return true
}
return false
}
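hasMatchingWorkload above accepts a workload only when every subset label is present on it with the same value. That subset-of-labels test can be sketched on its own (hypothetical labelsMatch helper, not Kiali's API):

```go
package main

import "fmt"

// labelsMatch reports whether every selector label is present on the
// workload with an identical value, the per-workload test that
// hasMatchingWorkload performs.
func labelsMatch(selector, workload map[string]string) bool {
	for k, v := range selector {
		if wv, found := workload[k]; !found || wv != v {
			return false
		}
	}
	return true
}

func main() {
	wl := map[string]string{"app": "reviews", "version": "v2"}
	fmt.Println(labelsMatch(map[string]string{"version": "v2"}, wl)) // true
	fmt.Println(labelsMatch(map[string]string{"version": "v3"}, wl)) // false
}
```

Note the asymmetry: the workload may carry extra labels beyond the selector, but never a conflicting or missing one.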


@@ -0,0 +1,103 @@
package destinationrules
import (
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type TrafficPolicyChecker struct {
DestinationRules []kubernetes.IstioObject
MTLSDetails kubernetes.MTLSDetails
}
func (t TrafficPolicyChecker) Check() models.IstioValidations {
validations := models.IstioValidations{}
// When mTLS is not enabled, there is no validation to be added.
if !t.isNonLocalmTLSEnabled() {
return validations
}
// Check whether DRs override mTLS.
for _, dr := range t.DestinationRules {
if !hasTrafficPolicy(dr) || !hasTLSSettings(dr) {
check := models.Build("destinationrules.trafficpolicy.notlssettings", "spec/trafficPolicy")
key := models.BuildKey(DestinationRulesCheckerType, dr.GetObjectMeta().Name)
validation := buildDestinationRuleValidation(dr, check, true)
if _, exists := validations[key]; !exists {
validations.MergeValidations(models.IstioValidations{key: validation})
}
}
}
return validations
}
func (t TrafficPolicyChecker) isNonLocalmTLSEnabled() bool {
for _, dr := range t.MTLSDetails.DestinationRules {
if host, ok := dr.GetSpec()["host"]; ok {
if dHost, ok := host.(string); ok {
fqdn := kubernetes.ParseHost(dHost, dr.GetObjectMeta().Namespace, dr.GetObjectMeta().ClusterName)
if isNonLocalmTLSForServiceEnabled(dr, fqdn.Service) {
return true
}
}
}
}
return false
}
func hasTrafficPolicy(dr kubernetes.IstioObject) bool {
_, trafficPresent := dr.GetSpec()["trafficPolicy"]
return trafficPresent
}
func hasTLSSettings(dr kubernetes.IstioObject) bool {
return hasTrafficPolicyTLS(dr) || hasPortTLS(dr)
}
// hasPortTLS returns true when at least one port specifies any TLS settings
func hasPortTLS(dr kubernetes.IstioObject) bool {
if trafficPolicy, trafficPresent := dr.GetSpec()["trafficPolicy"]; trafficPresent {
if trafficCasted, ok := trafficPolicy.(map[string]interface{}); ok {
if portsSettings, found := trafficCasted["portLevelSettings"]; found {
if portsSettingsCasted, ok := portsSettings.([]interface{}); ok {
for _, portSettings := range portsSettingsCasted {
if portSettingsCasted, ok := portSettings.(map[string]interface{}); ok {
if _, found := portSettingsCasted["tls"]; found {
return true
}
}
}
}
}
}
}
return false
}
// hasTrafficPolicyTLS returns true when there is a trafficPolicy specifying any tls mode
func hasTrafficPolicyTLS(dr kubernetes.IstioObject) bool {
if trafficPolicy, trafficPresent := dr.GetSpec()["trafficPolicy"]; trafficPresent {
if trafficCasted, ok := trafficPolicy.(map[string]interface{}); ok {
if _, found := trafficCasted["tls"]; found {
return true
}
}
}
return false
}
func buildDestinationRuleValidation(dr kubernetes.IstioObject, checks models.IstioCheck, valid bool) *models.IstioValidation {
validation := &models.IstioValidation{
Name: dr.GetObjectMeta().Name,
ObjectType: DestinationRulesCheckerType,
Valid: valid,
Checks: []*models.IstioCheck{
&checks,
},
}
return validation
}


@@ -0,0 +1,51 @@
package checkers
import (
"github.com/kiali/kiali/business/checkers/gateways"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
const GatewayCheckerType = "gateway"
type GatewayChecker struct {
GatewaysPerNamespace [][]kubernetes.IstioObject
Namespace string
}
// Check runs the cross-namespace checks as well as the single-namespace validations
func (g GatewayChecker) Check() models.IstioValidations {
// Multinamespace checkers
validations := gateways.MultiMatchChecker{
GatewaysPerNamespace: g.GatewaysPerNamespace,
}.Check()
// Single namespace
for _, nssGw := range g.GatewaysPerNamespace {
for _, gw := range nssGw {
if gw.GetObjectMeta().Namespace == g.Namespace {
validations.MergeValidations(runSingleChecks(gw))
}
}
}
return validations
}
func runSingleChecks(gw kubernetes.IstioObject) models.IstioValidations {
validations := models.IstioValidations{}
checks, valid := gateways.PortChecker{
Gateway: gw,
}.Check()
if !valid {
key := models.IstioValidationKey{ObjectType: GatewayCheckerType, Name: gw.GetObjectMeta().Name}
validations[key] = &models.IstioValidation{
Name: key.Name,
ObjectType: key.ObjectType,
Checks: checks,
Valid: valid,
}
}
return validations
}


@@ -0,0 +1,135 @@
package gateways
import (
"regexp"
"strconv"
"strings"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type MultiMatchChecker struct {
GatewaysPerNamespace [][]kubernetes.IstioObject
existingList []Host
}
const (
GatewayCheckerType = "gateway"
wildCardMatch = "*"
)
type Host struct {
Port int
Hostname string
ServerIndex int
HostIndex int
GatewayRuleName string
}
// Check validates that no two gateways share the same host+port combination
func (m MultiMatchChecker) Check() models.IstioValidations {
validations := models.IstioValidations{}
m.existingList = make([]Host, 0)
for _, nsG := range m.GatewaysPerNamespace {
for _, g := range nsG {
gatewayRuleName := g.GetObjectMeta().Name
if specServers, found := g.GetSpec()["servers"]; found {
if servers, ok := specServers.([]interface{}); ok {
for i, def := range servers {
if serverDef, ok := def.(map[string]interface{}); ok {
hosts := parsePortAndHostnames(serverDef)
for hi, host := range hosts {
host.ServerIndex = i
host.HostIndex = hi
host.GatewayRuleName = gatewayRuleName
duplicate, dhosts := m.findMatch(host)
if duplicate {
validations = addError(validations, gatewayRuleName, i, hi)
for _, dh := range dhosts {
validations = addError(validations, dh.GatewayRuleName, dh.ServerIndex, dh.HostIndex)
}
}
m.existingList = append(m.existingList, host)
}
}
}
}
}
}
}
return validations
}
func addError(validations models.IstioValidations, gatewayRuleName string, serverIndex, hostIndex int) models.IstioValidations {
key := models.IstioValidationKey{Name: gatewayRuleName, ObjectType: GatewayCheckerType}
checks := models.Build("gateways.multimatch",
"spec/servers["+strconv.Itoa(serverIndex)+"]/hosts["+strconv.Itoa(hostIndex)+"]")
rrValidation := &models.IstioValidation{
Name: gatewayRuleName,
ObjectType: GatewayCheckerType,
Valid: true,
Checks: []*models.IstioCheck{
&checks,
},
}
if _, exists := validations[key]; !exists {
validations.MergeValidations(models.IstioValidations{key: rrValidation})
}
return validations
}
func parsePortAndHostnames(serverDef map[string]interface{}) []Host {
var port int
if portDef, found := serverDef["port"]; found {
if ports, ok := portDef.(map[string]interface{}); ok {
if numberDef, found := ports["number"]; found {
if portNumber, ok := numberDef.(int64); ok {
port = int(portNumber)
}
}
}
}
if hostDef, found := serverDef["hosts"]; found {
if hostnames, ok := hostDef.([]interface{}); ok {
hosts := make([]Host, 0, len(hostnames))
for _, hostinterface := range hostnames {
if hostname, ok := hostinterface.(string); ok {
hosts = append(hosts, Host{
Port: port,
Hostname: hostname,
})
}
}
return hosts
}
}
return nil
}
// findMatch uses a linear search with regexp to check for matching gateway host+port combinations.
// If this becomes a performance bottleneck, replace it with a graph or trie algorithm.
func (m MultiMatchChecker) findMatch(host Host) (bool, []Host) {
duplicates := make([]Host, 0)
for _, h := range m.existingList {
if h.Port == host.Port {
// wildcardMatches will always match
if host.Hostname == wildCardMatch || h.Hostname == wildCardMatch {
duplicates = append(duplicates, h)
break
}
// Either one could include wildcards, so we need to check both ways and fix "*" -> ".*" for regexp engine
current := strings.Replace(host.Hostname, "*", ".*", -1)
previous := strings.Replace(h.Hostname, "*", ".*", -1)
if regexp.MustCompile(current).MatchString(previous) || regexp.MustCompile(previous).MatchString(current) {
duplicates = append(duplicates, h)
break
}
}
}
return len(duplicates) > 0, duplicates
}
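The wildcard comparison in findMatch can be sketched on its own. This is a hypothetical standalone helper (`hostsOverlap` is not a Kiali function), assuming the same two rules as above: a bare `*` matches everything, and an embedded `*` becomes `.*` so either pattern may match the other:

```go
package main

import (
    "fmt"
    "regexp"
    "strings"
)

// hostsOverlap reports whether two gateway host patterns can match the
// same hostname: "*" alone matches anything, and embedded wildcards are
// rewritten for the regexp engine and checked in both directions.
func hostsOverlap(a, b string) bool {
    if a == "*" || b == "*" {
        return true
    }
    ra := regexp.MustCompile(strings.Replace(a, "*", ".*", -1))
    rb := regexp.MustCompile(strings.Replace(b, "*", ".*", -1))
    return ra.MatchString(b) || rb.MatchString(a)
}

func main() {
    fmt.Println(hostsOverlap("*.example.com", "api.example.com")) // true
    fmt.Println(hostsOverlap("a.example.com", "b.example.com"))   // false
}
```

Note the check runs both ways because either side may carry the wildcard, exactly as the `current`/`previous` pair in findMatch does.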


@@ -0,0 +1,34 @@
package gateways
import (
"fmt"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type PortChecker struct {
Gateway kubernetes.IstioObject
}
func (p PortChecker) Check() ([]*models.IstioCheck, bool) {
validations := make([]*models.IstioCheck, 0)
if serversSpec, found := p.Gateway.GetSpec()["servers"]; found {
if servers, ok := serversSpec.([]interface{}); ok {
for serverIndex, server := range servers {
if serverDef, ok := server.(map[string]interface{}); ok {
if portDef, found := serverDef["port"]; found {
if !kubernetes.ValidatePort(portDef) {
validation := models.Build("port.name.mismatch",
fmt.Sprintf("spec/servers[%d]/port/name", serverIndex))
validations = append(validations, &validation)
}
}
}
}
}
}
return validations, len(validations) == 0
}


@@ -0,0 +1,109 @@
package checkers
import (
"github.com/kiali/kiali/business/checkers/destinationrules"
"github.com/kiali/kiali/business/checkers/virtual_services"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
v1 "k8s.io/api/core/v1"
)
type NoServiceChecker struct {
Namespace string
IstioDetails *kubernetes.IstioDetails
Services []v1.Service
WorkloadList models.WorkloadList
GatewaysPerNamespace [][]kubernetes.IstioObject
}
func (in NoServiceChecker) Check() models.IstioValidations {
validations := models.IstioValidations{}
if in.IstioDetails == nil || in.Services == nil {
return validations
}
log.Infof("ServiceEntries: %v\n", in.IstioDetails.ServiceEntries)
serviceNames := getServiceNames(in.Services)
serviceHosts := kubernetes.ServiceEntryHostnames(in.IstioDetails.ServiceEntries)
gatewayNames := kubernetes.GatewayNames(in.GatewaysPerNamespace)
for _, virtualService := range in.IstioDetails.VirtualServices {
validations.MergeValidations(runVirtualServiceCheck(virtualService, in.Namespace, serviceNames, serviceHosts))
validations.MergeValidations(runGatewayCheck(virtualService, gatewayNames))
}
for _, destinationRule := range in.IstioDetails.DestinationRules {
validations.MergeValidations(runDestinationRuleCheck(destinationRule, in.Namespace, in.WorkloadList, serviceNames, serviceHosts))
}
return validations
}
func runVirtualServiceCheck(virtualService kubernetes.IstioObject, namespace string, serviceNames []string, serviceHosts map[string][]string) models.IstioValidations {
result, valid := virtual_services.NoHostChecker{
Namespace: namespace,
ServiceNames: serviceNames,
VirtualService: virtualService,
ServiceEntryHosts: serviceHosts,
}.Check()
istioObjectName := virtualService.GetObjectMeta().Name
key := models.IstioValidationKey{ObjectType: "virtualservice", Name: istioObjectName}
vsvalidations := models.IstioValidations{}
vsvalidations[key] = &models.IstioValidation{
Name: istioObjectName,
ObjectType: "virtualservice",
Checks: result,
Valid: valid,
}
return vsvalidations
}
func runGatewayCheck(virtualService kubernetes.IstioObject, gatewayNames map[string]struct{}) models.IstioValidations {
result, valid := virtual_services.NoGatewayChecker{
VirtualService: virtualService,
GatewayNames: gatewayNames,
}.Check()
istioObjectName := virtualService.GetObjectMeta().Name
key := models.IstioValidationKey{ObjectType: "virtualservice", Name: istioObjectName}
vsvalidations := models.IstioValidations{}
vsvalidations[key] = &models.IstioValidation{
Name: istioObjectName,
ObjectType: "virtualservice",
Checks: result,
Valid: valid,
}
return vsvalidations
}
func runDestinationRuleCheck(destinationRule kubernetes.IstioObject, namespace string, workloads models.WorkloadList, serviceNames []string, serviceHosts map[string][]string) models.IstioValidations {
result, valid := destinationrules.NoDestinationChecker{
Namespace: namespace,
WorkloadList: workloads,
DestinationRule: destinationRule,
ServiceNames: serviceNames,
ServiceEntries: serviceHosts,
}.Check()
istioObjectName := destinationRule.GetObjectMeta().Name
key := models.IstioValidationKey{ObjectType: "destinationrule", Name: istioObjectName}
drvalidations := models.IstioValidations{}
drvalidations[key] = &models.IstioValidation{
Name: istioObjectName,
ObjectType: "destinationrule",
Checks: result,
Valid: valid,
}
return drvalidations
}
func getServiceNames(services []v1.Service) []string {
serviceNames := make([]string, 0)
for _, item := range services {
serviceNames = append(serviceNames, item.Name)
}
return serviceNames
}


@@ -0,0 +1,80 @@
package checkers
import (
"github.com/kiali/kiali/business/checkers/virtual_services"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
const VirtualCheckerType = "virtualservice"
type VirtualServiceChecker struct {
Namespace string
DestinationRules []kubernetes.IstioObject
VirtualServices []kubernetes.IstioObject
}
// An object checker runs all checkers for a specific object type (e.g. pod, route rule, ...).
// It runs two kinds of checks:
// 1. Individual checks: validating individual objects.
// 2. Group checks: validating behaviour between configurations.
func (in VirtualServiceChecker) Check() models.IstioValidations {
validations := models.IstioValidations{}
validations = validations.MergeValidations(in.runIndividualChecks())
validations = validations.MergeValidations(in.runGroupChecks())
return validations
}
// Runs individual checks for each virtual service
func (in VirtualServiceChecker) runIndividualChecks() models.IstioValidations {
validations := models.IstioValidations{}
for _, virtualService := range in.VirtualServices {
validations.MergeValidations(in.runChecks(virtualService))
}
return validations
}
// runGroupChecks runs group checks for all virtual services
func (in VirtualServiceChecker) runGroupChecks() models.IstioValidations {
validations := models.IstioValidations{}
enabledCheckers := []GroupChecker{
virtual_services.SingleHostChecker{Namespace: in.Namespace, VirtualServices: in.VirtualServices},
}
for _, checker := range enabledCheckers {
validations = validations.MergeValidations(checker.Check())
}
return validations
}
// runChecks runs all the individual checks for a single virtual service and appends the result into validations.
func (in VirtualServiceChecker) runChecks(virtualService kubernetes.IstioObject) models.IstioValidations {
virtualServiceName := virtualService.GetObjectMeta().Name
key := models.IstioValidationKey{Name: virtualServiceName, ObjectType: VirtualCheckerType}
rrValidation := &models.IstioValidation{
Name: virtualServiceName,
ObjectType: VirtualCheckerType,
Valid: true,
// Explicitly create an empty array as 0-values do not appear in json
Checks: []*models.IstioCheck{},
}
enabledCheckers := []Checker{
virtual_services.RouteChecker{Route: virtualService},
virtual_services.SubsetPresenceChecker{Namespace: in.Namespace, DestinationRules: in.DestinationRules, VirtualService: virtualService},
}
for _, checker := range enabledCheckers {
checks, validChecker := checker.Check()
rrValidation.Checks = append(rrValidation.Checks, checks...)
rrValidation.Valid = rrValidation.Valid && validChecker
}
return models.IstioValidations{key: rrValidation}
}


@@ -0,0 +1,27 @@
package virtual_services
import (
"strconv"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type NoGatewayChecker struct {
VirtualService kubernetes.IstioObject
GatewayNames map[string]struct{}
}
// Check validates that the VirtualService only references existing Gateways
func (s NoGatewayChecker) Check() ([]*models.IstioCheck, bool) {
validations := make([]*models.IstioCheck, 0)
valid, index := kubernetes.ValidateVirtualServiceGateways(s.VirtualService.GetSpec(), s.GatewayNames, s.VirtualService.GetObjectMeta().Namespace, s.VirtualService.GetObjectMeta().ClusterName)
if !valid {
path := "spec/gateways[" + strconv.Itoa(index) + "]"
validation := models.Build("virtualservices.nogateway", path)
validations = append(validations, &validation)
}
return validations, valid
}


@@ -0,0 +1,79 @@
package virtual_services
import (
"fmt"
"strings"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type NoHostChecker struct {
Namespace string
ServiceNames []string
VirtualService kubernetes.IstioObject
ServiceEntryHosts map[string][]string
}
func (n NoHostChecker) Check() ([]*models.IstioCheck, bool) {
validations := make([]*models.IstioCheck, 0)
routeProtocols := []string{"http", "tcp", "tls"}
countOfDefinedProtocols := 0
for _, protocol := range routeProtocols {
if prot, ok := n.VirtualService.GetSpec()[protocol]; ok {
countOfDefinedProtocols++
if aHttp, ok := prot.([]interface{}); ok {
for k, httpRoute := range aHttp {
if mHttpRoute, ok := httpRoute.(map[string]interface{}); ok {
if route, ok := mHttpRoute["route"]; ok {
if aDestinationWeight, ok := route.([]interface{}); ok {
for i, destination := range aDestinationWeight {
if !n.checkDestination(destination, protocol) {
validation := models.Build("virtualservices.nohost.hostnotfound",
fmt.Sprintf("spec/%s[%d]/route[%d]/destination/host", protocol, k, i))
validations = append(validations, &validation)
}
}
}
}
}
}
}
}
}
if countOfDefinedProtocols < 1 {
validation := models.Build("virtualservices.nohost.invalidprotocol", "")
validations = append(validations, &validation)
}
return validations, len(validations) == 0
}
func (n NoHostChecker) checkDestination(destination interface{}, protocol string) bool {
if mDestination, ok := destination.(map[string]interface{}); ok {
if destinationW, ok := mDestination["destination"]; ok {
if mDestinationW, ok := destinationW.(map[string]interface{}); ok {
if host, ok := mDestinationW["host"]; ok {
if sHost, ok := host.(string); ok {
for _, service := range n.ServiceNames {
if kubernetes.FilterByHost(sHost, service, n.Namespace) {
return true
}
}
if protocols, found := n.ServiceEntryHosts[sHost]; found {
// We have ServiceEntry to check
for _, prot := range protocols {
if prot == strings.ToLower(protocol) {
return true
}
}
}
}
}
}
}
}
return false
}


@@ -0,0 +1,114 @@
package virtual_services
import (
"fmt"
"reflect"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/util/intutil"
)
type RouteChecker struct {
Route kubernetes.IstioObject
}
// Check returns both an array of IstioCheck and a boolean indicating if the current route rule is valid.
// The array of IstioChecks contains the result of running the following validations:
// 1. All weights are numeric.
// 2. All weights have a value between 0 and 100.
// 3. The weights sum to 100 (a single weight is assumed to be 100).
// 4. Every route in the rule carries a weight label.
func (route RouteChecker) Check() ([]*models.IstioCheck, bool) {
checks, valid := make([]*models.IstioCheck, 0), true
protocols := []string{"http", "tcp", "tls"}
for _, protocol := range protocols {
cs, v := route.checkRoutesFor(protocol)
checks = append(checks, cs...)
valid = valid && v
}
return checks, valid
}
func (route RouteChecker) checkRoutesFor(kind string) ([]*models.IstioCheck, bool) {
validations := make([]*models.IstioCheck, 0)
weightSum, weightCount, valid := 0, 0, true
http := route.Route.GetSpec()[kind]
if http == nil {
return validations, valid
}
// Getting a []HTTPRoute
slice := reflect.ValueOf(http)
if slice.Kind() != reflect.Slice {
return validations, valid
}
for routeIdx := 0; routeIdx < slice.Len(); routeIdx++ {
route, ok := slice.Index(routeIdx).Interface().(map[string]interface{})
if !ok || route["route"] == nil {
continue
}
weightCount, weightSum = 0, 0
// Getting a []DestinationWeight
destinationWeights := reflect.ValueOf(route["route"])
if destinationWeights.Kind() != reflect.Slice {
return validations, valid
}
for destWeightIdx := 0; destWeightIdx < destinationWeights.Len(); destWeightIdx++ {
destinationWeight, ok := destinationWeights.Index(destWeightIdx).Interface().(map[string]interface{})
if !ok || destinationWeight["weight"] == nil {
continue
}
weightCount = weightCount + 1
weight, err := intutil.Convert(destinationWeight["weight"])
if err != nil {
valid = false
path := fmt.Sprintf("spec/%s[%d]/route[%d]/weight/%s",
kind, routeIdx, destWeightIdx, destinationWeight["weight"])
validation := buildValidation("virtualservices.route.numericweight", path)
validations = append(validations, &validation)
// The weight could not be parsed; skip the range and sum checks for this destination
// so the zero value does not trigger spurious weightrange/weightsum errors.
continue
}
if weight > 100 || weight < 0 {
valid = false
path := fmt.Sprintf("spec/%s[%d]/route[%d]/weight/%d",
kind, routeIdx, destWeightIdx, weight)
validation := buildValidation("virtualservices.route.weightrange", path)
validations = append(validations, &validation)
}
weightSum = weightSum + weight
}
if weightCount > 0 && weightSum != 100 {
valid = false
path := fmt.Sprintf("spec/%s[%d]/route", kind, routeIdx)
validation := buildValidation("virtualservices.route.weightsum", path)
validations = append(validations, &validation)
if weightCount != destinationWeights.Len() {
valid = false
path := fmt.Sprintf("spec/%s[%d]/route", kind, routeIdx)
validation := buildValidation("virtualservices.route.allweightspresent", path)
validations = append(validations, &validation)
}
}
}
return validations, valid
}
func buildValidation(checkId string, path string) models.IstioCheck {
validation := models.Build(checkId, path)
log.Infof("%s Galley should be performing this validation but it isn't. "+
"Make sure Galley is fully working.", checkId)
return validation
}
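The four weight rules listed above boil down to a small amount of arithmetic. A minimal sketch under the same rules (`checkWeights` is a hypothetical helper operating on plain ints, not on the untyped spec maps the real checker handles):

```go
package main

import "fmt"

// checkWeights applies the documented route-weight rules: every weight
// must lie in [0,100] and, when any weights are set at all, they must
// sum to exactly 100.
func checkWeights(weights []int) []string {
    var problems []string
    sum := 0
    for i, w := range weights {
        if w < 0 || w > 100 {
            problems = append(problems, fmt.Sprintf("route[%d]: weight %d out of range", i, w))
        }
        sum += w
    }
    if len(weights) > 0 && sum != 100 {
        problems = append(problems, fmt.Sprintf("weights sum to %d, expected 100", sum))
    }
    return problems
}

func main() {
    fmt.Println(checkWeights([]int{80, 20})) // valid: no problems
    fmt.Println(checkWeights([]int{50, 30})) // sums to 80
}
```

The real checker additionally reports `allweightspresent` when only some destinations carry a weight, which the `weightCount != destinationWeights.Len()` comparison above covers.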


@@ -0,0 +1,151 @@
package virtual_services
import (
"reflect"
"strings"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type SingleHostChecker struct {
Namespace string
VirtualServices []kubernetes.IstioObject
}
type Host struct {
Service string
Namespace string
Cluster string
}
func (in SingleHostChecker) Check() models.IstioValidations {
hostCounter := make(map[string]map[string]map[string][]*kubernetes.IstioObject)
validations := models.IstioValidations{}
for _, vs := range in.VirtualServices {
for _, host := range getHost(vs) {
storeHost(hostCounter, vs, host)
}
}
for _, clusterCounter := range hostCounter {
for _, namespaceCounter := range clusterCounter {
isNamespaceWildcard := len(namespaceCounter["*"]) > 0
for _, serviceCounter := range namespaceCounter {
targetSameHost := len(serviceCounter) > 1
otherServiceHosts := len(namespaceCounter) > 1
for _, virtualService := range serviceCounter {
// Mark the virtualService as invalid if:
// - there is more than one virtual service per host, or
// - there is one virtual service with a wildcard host and other virtual services
//   point to a host in that namespace
if targetSameHost || isNamespaceWildcard && otherServiceHosts {
if !hasGateways(virtualService) {
multipleVirtualServiceCheck(*virtualService, validations)
}
}
}
}
}
}
return validations
}
func multipleVirtualServiceCheck(virtualService kubernetes.IstioObject, validations models.IstioValidations) {
virtualServiceName := virtualService.GetObjectMeta().Name
key := models.IstioValidationKey{Name: virtualServiceName, ObjectType: "virtualservice"}
checks := models.Build("virtualservices.singlehost", "spec/hosts")
rrValidation := &models.IstioValidation{
Name: virtualServiceName,
ObjectType: "virtualservice",
Valid: true,
Checks: []*models.IstioCheck{
&checks,
},
}
validations.MergeValidations(models.IstioValidations{key: rrValidation})
}
func storeHost(hostCounter map[string]map[string]map[string][]*kubernetes.IstioObject, vs kubernetes.IstioObject, host Host) {
vsList := []*kubernetes.IstioObject{&vs}
if hostCounter[host.Cluster] == nil {
hostCounter[host.Cluster] = map[string]map[string][]*kubernetes.IstioObject{
host.Namespace: {
host.Service: vsList,
},
}
} else if hostCounter[host.Cluster][host.Namespace] == nil {
hostCounter[host.Cluster][host.Namespace] = map[string][]*kubernetes.IstioObject{
host.Service: vsList,
}
} else if _, ok := hostCounter[host.Cluster][host.Namespace][host.Service]; !ok {
hostCounter[host.Cluster][host.Namespace][host.Service] = vsList
} else {
hostCounter[host.Cluster][host.Namespace][host.Service] = append(hostCounter[host.Cluster][host.Namespace][host.Service], &vs)
}
}
func getHost(virtualService kubernetes.IstioObject) []Host {
hosts := virtualService.GetSpec()["hosts"]
if hosts == nil {
return []Host{}
}
slice := reflect.ValueOf(hosts)
if slice.Kind() != reflect.Slice {
return []Host{}
}
targetHosts := make([]Host, 0, slice.Len())
for hostIdx := 0; hostIdx < slice.Len(); hostIdx++ {
hostName, ok := slice.Index(hostIdx).Interface().(string)
if !ok {
continue
}
targetHosts = append(targetHosts, formatHostForSearch(hostName, virtualService.GetObjectMeta().Namespace))
}
return targetHosts
}
// Convert host to Host struct for searching
// e.g. reviews -> reviews, virtualService.Namespace, svc.cluster.local
// e.g. reviews.bookinfo.svc.cluster.local -> reviews, bookinfo, svc.cluster.local
// e.g. *.bookinfo.svc.cluster.local -> *, bookinfo, svc.cluster.local
// e.g. * -> *, *, *
func formatHostForSearch(hostName, virtualServiceNamespace string) Host {
domainParts := strings.Split(hostName, ".")
host := Host{}
host.Service = domainParts[0]
if len(domainParts) > 1 {
host.Namespace = domainParts[1]
if len(domainParts) > 2 {
host.Cluster = strings.Join(domainParts[2:], ".")
}
} else if host.Service != "*" {
host.Namespace = virtualServiceNamespace
host.Cluster = "svc.cluster.local"
} else if host.Service == "*" {
host.Namespace = "*"
host.Cluster = "*"
}
return host
}
func hasGateways(virtualService *kubernetes.IstioObject) bool {
if gateways, ok := (*virtualService).GetSpec()["gateways"]; ok {
vsGateways, ok := (gateways).([]interface{})
return ok && vsGateways != nil && len(vsGateways) > 0
}
return false
}
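The host-splitting behaviour documented above formatHostForSearch can be reproduced standalone. A sketch with the same defaulting rules (`splitHost` is a hypothetical name; the `Host` struct mirrors the one in this file):

```go
package main

import (
    "fmt"
    "strings"
)

// Host mirrors the checker's struct: service, namespace, cluster suffix.
type Host struct {
    Service, Namespace, Cluster string
}

// splitHost expands a VirtualService host into its parts: short names
// default to the VirtualService namespace and "svc.cluster.local",
// while a bare "*" wildcards every field.
func splitHost(hostName, vsNamespace string) Host {
    parts := strings.Split(hostName, ".")
    h := Host{Service: parts[0]}
    switch {
    case len(parts) > 1:
        h.Namespace = parts[1]
        if len(parts) > 2 {
            h.Cluster = strings.Join(parts[2:], ".")
        }
    case h.Service == "*":
        h.Namespace, h.Cluster = "*", "*"
    default:
        h.Namespace, h.Cluster = vsNamespace, "svc.cluster.local"
    }
    return h
}

func main() {
    fmt.Println(splitHost("reviews", "bookinfo"))
    fmt.Println(splitHost("reviews.bookinfo.svc.cluster.local", "default"))
    fmt.Println(splitHost("*", "bookinfo"))
}
```

Normalizing both short and fully-qualified forms into the same struct is what lets SingleHostChecker count `reviews` and `reviews.bookinfo.svc.cluster.local` as the same host.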


@@ -0,0 +1,140 @@
package virtual_services
import (
"fmt"
"reflect"
"strings"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
)
type SubsetPresenceChecker struct {
Namespace string
DestinationRules []kubernetes.IstioObject
VirtualService kubernetes.IstioObject
}
func (checker SubsetPresenceChecker) Check() ([]*models.IstioCheck, bool) {
valid := true
validations := make([]*models.IstioCheck, 0)
protocols := [3]string{"http", "tcp", "tls"}
for _, protocol := range protocols {
specProtocol := checker.VirtualService.GetSpec()[protocol]
if specProtocol == nil {
continue
}
// Getting a []HTTPRoute, []TLSRoute, []TCPRoute
slice := reflect.ValueOf(specProtocol)
if slice.Kind() != reflect.Slice {
continue
}
for routeIdx := 0; routeIdx < slice.Len(); routeIdx++ {
httpRoute, ok := slice.Index(routeIdx).Interface().(map[string]interface{})
if !ok || httpRoute["route"] == nil {
continue
}
// Getting a []DestinationWeight
destinationWeights := reflect.ValueOf(httpRoute["route"])
if destinationWeights.Kind() != reflect.Slice {
return validations, valid
}
for destWeightIdx := 0; destWeightIdx < destinationWeights.Len(); destWeightIdx++ {
destinationWeight, ok := destinationWeights.Index(destWeightIdx).Interface().(map[string]interface{})
if !ok || destinationWeight["destination"] == nil {
valid = false
path := fmt.Sprintf("spec/%s[%d]/route[%d]", protocol, routeIdx, destWeightIdx)
validation := models.Build("virtualservices.subsetpresent.destinationmandatory", path)
validations = append(validations, &validation)
continue
}
destination, ok := destinationWeight["destination"].(map[string]interface{})
if !ok {
continue
}
host, ok := destination["host"].(string)
if !ok {
continue
}
subset, ok := destination["subset"].(string)
if !ok {
continue
}
if !checker.subsetPresent(host, subset) {
path := fmt.Sprintf("spec/%s[%d]/route[%d]/destination", protocol, routeIdx, destWeightIdx)
validation := models.Build("virtualservices.subsetpresent.subsetnotfound", path)
validations = append(validations, &validation)
}
}
}
}
return validations, valid
}
func (checker SubsetPresenceChecker) subsetPresent(host string, subset string) bool {
destinationRule, ok := checker.getDestinationRule(host)
if !ok || destinationRule == nil {
return false
}
return hasSubsetDefined(destinationRule, subset)
}
func (checker SubsetPresenceChecker) getDestinationRule(virtualServiceHost string) (kubernetes.IstioObject, bool) {
for _, destinationRule := range checker.DestinationRules {
host, ok := destinationRule.GetSpec()["host"]
if !ok {
continue
}
sHost, ok := host.(string)
if !ok {
continue
}
domainParts := strings.Split(sHost, ".")
serviceName := domainParts[0]
namespace := checker.Namespace
if len(domainParts) > 1 {
namespace = domainParts[1]
}
if kubernetes.FilterByHost(virtualServiceHost, serviceName, namespace) {
return destinationRule, true
}
}
return nil, false
}
func hasSubsetDefined(destinationRule kubernetes.IstioObject, subsetTarget string) bool {
if subsets, ok := destinationRule.GetSpec()["subsets"]; ok {
if dSubsets, ok := subsets.([]interface{}); ok {
for _, subset := range dSubsets {
if innerSubset, ok := subset.(map[string]interface{}); ok {
if sSubsetName, found := innerSubset["name"]; found {
if subsetName, ok := sSubsetName.(string); ok && subsetName == subsetTarget {
if labels, ok := innerSubset["labels"]; ok {
if _, ok := labels.(map[string]interface{}); ok {
return true
}
}
}
}
}
}
}
}
return false
}

vendor/github.com/kiali/kiali/business/dashboards.go generated vendored Normal file

@@ -0,0 +1,232 @@
package business
import (
"fmt"
"strings"
"sync"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
)
// DashboardsService deals with fetching dashboards from k8s client
type DashboardsService struct {
prom prometheus.ClientInterface
mon kubernetes.KialiMonitoringInterface
}
// NewDashboardsService initializes this business service
func NewDashboardsService(mon kubernetes.KialiMonitoringInterface, prom prometheus.ClientInterface) DashboardsService {
return DashboardsService{prom: prom, mon: mon}
}
func (in *DashboardsService) loadDashboardResource(namespace, template string) (*kubernetes.MonitoringDashboard, error) {
// There is an override mechanism for dashboards: default dashboards can be provided in the Kiali namespace
// and overridden in the app namespace.
// So we look for the dashboard in the app namespace first, and only fall back to the one in the Istio namespace if not found.
dashboard, err := in.mon.GetDashboard(namespace, template)
if err != nil {
cfg := config.Get()
dashboard, err = in.mon.GetDashboard(cfg.IstioNamespace, template)
if err != nil {
return nil, err
}
}
return dashboard, nil
}
// GetDashboard returns a dashboard filled-in with target data
func (in *DashboardsService) GetDashboard(params prometheus.CustomMetricsQuery, template string) (*models.MonitoringDashboard, error) {
dashboard, err := in.loadDashboardResource(params.Namespace, template)
if err != nil {
return nil, err
}
aggLabels := models.ConvertAggregations(dashboard.Spec)
labels := fmt.Sprintf(`{namespace="%s",app="%s"`, params.Namespace, params.App)
if params.Version != "" {
labels += fmt.Sprintf(`,version="%s"`, params.Version)
} else {
// For app-based dashboards, we automatically add a possible aggregation/grouping over versions
versionsAgg := models.Aggregation{
Label: "version",
DisplayName: "Version",
}
aggLabels = append([]models.Aggregation{versionsAgg}, aggLabels...)
}
labels += "}"
grouping := strings.Join(params.ByLabels, ",")
wg := sync.WaitGroup{}
wg.Add(len(dashboard.Spec.Charts))
filledCharts := make([]models.Chart, len(dashboard.Spec.Charts))
for i, c := range dashboard.Spec.Charts {
go func(idx int, chart kubernetes.MonitoringDashboardChart) {
defer wg.Done()
filledCharts[idx] = models.ConvertChart(chart)
if chart.DataType == kubernetes.Raw {
aggregator := params.RawDataAggregator
if chart.Aggregator != "" {
aggregator = chart.Aggregator
}
filledCharts[idx].Metric = in.prom.FetchRange(chart.MetricName, labels, grouping, aggregator, &params.BaseMetricsQuery)
} else if chart.DataType == kubernetes.Rate {
filledCharts[idx].Metric = in.prom.FetchRateRange(chart.MetricName, labels, grouping, &params.BaseMetricsQuery)
} else {
filledCharts[idx].Histogram = in.prom.FetchHistogramRange(chart.MetricName, labels, grouping, &params.BaseMetricsQuery)
}
}(i, c)
}
wg.Wait()
return &models.MonitoringDashboard{
Title: dashboard.Spec.Title,
Charts: filledCharts,
Aggregations: aggLabels,
}, nil
}
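The selector string assembled in GetDashboard can be shown in isolation. A sketch of the same construction (`buildLabels` is a hypothetical helper; the real code appends the result to each chart's PromQL query):

```go
package main

import "fmt"

// buildLabels produces a Prometheus label selector: namespace and app
// are always present, version only when non-empty (otherwise the real
// code adds a version aggregation instead).
func buildLabels(namespace, app, version string) string {
    labels := fmt.Sprintf(`{namespace="%s",app="%s"`, namespace, app)
    if version != "" {
        labels += fmt.Sprintf(`,version="%s"`, version)
    }
    return labels + "}"
}

func main() {
    fmt.Println(buildLabels("bookinfo", "reviews", ""))
    fmt.Println(buildLabels("bookinfo", "reviews", "v2"))
}
```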
type istioChart struct {
models.Chart
refName string
}
var istioCharts = []istioChart{
{
Chart: models.Chart{
Name: "Request volume",
Unit: "ops",
Spans: 6,
},
refName: "request_count",
},
{
Chart: models.Chart{
Name: "Request duration",
Unit: "s",
Spans: 6,
},
refName: "request_duration",
},
{
Chart: models.Chart{
Name: "Request size",
Unit: "B",
Spans: 6,
},
refName: "request_size",
},
{
Chart: models.Chart{
Name: "Response size",
Unit: "B",
Spans: 6,
},
refName: "response_size",
},
{
Chart: models.Chart{
Name: "TCP received",
Unit: "bps",
Spans: 6,
},
refName: "tcp_received",
},
{
Chart: models.Chart{
Name: "TCP sent",
Unit: "bps",
Spans: 6,
},
refName: "tcp_sent",
},
}
// GetIstioDashboard returns Istio dashboard (currently hard-coded) filled-in with metrics
func (in *DashboardsService) GetIstioDashboard(params prometheus.IstioMetricsQuery) (*models.MonitoringDashboard, error) {
var dashboard models.MonitoringDashboard
// Copy dashboard
if params.Direction == "inbound" {
dashboard = models.PrepareIstioDashboard("Inbound", "destination", "source")
} else {
dashboard = models.PrepareIstioDashboard("Outbound", "source", "destination")
}
metrics := in.prom.GetMetrics(&params)
for _, chartTpl := range istioCharts {
newChart := chartTpl.Chart
if metric, ok := metrics.Metrics[chartTpl.refName]; ok {
newChart.Metric = metric
}
if histo, ok := metrics.Histograms[chartTpl.refName]; ok {
newChart.Histogram = histo
}
dashboard.Charts = append(dashboard.Charts, newChart)
}
return &dashboard, nil
}
func (in *DashboardsService) buildRuntimesList(namespace string, templatesNames []string) []models.Runtime {
dashboards := make([]*kubernetes.MonitoringDashboard, len(templatesNames))
wg := sync.WaitGroup{}
wg.Add(len(templatesNames))
for idx, template := range templatesNames {
go func(i int, tpl string) {
defer wg.Done()
dashboard, err := in.loadDashboardResource(namespace, tpl)
if err != nil {
log.Errorf("Cannot get dashboard %s in namespace %s. Error was: %v", tpl, namespace, err)
} else {
dashboards[i] = dashboard
}
}(idx, template)
}
wg.Wait()
runtimes := []models.Runtime{}
for _, dashboard := range dashboards {
if dashboard == nil {
continue
}
runtime := getDashboardRuntime(dashboard)
ref := models.DashboardRef{
Template: dashboard.Metadata["name"].(string),
Title: dashboard.Spec.Title,
}
found := false
for i := range runtimes {
rtObj := &runtimes[i]
if rtObj.Name == runtime {
rtObj.DashboardRefs = append(rtObj.DashboardRefs, ref)
found = true
break
}
}
if !found {
runtimes = append(runtimes, models.Runtime{
Name: runtime,
DashboardRefs: []models.DashboardRef{ref},
})
}
}
return runtimes
}
func getDashboardRuntime(dashboard *kubernetes.MonitoringDashboard) string {
if labels, ok := dashboard.Metadata["labels"]; ok {
if labelsMap, ok := labels.(map[string]interface{}); ok {
if runtime, ok := labelsMap["runtime"]; ok {
return runtime.(string)
}
}
}
return dashboard.Spec.Title
}

vendor/github.com/kiali/kiali/business/health.go generated vendored Normal file

@@ -0,0 +1,305 @@
package business
import (
"time"
"github.com/prometheus/common/model"
"k8s.io/apimachinery/pkg/labels"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
"github.com/kiali/kiali/prometheus/internalmetrics"
)
// HealthService deals with fetching health from various sources and converting it to the Kiali model
type HealthService struct {
prom prometheus.ClientInterface
k8s kubernetes.IstioClientInterface
}
// GetServiceHealth returns a service health (service request error rate)
func (in *HealthService) GetServiceHealth(namespace, service, rateInterval string, queryTime time.Time) (models.ServiceHealth, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "HealthService", "GetServiceHealth")
defer promtimer.ObserveNow(&err)
rqHealth, err := in.getServiceRequestsHealth(namespace, service, rateInterval, queryTime)
return models.ServiceHealth{Requests: rqHealth}, err
}
// GetAppHealth returns an app health from just Namespace and app name (thus, it fetches data from K8S and Prometheus)
func (in *HealthService) GetAppHealth(namespace, app, rateInterval string, queryTime time.Time) (models.AppHealth, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "HealthService", "GetAppHealth")
defer promtimer.ObserveNow(&err)
appLabel := config.Get().IstioLabels.AppLabelName
selectorLabels := make(map[string]string)
selectorLabels[appLabel] = app
labelSelector := labels.FormatLabels(selectorLabels)
ws, err := fetchWorkloads(in.k8s, namespace, labelSelector)
if err != nil {
log.Errorf("Error fetching Workloads per namespace %s and app %s: %s", namespace, app, err)
return models.AppHealth{}, err
}
return in.getAppHealth(namespace, app, rateInterval, queryTime, ws)
}
func (in *HealthService) getAppHealth(namespace, app, rateInterval string, queryTime time.Time, ws models.Workloads) (models.AppHealth, error) {
health := models.EmptyAppHealth()
// Perf: do not bother fetching request rate if not a single workload has sidecar
hasSidecar := false
for _, w := range ws {
if w.IstioSidecar {
hasSidecar = true
break
}
}
// Fetch services requests rates
var errRate error
if hasSidecar {
rate, err := in.getAppRequestsHealth(namespace, app, rateInterval, queryTime)
health.Requests = rate
errRate = err
}
// Deployment status
health.WorkloadStatuses = castWorkloadStatuses(ws)
return health, errRate
}
// GetWorkloadHealth returns a workload health from just Namespace and workload (thus, it fetches data from K8S and Prometheus)
func (in *HealthService) GetWorkloadHealth(namespace, workload, rateInterval string, queryTime time.Time) (models.WorkloadHealth, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "HealthService", "GetWorkloadHealth")
defer promtimer.ObserveNow(&err)
w, err := fetchWorkload(in.k8s, namespace, workload)
if err != nil {
return models.WorkloadHealth{}, err
}
status := models.WorkloadStatus{
Name: w.Name,
Replicas: w.Replicas,
AvailableReplicas: w.AvailableReplicas,
}
// Perf: do not bother fetching request rate if workload has no sidecar
if !w.IstioSidecar {
return models.WorkloadHealth{
WorkloadStatus: status,
Requests: models.NewEmptyRequestHealth(),
}, nil
}
rate, err := in.getWorkloadRequestsHealth(namespace, workload, rateInterval, queryTime)
return models.WorkloadHealth{
WorkloadStatus: status,
Requests: rate,
}, err
}
// GetNamespaceAppHealth returns a health for all apps in given Namespace (thus, it fetches data from K8S and Prometheus)
func (in *HealthService) GetNamespaceAppHealth(namespace, rateInterval string, queryTime time.Time) (models.NamespaceAppHealth, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "HealthService", "GetNamespaceAppHealth")
defer promtimer.ObserveNow(&err)
appEntities, err := fetchNamespaceApps(in.k8s, namespace, "")
if err != nil {
return nil, err
}
return in.getNamespaceAppHealth(namespace, appEntities, rateInterval, queryTime)
}
func (in *HealthService) getNamespaceAppHealth(namespace string, appEntities namespaceApps, rateInterval string, queryTime time.Time) (models.NamespaceAppHealth, error) {
allHealth := make(models.NamespaceAppHealth)
// Perf: do not bother fetching request rates if not a single workload has a sidecar
hasSidecar := false
// Prepare all data
for app, entities := range appEntities {
if app != "" {
h := models.EmptyAppHealth()
allHealth[app] = &h
if entities != nil {
h.WorkloadStatuses = castWorkloadStatuses(entities.Workloads)
for _, w := range entities.Workloads {
if w.IstioSidecar {
hasSidecar = true
break
}
}
}
}
}
var errRate error
if hasSidecar {
// Fetch all request rates
rates, err := in.prom.GetAllRequestRates(namespace, rateInterval, queryTime)
errRate = err
// Fill with collected request rates
fillAppRequestRates(allHealth, rates)
}
return allHealth, errRate
}
// GetNamespaceServiceHealth returns a health for all services in given Namespace (thus, it fetches data from K8S and Prometheus)
func (in *HealthService) GetNamespaceServiceHealth(namespace, rateInterval string, queryTime time.Time) (models.NamespaceServiceHealth, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "HealthService", "GetNamespaceServiceHealth")
defer promtimer.ObserveNow(&err)
return in.getNamespaceServiceHealth(namespace, rateInterval, queryTime), nil
}
func (in *HealthService) getNamespaceServiceHealth(namespace string, rateInterval string, queryTime time.Time) models.NamespaceServiceHealth {
allHealth := make(models.NamespaceServiceHealth)
// Fetch services' request rates
rates, _ := in.prom.GetNamespaceServicesRequestRates(namespace, rateInterval, queryTime)
// Fill with collected request rates
lblDestSvc := model.LabelName("destination_service_name")
for _, sample := range rates {
service := string(sample.Metric[lblDestSvc])
health, ok := allHealth[service]
if !ok {
health = &models.ServiceHealth{Requests: models.NewEmptyRequestHealth()}
allHealth[service] = health
}
health.Requests.AggregateInbound(sample)
}
return allHealth
}
// GetNamespaceWorkloadHealth returns a health for all workloads in given Namespace (thus, it fetches data from K8S and Prometheus)
func (in *HealthService) GetNamespaceWorkloadHealth(namespace, rateInterval string, queryTime time.Time) (models.NamespaceWorkloadHealth, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "HealthService", "GetNamespaceWorkloadHealth")
defer promtimer.ObserveNow(&err)
wl, err := fetchWorkloads(in.k8s, namespace, "")
if err != nil {
return nil, err
}
return in.getNamespaceWorkloadHealth(namespace, wl, rateInterval, queryTime), nil
}
func (in *HealthService) getNamespaceWorkloadHealth(namespace string, ws models.Workloads, rateInterval string, queryTime time.Time) models.NamespaceWorkloadHealth {
// Perf: do not bother fetching request rates if not a single workload has a sidecar
hasSidecar := false
allHealth := make(models.NamespaceWorkloadHealth)
for _, w := range ws {
allHealth[w.Name] = &models.WorkloadHealth{}
allHealth[w.Name].WorkloadStatus = models.WorkloadStatus{
Name: w.Name,
Replicas: w.Replicas,
AvailableReplicas: w.AvailableReplicas,
}
if w.IstioSidecar {
hasSidecar = true
}
}
if hasSidecar {
// Fetch all request rates
rates, _ := in.prom.GetAllRequestRates(namespace, rateInterval, queryTime)
// Fill with collected request rates
fillWorkloadRequestRates(allHealth, rates)
}
return allHealth
}
// fillAppRequestRates aggregates request rates from metrics fetched from Prometheus, and stores the result in the health map.
func fillAppRequestRates(allHealth models.NamespaceAppHealth, rates model.Vector) {
lblDest := model.LabelName("destination_app")
lblSrc := model.LabelName("source_app")
for _, sample := range rates {
name := string(sample.Metric[lblDest])
if health, ok := allHealth[name]; ok {
health.Requests.AggregateInbound(sample)
}
name = string(sample.Metric[lblSrc])
if health, ok := allHealth[name]; ok {
health.Requests.AggregateOutbound(sample)
}
}
}
// fillWorkloadRequestRates aggregates request rates from metrics fetched from Prometheus, and stores the result in the health map.
func fillWorkloadRequestRates(allHealth models.NamespaceWorkloadHealth, rates model.Vector) {
lblDest := model.LabelName("destination_workload")
lblSrc := model.LabelName("source_workload")
for _, sample := range rates {
name := string(sample.Metric[lblDest])
if health, ok := allHealth[name]; ok {
health.Requests.AggregateInbound(sample)
}
name = string(sample.Metric[lblSrc])
if health, ok := allHealth[name]; ok {
health.Requests.AggregateOutbound(sample)
}
}
}
func (in *HealthService) getServiceRequestsHealth(namespace, service, rateInterval string, queryTime time.Time) (models.RequestHealth, error) {
rqHealth := models.NewEmptyRequestHealth()
inbound, err := in.prom.GetServiceRequestRates(namespace, service, rateInterval, queryTime)
for _, sample := range inbound {
rqHealth.AggregateInbound(sample)
}
return rqHealth, err
}
func (in *HealthService) getAppRequestsHealth(namespace, app, rateInterval string, queryTime time.Time) (models.RequestHealth, error) {
rqHealth := models.NewEmptyRequestHealth()
inbound, outbound, err := in.prom.GetAppRequestRates(namespace, app, rateInterval, queryTime)
for _, sample := range inbound {
rqHealth.AggregateInbound(sample)
}
for _, sample := range outbound {
rqHealth.AggregateOutbound(sample)
}
return rqHealth, err
}
func (in *HealthService) getWorkloadRequestsHealth(namespace, workload, rateInterval string, queryTime time.Time) (models.RequestHealth, error) {
rqHealth := models.NewEmptyRequestHealth()
inbound, outbound, err := in.prom.GetWorkloadRequestRates(namespace, workload, rateInterval, queryTime)
for _, sample := range inbound {
rqHealth.AggregateInbound(sample)
}
for _, sample := range outbound {
rqHealth.AggregateOutbound(sample)
}
return rqHealth, err
}
func castWorkloadStatuses(ws models.Workloads) []models.WorkloadStatus {
statuses := make([]models.WorkloadStatus, 0)
for _, w := range ws {
status := models.WorkloadStatus{
Name: w.Name,
Replicas: w.Replicas,
AvailableReplicas: w.AvailableReplicas}
statuses = append(statuses, status)
}
return statuses
}
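The fill helpers above share one aggregation pattern: each Prometheus sample counts as inbound traffic for the entity named by its destination label and as outbound traffic for the entity named by its source label. A minimal, runnable sketch of that aggregation, with hypothetical `sample` and `requestHealth` types standing in for `model.Sample` and `models.RequestHealth`:

```go
package main

import "fmt"

// sample is a hypothetical stand-in for Prometheus' model.Sample:
// a metric label set plus a rate value.
type sample struct {
	labels map[string]string
	value  float64
}

// requestHealth mirrors the inbound/outbound totals that
// models.RequestHealth accumulates.
type requestHealth struct {
	inbound, outbound float64
}

// fillRates applies the same double-keyed aggregation as
// fillAppRequestRates: every sample is added to the inbound total of
// its destination entity and to the outbound total of its source entity.
func fillRates(all map[string]*requestHealth, rates []sample, lblDest, lblSrc string) {
	for _, s := range rates {
		if h, ok := all[s.labels[lblDest]]; ok {
			h.inbound += s.value
		}
		if h, ok := all[s.labels[lblSrc]]; ok {
			h.outbound += s.value
		}
	}
}

func main() {
	all := map[string]*requestHealth{
		"reviews": {},
		"ratings": {},
	}
	rates := []sample{
		{labels: map[string]string{"destination_app": "ratings", "source_app": "reviews"}, value: 2.5},
	}
	fillRates(all, rates, "destination_app", "source_app")
	fmt.Println(all["ratings"].inbound, all["reviews"].outbound) // 2.5 2.5
}
```

Entities absent from the map are silently skipped, which is why the callers pre-populate the health map with every known app or workload before filling in rates.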

vendor/github.com/kiali/kiali/business/istio_config.go generated vendored Normal file
@@ -0,0 +1,722 @@
package business
import (
"encoding/json"
"errors"
"fmt"
"strings"
"sync"
errors2 "k8s.io/apimachinery/pkg/api/errors"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus/internalmetrics"
)
type IstioConfigService struct {
k8s kubernetes.IstioClientInterface
}
type IstioConfigCriteria struct {
Namespace string
IncludeGateways bool
IncludeVirtualServices bool
IncludeDestinationRules bool
IncludeServiceEntries bool
IncludeRules bool
IncludeAdapters bool
IncludeTemplates bool
IncludeQuotaSpecs bool
IncludeQuotaSpecBindings bool
IncludePolicies bool
IncludeMeshPolicies bool
IncludeClusterRbacConfigs bool
IncludeServiceRoles bool
IncludeServiceRoleBindings bool
}
const (
VirtualServices = "virtualservices"
DestinationRules = "destinationrules"
ServiceEntries = "serviceentries"
Gateways = "gateways"
Rules = "rules"
Adapters = "adapters"
Templates = "templates"
QuotaSpecs = "quotaspecs"
QuotaSpecBindings = "quotaspecbindings"
Policies = "policies"
MeshPolicies = "meshpolicies"
ClusterRbacConfigs = "clusterrbacconfigs"
ServiceRoles = "serviceroles"
ServiceRoleBindings = "servicerolebindings"
)
var resourceTypesToAPI = map[string]string{
DestinationRules: kubernetes.NetworkingGroupVersion.Group,
VirtualServices: kubernetes.NetworkingGroupVersion.Group,
ServiceEntries: kubernetes.NetworkingGroupVersion.Group,
Gateways: kubernetes.NetworkingGroupVersion.Group,
Adapters: kubernetes.ConfigGroupVersion.Group,
Templates: kubernetes.ConfigGroupVersion.Group,
Rules: kubernetes.ConfigGroupVersion.Group,
QuotaSpecs: kubernetes.ConfigGroupVersion.Group,
QuotaSpecBindings: kubernetes.ConfigGroupVersion.Group,
Policies: kubernetes.AuthenticationGroupVersion.Group,
MeshPolicies: kubernetes.AuthenticationGroupVersion.Group,
ClusterRbacConfigs: kubernetes.RbacGroupVersion.Group,
ServiceRoles: kubernetes.RbacGroupVersion.Group,
ServiceRoleBindings: kubernetes.RbacGroupVersion.Group,
}
var apiToVersion = map[string]string{
kubernetes.NetworkingGroupVersion.Group: kubernetes.ApiNetworkingVersion,
kubernetes.ConfigGroupVersion.Group: kubernetes.ApiConfigVersion,
kubernetes.AuthenticationGroupVersion.Group: kubernetes.ApiAuthenticationVersion,
kubernetes.RbacGroupVersion.Group: kubernetes.ApiRbacVersion,
}
const (
MeshmTLSEnabled = "MESH_MTLS_ENABLED"
MeshmTLSPartiallyEnabled = "MESH_MTLS_PARTIALLY_ENABLED"
MeshmTLSNotEnabled = "MESH_MTLS_NOT_ENABLED"
)
// GetIstioConfigList returns a list of Istio routing objects, Mixer Rules, etc.
// for a given Namespace.
func (in *IstioConfigService) GetIstioConfigList(criteria IstioConfigCriteria) (models.IstioConfigList, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "IstioConfigService", "GetIstioConfigList")
defer promtimer.ObserveNow(&err)
if criteria.Namespace == "" {
return models.IstioConfigList{}, errors.New("GetIstioConfigList needs a non-empty Namespace")
}
istioConfigList := models.IstioConfigList{
Namespace: models.Namespace{Name: criteria.Namespace},
Gateways: models.Gateways{},
VirtualServices: models.VirtualServices{Items: []models.VirtualService{}},
DestinationRules: models.DestinationRules{Items: []models.DestinationRule{}},
ServiceEntries: models.ServiceEntries{},
Rules: models.IstioRules{},
Adapters: models.IstioAdapters{},
Templates: models.IstioTemplates{},
QuotaSpecs: models.QuotaSpecs{},
QuotaSpecBindings: models.QuotaSpecBindings{},
Policies: models.Policies{},
MeshPolicies: models.MeshPolicies{},
ClusterRbacConfigs: models.ClusterRbacConfigs{},
ServiceRoles: models.ServiceRoles{},
ServiceRoleBindings: models.ServiceRoleBindings{},
}
var gg, vs, dr, se, qs, qb, aa, tt, mr, pc, mp, rc, sr, srb []kubernetes.IstioObject
var ggErr, vsErr, drErr, seErr, mrErr, qsErr, qbErr, aaErr, ttErr, pcErr, mpErr, rcErr, srErr, srbErr error
var wg sync.WaitGroup
wg.Add(14)
go func() {
defer wg.Done()
if criteria.IncludeGateways {
if gg, ggErr = in.k8s.GetGateways(criteria.Namespace); ggErr == nil {
(&istioConfigList.Gateways).Parse(gg)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeVirtualServices {
if vs, vsErr = in.k8s.GetVirtualServices(criteria.Namespace, ""); vsErr == nil {
(&istioConfigList.VirtualServices).Parse(vs)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeDestinationRules {
if dr, drErr = in.k8s.GetDestinationRules(criteria.Namespace, ""); drErr == nil {
(&istioConfigList.DestinationRules).Parse(dr)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeServiceEntries {
if se, seErr = in.k8s.GetServiceEntries(criteria.Namespace); seErr == nil {
(&istioConfigList.ServiceEntries).Parse(se)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeRules {
if mr, mrErr = in.k8s.GetIstioRules(criteria.Namespace); mrErr == nil {
istioConfigList.Rules = models.CastIstioRulesCollection(mr)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeAdapters {
if aa, aaErr = in.k8s.GetAdapters(criteria.Namespace); aaErr == nil {
istioConfigList.Adapters = models.CastIstioAdaptersCollection(aa)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeTemplates {
if tt, ttErr = in.k8s.GetTemplates(criteria.Namespace); ttErr == nil {
istioConfigList.Templates = models.CastIstioTemplatesCollection(tt)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeQuotaSpecs {
if qs, qsErr = in.k8s.GetQuotaSpecs(criteria.Namespace); qsErr == nil {
(&istioConfigList.QuotaSpecs).Parse(qs)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeQuotaSpecBindings {
if qb, qbErr = in.k8s.GetQuotaSpecBindings(criteria.Namespace); qbErr == nil {
(&istioConfigList.QuotaSpecBindings).Parse(qb)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludePolicies {
if pc, pcErr = in.k8s.GetPolicies(criteria.Namespace); pcErr == nil {
(&istioConfigList.Policies).Parse(pc)
}
}
}()
go func() {
defer wg.Done()
// MeshPolicies are not namespaced. They will only be listed for the namespace
// where Istio is deployed.
if criteria.IncludeMeshPolicies && criteria.Namespace == config.Get().IstioNamespace {
if mp, mpErr = in.k8s.GetMeshPolicies(criteria.Namespace); mpErr == nil {
(&istioConfigList.MeshPolicies).Parse(mp)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeClusterRbacConfigs && criteria.Namespace == config.Get().IstioNamespace {
if rc, rcErr = in.k8s.GetClusterRbacConfigs(criteria.Namespace); rcErr == nil {
(&istioConfigList.ClusterRbacConfigs).Parse(rc)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeServiceRoles {
if sr, srErr = in.k8s.GetServiceRoles(criteria.Namespace); srErr == nil {
(&istioConfigList.ServiceRoles).Parse(sr)
}
}
}()
go func() {
defer wg.Done()
if criteria.IncludeServiceRoleBindings {
if srb, srbErr = in.k8s.GetServiceRoleBindings(criteria.Namespace); srbErr == nil {
(&istioConfigList.ServiceRoleBindings).Parse(srb)
}
}
}()
wg.Wait()
for _, genErr := range []error{ggErr, vsErr, drErr, seErr, mrErr, qsErr, qbErr, aaErr, ttErr, mpErr, pcErr, rcErr, srErr, srbErr} {
if genErr != nil {
err = genErr
return models.IstioConfigList{}, err
}
}
return istioConfigList, nil
}
// GetIstioConfigDetails returns a specific Istio configuration object.
// It uses following parameters:
// - "namespace": namespace where configuration is stored
// - "objectType": type of the configuration
// - "objectSubtype": subtype of the configuration, used when objectType == "adapters" or "templates", empty/not used otherwise
// - "object": name of the configuration
func (in *IstioConfigService) GetIstioConfigDetails(namespace, objectType, objectSubtype, object string) (models.IstioConfigDetails, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "IstioConfigService", "GetIstioConfigDetails")
defer promtimer.ObserveNow(&err)
istioConfigDetail := models.IstioConfigDetails{}
istioConfigDetail.Namespace = models.Namespace{Name: namespace}
istioConfigDetail.ObjectType = objectType
var gw, vs, dr, se, qs, qb, r, a, t, pc, mp, rc, sr, srb kubernetes.IstioObject
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
canCreate, canUpdate, canDelete := getPermissions(in.k8s, namespace, objectType, objectSubtype)
istioConfigDetail.Permissions = models.ResourcePermissions{
Create: canCreate,
Update: canUpdate,
Delete: canDelete,
}
}()
switch objectType {
case Gateways:
if gw, err = in.k8s.GetGateway(namespace, object); err == nil {
istioConfigDetail.Gateway = &models.Gateway{}
istioConfigDetail.Gateway.Parse(gw)
}
case VirtualServices:
if vs, err = in.k8s.GetVirtualService(namespace, object); err == nil {
istioConfigDetail.VirtualService = &models.VirtualService{}
istioConfigDetail.VirtualService.Parse(vs)
}
case DestinationRules:
if dr, err = in.k8s.GetDestinationRule(namespace, object); err == nil {
istioConfigDetail.DestinationRule = &models.DestinationRule{}
istioConfigDetail.DestinationRule.Parse(dr)
}
case ServiceEntries:
if se, err = in.k8s.GetServiceEntry(namespace, object); err == nil {
istioConfigDetail.ServiceEntry = &models.ServiceEntry{}
istioConfigDetail.ServiceEntry.Parse(se)
}
case Rules:
if r, err = in.k8s.GetIstioRule(namespace, object); err == nil {
istioRule := models.CastIstioRule(r)
istioConfigDetail.Rule = &istioRule
}
case Adapters:
if a, err = in.k8s.GetAdapter(namespace, objectSubtype, object); err == nil {
adapter := models.CastIstioAdapter(a)
istioConfigDetail.Adapter = &adapter
}
case Templates:
if t, err = in.k8s.GetTemplate(namespace, objectSubtype, object); err == nil {
template := models.CastIstioTemplate(t)
istioConfigDetail.Template = &template
}
case QuotaSpecs:
if qs, err = in.k8s.GetQuotaSpec(namespace, object); err == nil {
istioConfigDetail.QuotaSpec = &models.QuotaSpec{}
istioConfigDetail.QuotaSpec.Parse(qs)
}
case QuotaSpecBindings:
if qb, err = in.k8s.GetQuotaSpecBinding(namespace, object); err == nil {
istioConfigDetail.QuotaSpecBinding = &models.QuotaSpecBinding{}
istioConfigDetail.QuotaSpecBinding.Parse(qb)
}
case Policies:
if pc, err = in.k8s.GetPolicy(namespace, object); err == nil {
istioConfigDetail.Policy = &models.Policy{}
istioConfigDetail.Policy.Parse(pc)
}
case MeshPolicies:
if mp, err = in.k8s.GetMeshPolicy(namespace, object); err == nil {
istioConfigDetail.MeshPolicy = &models.MeshPolicy{}
istioConfigDetail.MeshPolicy.Parse(mp)
}
case ClusterRbacConfigs:
if rc, err = in.k8s.GetClusterRbacConfig(namespace, object); err == nil {
istioConfigDetail.ClusterRbacConfig = &models.ClusterRbacConfig{}
istioConfigDetail.ClusterRbacConfig.Parse(rc)
}
case ServiceRoles:
if sr, err = in.k8s.GetServiceRole(namespace, object); err == nil {
istioConfigDetail.ServiceRole = &models.ServiceRole{}
istioConfigDetail.ServiceRole.Parse(sr)
}
case ServiceRoleBindings:
if srb, err = in.k8s.GetServiceRoleBinding(namespace, object); err == nil {
istioConfigDetail.ServiceRoleBinding = &models.ServiceRoleBinding{}
istioConfigDetail.ServiceRoleBinding.Parse(srb)
}
default:
err = fmt.Errorf("Object type not found: %v", objectType)
}
wg.Wait()
return istioConfigDetail, err
}
// GetIstioAPI provides the Kubernetes API that manages this Istio resource type
// or empty string if it's not managed
func GetIstioAPI(resourceType string) string {
return resourceTypesToAPI[resourceType]
}
// ParseJsonForCreate checks whether a JSON body is well formed according to resourceType/subresourceType.
// It returns a json validated to be used in the Create operation, or an error to report in the handler layer.
func (in *IstioConfigService) ParseJsonForCreate(resourceType, subresourceType string, body []byte) (string, error) {
var err error
istioConfigDetail := models.IstioConfigDetails{}
apiVersion := apiToVersion[resourceTypesToAPI[resourceType]]
var kind string
var marshalled string
if resourceType == Adapters || resourceType == Templates {
kind = kubernetes.PluralType[subresourceType]
} else {
kind = kubernetes.PluralType[resourceType]
}
switch resourceType {
case Gateways:
istioConfigDetail.Gateway = &models.Gateway{}
err = json.Unmarshal(body, istioConfigDetail.Gateway)
case VirtualServices:
istioConfigDetail.VirtualService = &models.VirtualService{}
err = json.Unmarshal(body, istioConfigDetail.VirtualService)
case DestinationRules:
istioConfigDetail.DestinationRule = &models.DestinationRule{}
err = json.Unmarshal(body, istioConfigDetail.DestinationRule)
case ServiceEntries:
istioConfigDetail.ServiceEntry = &models.ServiceEntry{}
err = json.Unmarshal(body, istioConfigDetail.ServiceEntry)
case Rules:
istioConfigDetail.Rule = &models.IstioRule{}
err = json.Unmarshal(body, istioConfigDetail.Rule)
case Adapters:
istioConfigDetail.Adapter = &models.IstioAdapter{}
err = json.Unmarshal(body, istioConfigDetail.Adapter)
case Templates:
istioConfigDetail.Template = &models.IstioTemplate{}
err = json.Unmarshal(body, istioConfigDetail.Template)
case QuotaSpecs:
istioConfigDetail.QuotaSpec = &models.QuotaSpec{}
err = json.Unmarshal(body, istioConfigDetail.QuotaSpec)
case QuotaSpecBindings:
istioConfigDetail.QuotaSpecBinding = &models.QuotaSpecBinding{}
err = json.Unmarshal(body, istioConfigDetail.QuotaSpecBinding)
case Policies:
istioConfigDetail.Policy = &models.Policy{}
err = json.Unmarshal(body, istioConfigDetail.Policy)
case MeshPolicies:
istioConfigDetail.MeshPolicy = &models.MeshPolicy{}
err = json.Unmarshal(body, istioConfigDetail.MeshPolicy)
default:
err = fmt.Errorf("Object type not found: %v", resourceType)
}
if err != nil {
return "", err
}
// Prepend kind and apiVersion
marshalled = string(body)
marshalled = strings.TrimSpace(marshalled)
marshalled = "" +
"{\n" +
"\"kind\": \"" + kind + "\",\n" +
"\"apiVersion\": \"" + apiVersion + "\"," +
marshalled[1:]
return marshalled, nil
}
// DeleteIstioConfigDetail deletes the given Istio resource
func (in *IstioConfigService) DeleteIstioConfigDetail(api, namespace, resourceType, resourceSubtype, name string) (err error) {
promtimer := internalmetrics.GetGoFunctionMetric("business", "IstioConfigService", "DeleteIstioConfigDetail")
defer promtimer.ObserveNow(&err)
if resourceType == Adapters || resourceType == Templates {
err = in.k8s.DeleteIstioObject(api, namespace, resourceSubtype, name)
} else {
err = in.k8s.DeleteIstioObject(api, namespace, resourceType, name)
}
return err
}
func (in *IstioConfigService) UpdateIstioConfigDetail(api, namespace, resourceType, resourceSubtype, name, jsonPatch string) (models.IstioConfigDetails, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "IstioConfigService", "UpdateIstioConfigDetail")
defer promtimer.ObserveNow(&err)
return in.modifyIstioConfigDetail(api, namespace, resourceType, resourceSubtype, name, jsonPatch, false)
}
func (in *IstioConfigService) modifyIstioConfigDetail(api, namespace, resourceType, resourceSubtype, name, json string, create bool) (models.IstioConfigDetails, error) {
var err error
updatedType := resourceType
if resourceType == Adapters || resourceType == Templates {
updatedType = resourceSubtype
}
var result kubernetes.IstioObject
istioConfigDetail := models.IstioConfigDetails{}
istioConfigDetail.Namespace = models.Namespace{Name: namespace}
istioConfigDetail.ObjectType = resourceType
if create {
// Create new object
result, err = in.k8s.CreateIstioObject(api, namespace, updatedType, json)
} else {
// Update/patch existing object
result, err = in.k8s.UpdateIstioObject(api, namespace, updatedType, name, json)
}
if err != nil {
return istioConfigDetail, err
}
switch resourceType {
case Gateways:
istioConfigDetail.Gateway = &models.Gateway{}
istioConfigDetail.Gateway.Parse(result)
case VirtualServices:
istioConfigDetail.VirtualService = &models.VirtualService{}
istioConfigDetail.VirtualService.Parse(result)
case DestinationRules:
istioConfigDetail.DestinationRule = &models.DestinationRule{}
istioConfigDetail.DestinationRule.Parse(result)
case ServiceEntries:
istioConfigDetail.ServiceEntry = &models.ServiceEntry{}
istioConfigDetail.ServiceEntry.Parse(result)
case Rules:
istioRule := models.CastIstioRule(result)
istioConfigDetail.Rule = &istioRule
case Adapters:
adapter := models.CastIstioAdapter(result)
istioConfigDetail.Adapter = &adapter
case Templates:
template := models.CastIstioTemplate(result)
istioConfigDetail.Template = &template
case QuotaSpecs:
istioConfigDetail.QuotaSpec = &models.QuotaSpec{}
istioConfigDetail.QuotaSpec.Parse(result)
case QuotaSpecBindings:
istioConfigDetail.QuotaSpecBinding = &models.QuotaSpecBinding{}
istioConfigDetail.QuotaSpecBinding.Parse(result)
case Policies:
istioConfigDetail.Policy = &models.Policy{}
istioConfigDetail.Policy.Parse(result)
case MeshPolicies:
istioConfigDetail.MeshPolicy = &models.MeshPolicy{}
istioConfigDetail.MeshPolicy.Parse(result)
case ClusterRbacConfigs:
istioConfigDetail.ClusterRbacConfig = &models.ClusterRbacConfig{}
istioConfigDetail.ClusterRbacConfig.Parse(result)
case ServiceRoles:
istioConfigDetail.ServiceRole = &models.ServiceRole{}
istioConfigDetail.ServiceRole.Parse(result)
case ServiceRoleBindings:
istioConfigDetail.ServiceRoleBinding = &models.ServiceRoleBinding{}
istioConfigDetail.ServiceRoleBinding.Parse(result)
default:
err = fmt.Errorf("Object type not found: %v", resourceType)
}
return istioConfigDetail, err
}
func (in *IstioConfigService) CreateIstioConfigDetail(api, namespace, resourceType, resourceSubtype string, body []byte) (models.IstioConfigDetails, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "IstioConfigService", "CreateIstioConfigDetail")
defer promtimer.ObserveNow(&err)
json, err := in.ParseJsonForCreate(resourceType, resourceSubtype, body)
if err != nil {
return models.IstioConfigDetails{}, errors2.NewBadRequest(err.Error())
}
return in.modifyIstioConfigDetail(api, namespace, resourceType, resourceSubtype, "", json, true)
}
func getPermissions(k8s kubernetes.IstioClientInterface, namespace, objectType, objectSubtype string) (bool, bool, bool) {
var canCreate, canPatch, canUpdate, canDelete bool
if api, ok := resourceTypesToAPI[objectType]; ok {
// objectType will always match the api used in adapters/templates
// but if objectSubtype is present it should be used as resourceType
resourceType := objectType
if objectSubtype != "" {
resourceType = objectSubtype
}
ssars, permErr := k8s.GetSelfSubjectAccessReview(namespace, api, resourceType, []string{"create", "patch", "update", "delete"})
if permErr == nil {
for _, ssar := range ssars {
if ssar.Spec.ResourceAttributes != nil {
switch ssar.Spec.ResourceAttributes.Verb {
case "create":
canCreate = ssar.Status.Allowed
case "patch":
canPatch = ssar.Status.Allowed
case "update":
canUpdate = ssar.Status.Allowed
case "delete":
canDelete = ssar.Status.Allowed
}
}
}
} else {
log.Errorf("Error getting permissions [namespace: %s, api: %s, resourceType: %s]: %v", namespace, api, resourceType, permErr)
}
}
return canCreate, (canUpdate || canPatch), canDelete
}
func (in *IstioConfigService) MeshWidemTLSStatus(namespaces []string) (string, error) {
mpp, mpErr := in.hasMeshPolicyEnabled(namespaces)
if mpErr != nil {
return "", mpErr
}
drp, drErr := in.hasDestinationRuleEnabled(namespaces)
if drErr != nil {
return "", drErr
}
if drp && mpp {
return MeshmTLSEnabled, nil
} else if drp || mpp {
return MeshmTLSPartiallyEnabled, nil
}
return MeshmTLSNotEnabled, nil
}
func (in *IstioConfigService) hasMeshPolicyEnabled(namespaces []string) (bool, error) {
if len(namespaces) < 1 {
return false, fmt.Errorf("Can't find MeshPolicies without a namespace")
}
// MeshPolicies are not namespaced, so any namespace the user has access to
// will work for retrieving all the MeshPolicies.
mps, err := in.k8s.GetMeshPolicies(namespaces[0])
if err != nil {
return false, err
}
for _, mp := range mps {
// The MeshPolicy must be named "default"
if meshMeta := mp.GetObjectMeta(); meshMeta.Name != "default" {
continue
}
// It is not globally enabled when it has specific targets
targets, targetPresent := mp.GetSpec()["targets"]
specificTarget := targetPresent && len(targets.([]interface{})) > 0
if specificTarget {
continue
}
// It is globally enabled when a peer has mtls enabled
peers, peersPresent := mp.GetSpec()["peers"]
if !peersPresent {
continue
}
for _, peer := range peers.([]interface{}) {
peerMap := peer.(map[string]interface{})
if mtls, present := peerMap["mtls"]; present {
if mtlsMap, ok := mtls.(map[string]interface{}); ok {
// mTLS enabled in case there is an empty map or mode is STRICT
if mode, found := mtlsMap["mode"]; !found || mode == "STRICT" {
return true, nil
}
} else {
// mTLS enabled in case mtls object is empty
return true, nil
}
}
}
}
return false, nil
}
func (in *IstioConfigService) hasDestinationRuleEnabled(namespaces []string) (bool, error) {
drs, err := in.getAllDestinationRules(namespaces)
if err != nil {
return false, err
}
mtlsEnabled := false
for _, dr := range drs {
// Following the suggested procedure to enable mesh-wide mTLS, host might be '*.local':
// https://istio.io/docs/tasks/security/authn-policy/#globally-enabling-istio-mutual-tls
host, hostPresent := dr.GetSpec()["host"]
if !hostPresent || host != "*.local" {
continue
}
if trafficPolicy, trafficPresent := dr.GetSpec()["trafficPolicy"]; trafficPresent {
if trafficCasted, ok := trafficPolicy.(map[string]interface{}); ok {
if tls, found := trafficCasted["tls"]; found {
if tlsCasted, ok := tls.(map[string]interface{}); ok {
if mode, found := tlsCasted["mode"]; found {
if modeCasted, ok := mode.(string); ok {
if modeCasted == "ISTIO_MUTUAL" {
mtlsEnabled = true
break
}
}
}
}
}
}
}
}
return mtlsEnabled, nil
}
func (in *IstioConfigService) getAllDestinationRules(namespaces []string) ([]kubernetes.IstioObject, error) {
drChan := make(chan []kubernetes.IstioObject, len(namespaces))
errChan := make(chan error, 1)
wg := sync.WaitGroup{}
wg.Add(len(namespaces))
for _, namespace := range namespaces {
go func(ns string) {
defer wg.Done()
drs, err := in.k8s.GetDestinationRules(ns, "")
if err != nil {
errChan <- err
return
}
drChan <- drs
}(namespace)
}
wg.Wait()
close(errChan)
close(drChan)
for err := range errChan {
if err != nil {
return nil, err
}
}
allDestinationRules := make([]kubernetes.IstioObject, 0)
for drs := range drChan {
allDestinationRules = append(allDestinationRules, drs...)
}
return allDestinationRules, nil
}


@@ -0,0 +1,282 @@
package business
import (
"fmt"
"sync"
"github.com/kiali/kiali/business/checkers"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus/internalmetrics"
v1 "k8s.io/api/core/v1"
)
type IstioValidationsService struct {
k8s kubernetes.IstioClientInterface
businessLayer *Layer
}
type ObjectChecker interface {
Check() models.IstioValidations
}
// GetValidations returns an IstioValidations object with all the checks found when running
// all the enabled checkers. If service is "" then the whole namespace is validated.
func (in *IstioValidationsService) GetValidations(namespace, service string) (models.IstioValidations, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "IstioValidationsService", "GetValidations")
defer promtimer.ObserveNow(&err)
// Ensure the service or namespace exists. Do we need to block on this?
if service != "" {
if _, err := in.k8s.GetService(namespace, service); err != nil {
return nil, err
}
} else {
if _, err := in.k8s.GetNamespace(namespace); err != nil {
return nil, err
}
}
wg := sync.WaitGroup{}
errChan := make(chan error, 1)
var istioDetails kubernetes.IstioDetails
var services []v1.Service
var workloads models.WorkloadList
var gatewaysPerNamespace [][]kubernetes.IstioObject
var mtlsDetails kubernetes.MTLSDetails
wg.Add(5) // We need to add these here to make sure we don't execute wg.Wait() before scheduler has started goroutines
// NoServiceChecker is not necessary if we target a single service - those components with validation errors won't show up in the query
go in.fetchServices(&services, namespace, service, errChan, &wg)
// We fetch without target service as some validations will require full-namespace details
go in.fetchDetails(&istioDetails, namespace, errChan, &wg)
go in.fetchWorkloads(&workloads, namespace, errChan, &wg)
go in.fetchGatewaysPerNamespace(&gatewaysPerNamespace, errChan, &wg)
go in.fetchNonLocalmTLSConfigs(&mtlsDetails, errChan, &wg)
wg.Wait()
close(errChan)
for e := range errChan {
if e != nil { // Check that default value wasn't returned
return nil, e
}
}
objectCheckers := in.getAllObjectCheckers(namespace, istioDetails, services, workloads, gatewaysPerNamespace, mtlsDetails)
// Get group validations for same kind istio objects
return runObjectCheckers(objectCheckers), nil
}
func (in *IstioValidationsService) getAllObjectCheckers(namespace string, istioDetails kubernetes.IstioDetails, services []v1.Service, workloads models.WorkloadList, gatewaysPerNamespace [][]kubernetes.IstioObject, mtlsDetails kubernetes.MTLSDetails) []ObjectChecker {
return []ObjectChecker{
checkers.VirtualServiceChecker{Namespace: namespace, DestinationRules: istioDetails.DestinationRules, VirtualServices: istioDetails.VirtualServices},
checkers.NoServiceChecker{Namespace: namespace, IstioDetails: &istioDetails, Services: services, WorkloadList: workloads, GatewaysPerNamespace: gatewaysPerNamespace},
checkers.DestinationRulesChecker{DestinationRules: istioDetails.DestinationRules, MTLSDetails: mtlsDetails},
checkers.GatewayChecker{GatewaysPerNamespace: gatewaysPerNamespace, Namespace: namespace},
}
}
func (in *IstioValidationsService) GetIstioObjectValidations(namespace string, objectType string, object string) (models.IstioValidations, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "IstioValidationsService", "GetIstioObjectValidations")
defer promtimer.ObserveNow(&err)
var istioDetails kubernetes.IstioDetails
var services []v1.Service
var workloads models.WorkloadList
var gatewaysPerNamespace [][]kubernetes.IstioObject
var mtlsDetails kubernetes.MTLSDetails
var objectCheckers []ObjectChecker
wg := sync.WaitGroup{}
errChan := make(chan error, 1)
// Get all the Istio objects from a Namespace and all gateways from every namespace
wg.Add(5)
go in.fetchDetails(&istioDetails, namespace, errChan, &wg)
go in.fetchServices(&services, namespace, "", errChan, &wg)
go in.fetchWorkloads(&workloads, namespace, errChan, &wg)
go in.fetchGatewaysPerNamespace(&gatewaysPerNamespace, errChan, &wg)
go in.fetchNonLocalmTLSConfigs(&mtlsDetails, errChan, &wg)
wg.Wait()
switch objectType {
case Gateways:
objectCheckers = []ObjectChecker{
checkers.GatewayChecker{GatewaysPerNamespace: gatewaysPerNamespace, Namespace: namespace},
}
case VirtualServices:
virtualServiceChecker := checkers.VirtualServiceChecker{Namespace: namespace, VirtualServices: istioDetails.VirtualServices, DestinationRules: istioDetails.DestinationRules}
noServiceChecker := checkers.NoServiceChecker{Namespace: namespace, Services: services, IstioDetails: &istioDetails, WorkloadList: workloads, GatewaysPerNamespace: gatewaysPerNamespace}
objectCheckers = []ObjectChecker{noServiceChecker, virtualServiceChecker}
case DestinationRules:
destinationRulesChecker := checkers.DestinationRulesChecker{DestinationRules: istioDetails.DestinationRules, MTLSDetails: mtlsDetails}
noServiceChecker := checkers.NoServiceChecker{Namespace: namespace, Services: services, IstioDetails: &istioDetails, WorkloadList: workloads, GatewaysPerNamespace: gatewaysPerNamespace}
objectCheckers = []ObjectChecker{noServiceChecker, destinationRulesChecker}
case ServiceEntries:
// Validations on ServiceEntries are not yet in place
case Rules:
// Validations on Istio Rules are not yet in place
case Templates:
// Validations on Templates are not yet in place
// TODO Support subtypes
case Adapters:
// Validations on Adapters are not yet in place
// TODO Support subtypes
case QuotaSpecs:
// Validations on QuotaSpecs are not yet in place
case QuotaSpecBindings:
// Validations on QuotaSpecBindings are not yet in place
case Policies:
// Validations on Policies are not yet in place
case MeshPolicies:
// Validations on MeshPolicies are not yet in place
case ClusterRbacConfigs:
// Validations on ClusterRbacConfigs are not yet in place
case ServiceRoles:
// Validations on ServiceRoles are not yet in place
case ServiceRoleBindings:
// Validations on ServiceRoleBindings are not yet in place
default:
err = fmt.Errorf("Object type not found: %v", objectType)
}
close(errChan)
for e := range errChan {
if e != nil { // a non-nil value means one of the fetchers failed
err = e
return nil, err
}
}
if objectCheckers == nil {
return models.IstioValidations{}, err
}
return runObjectCheckers(objectCheckers).FilterByKey(models.ObjectTypeSingular[objectType], object), nil
}
func runObjectCheckers(objectCheckers []ObjectChecker) models.IstioValidations {
objectTypeValidations := models.IstioValidations{}
// Run checks for each IstioObject type
for _, objectChecker := range objectCheckers {
objectTypeValidations.MergeValidations(objectChecker.Check())
}
return objectTypeValidations
}
// The idea used here: if errChan already holds an error, subsequent fetchers skip their work, effectively cancelling
// the request (when scheduled in that order). Writes to the buffered errChan go through a non-blocking select, so once
// the channel is full further errors are simply dropped; a single error is enough to cancel the whole request.
func (in *IstioValidationsService) fetchGatewaysPerNamespace(gatewaysPerNamespace *[][]kubernetes.IstioObject, errChan chan error, wg *sync.WaitGroup) {
defer wg.Done()
if nss, err := in.businessLayer.Namespace.GetNamespaces(); err == nil {
gwss := make([][]kubernetes.IstioObject, len(nss))
for i := range nss {
gwss[i] = make([]kubernetes.IstioObject, 0)
}
*gatewaysPerNamespace = gwss
wg.Add(len(nss))
for i, ns := range nss {
go fetchNoEntry(&gwss[i], ns.Name, in.k8s.GetGateways, wg, errChan)
}
} else {
select {
case errChan <- err:
default:
}
}
}
func fetchNoEntry(rValue *[]kubernetes.IstioObject, namespace string, fetcher func(string) ([]kubernetes.IstioObject, error), wg *sync.WaitGroup, errChan chan error) {
defer wg.Done()
if len(errChan) == 0 {
fetched, err := fetcher(namespace)
*rValue = append(*rValue, fetched...)
if err != nil {
select {
case errChan <- err:
default:
}
}
}
}
func (in *IstioValidationsService) fetchServices(rValue *[]v1.Service, namespace, serviceName string, errChan chan error, wg *sync.WaitGroup) {
defer wg.Done()
if len(errChan) == 0 {
services, err := in.k8s.GetServices(namespace, nil)
if err != nil {
select {
case errChan <- err:
default:
}
} else {
*rValue = services
}
}
}
func (in *IstioValidationsService) fetchWorkloads(rValue *models.WorkloadList, namespace string, errChan chan error, wg *sync.WaitGroup) {
defer wg.Done()
if len(errChan) == 0 {
workloadList, err := in.businessLayer.Workload.GetWorkloadList(namespace)
if err != nil {
select {
case errChan <- err:
default:
}
} else {
*rValue = workloadList
}
}
}
func (in *IstioValidationsService) fetchDetails(rValue *kubernetes.IstioDetails, namespace string, errChan chan error, wg *sync.WaitGroup) {
defer wg.Done()
if len(errChan) == 0 {
istioDetails, err := in.k8s.GetIstioDetails(namespace, "")
if err != nil {
select {
case errChan <- err:
default:
}
} else {
*rValue = *istioDetails
}
}
}
func (in *IstioValidationsService) fetchNonLocalmTLSConfigs(mtlsDetails *kubernetes.MTLSDetails, errChan chan error, wg *sync.WaitGroup) {
defer wg.Done()
if len(errChan) > 0 {
return
}
namespaces, err := in.businessLayer.Namespace.GetNamespaces()
if err != nil {
errChan <- err
return
}
nsNames := make([]string, 0, len(namespaces))
for _, ns := range namespaces {
nsNames = append(nsNames, ns.Name)
}
destinationRules, err := in.businessLayer.IstioConfig.getAllDestinationRules(nsNames)
if err != nil {
errChan <- err
} else {
mtlsDetails.DestinationRules = destinationRules
}
}
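The fetcher pattern used throughout this file — one buffered error slot, a `len(errChan)` check to skip work once something has failed, and a non-blocking send that drops surplus errors — can be sketched in isolation. This is a minimal illustration, not Kiali API; `collectErrors` is a hypothetical name:

```go
package main

import (
	"fmt"
	"sync"
)

// collectErrors runs n concurrent fetchers that all fail against a
// single-slot error channel and returns how many errors survive.
func collectErrors(n int) int {
	errChan := make(chan error, 1)
	wg := sync.WaitGroup{}
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func(id int) {
			defer wg.Done()
			if len(errChan) > 0 {
				return // another fetcher already failed; skip the work
			}
			select {
			case errChan <- fmt.Errorf("fetcher %d failed", id):
			default: // buffer full: one error is enough to cancel the request
			}
		}(i)
	}
	wg.Wait()
	close(errChan)
	count := 0
	for range errChan {
		count++
	}
	return count
}

func main() {
	fmt.Println(collectErrors(5)) // always 1: surplus errors are dropped
}
```

The same shape backs `fetchNoEntry` and the `fetchXxx` helpers above: a single buffered slot makes the first failure authoritative while keeping every later send non-blocking.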

vendor/github.com/kiali/kiali/business/jaeger_helper.go generated vendored Normal file

@@ -0,0 +1,119 @@
package business
import (
"encoding/json"
"errors"
"fmt"
"github.com/kiali/kiali/log"
"io/ioutil"
"net/http"
"net/url"
"time"
"github.com/kiali/kiali/config"
)
type Trace struct {
Id string `json:"traceID"`
}
type RequestTrace struct {
Traces []Trace `json:"data"`
}
type JaegerServices struct {
Services []string `json:"data"`
}
var (
JaegerAvailable = true
)
func getErrorTracesFromJaeger(namespace string, service string) (errorTraces int, err error) {
if !JaegerAvailable {
return -1, errors.New("Jaeger is not available")
}
if config.Get().ExternalServices.Jaeger.Service != "" {
u, errParse := url.Parse(fmt.Sprintf("http://%s/api/traces", config.Get().ExternalServices.Jaeger.Service))
if errParse != nil {
log.Errorf("Error parsing Jaeger URL fetching Error Traces: %s", errParse)
err = errParse
} else {
q := u.Query()
q.Set("lookback", "1h")
q.Set("service", fmt.Sprintf("%s.%s", service, namespace))
t := time.Now().UnixNano() / 1000
q.Set("start", fmt.Sprintf("%d", t-60*60*1000*1000))
q.Set("end", fmt.Sprintf("%d", t))
q.Set("tags", "{\"error\":\"true\"}")
u.RawQuery = q.Encode()
timeout := 1000 * time.Millisecond
client := http.Client{
Timeout: timeout,
}
resp, reqError := client.Get(u.String())
if reqError != nil {
err = reqError
} else {
defer resp.Body.Close()
body, errRead := ioutil.ReadAll(resp.Body)
if errRead != nil {
log.Errorf("Error Reading Jaeger Response fetching Error Traces: %s", errRead)
err = errRead
return -1, err
}
var traces RequestTrace
if errMarshal := json.Unmarshal(body, &traces); errMarshal != nil {
log.Errorf("Error unmarshalling Jaeger Response fetching Error Traces: %s", errMarshal)
err = errMarshal
return -1, err
}
errorTraces = len(traces.Traces)
}
}
}
return errorTraces, err
}
// GetServices queries Jaeger for the list of service names it knows about.
func GetServices() (services JaegerServices, err error) {
services = JaegerServices{Services: []string{}}
u, err := url.Parse(fmt.Sprintf("http://%s/api/services", config.Get().ExternalServices.Jaeger.Service))
if err != nil {
log.Errorf("Error parse Jaeger URL fetching Services: %s", err)
return services, err
}
timeout := 1000 * time.Millisecond
client := http.Client{
Timeout: timeout,
}
resp, reqError := client.Get(u.String())
if reqError != nil {
err = reqError
} else {
defer resp.Body.Close()
body, errRead := ioutil.ReadAll(resp.Body)
if errRead != nil {
log.Errorf("Error Reading Jaeger Response fetching Services: %s", errRead)
err = errRead
return services, err
}
if errMarshal := json.Unmarshal(body, &services); errMarshal != nil {
log.Errorf("Error unmarshalling Jaeger Response fetching Services: %s", errMarshal)
err = errMarshal
return services, err
}
}
return services, err
}
func contains(a []string, x string) bool {
for _, n := range a {
if x == n {
return true
}
}
return false
}

vendor/github.com/kiali/kiali/business/layer.go generated vendored Normal file

@@ -0,0 +1,66 @@
package business
import (
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/prometheus"
)
// Layer is a container for fast access to inner services
type Layer struct {
Svc SvcService
Health HealthService
Validations IstioValidationsService
IstioConfig IstioConfigService
Workload WorkloadService
App AppService
Namespace NamespaceService
k8s kubernetes.IstioClientInterface
}
// Global business.Layer; currently only used for tests to inject mocks,
// whereas production code recreates services in a stateless way
var layer *Layer
// Get the business.Layer, create it if necessary
func Get() (*Layer, error) {
if layer != nil {
return layer, nil
}
k8s, err := kubernetes.NewClient()
if err != nil {
return nil, err
}
prom, err := prometheus.NewClient()
if err != nil {
return nil, err
}
// The business layer keeps minimal state, as the kubernetes package maintains its own cache
SetWithBackends(k8s, prom)
return layer, nil
}
// SetWithBackends creates all services with injected clients to external APIs
func SetWithBackends(k8s kubernetes.IstioClientInterface, prom prometheus.ClientInterface) *Layer {
layer = NewWithBackends(k8s, prom)
return layer
}
// NewWithBackends creates the business layer using the passed k8s and prom clients
func NewWithBackends(k8s kubernetes.IstioClientInterface, prom prometheus.ClientInterface) *Layer {
temporaryLayer := &Layer{}
temporaryLayer.Health = HealthService{prom: prom, k8s: k8s}
temporaryLayer.Svc = SvcService{prom: prom, k8s: k8s, businessLayer: temporaryLayer}
temporaryLayer.IstioConfig = IstioConfigService{k8s: k8s}
temporaryLayer.Workload = WorkloadService{k8s: k8s, prom: prom, businessLayer: temporaryLayer}
temporaryLayer.Validations = IstioValidationsService{k8s: k8s, businessLayer: temporaryLayer}
temporaryLayer.App = AppService{prom: prom, k8s: k8s}
temporaryLayer.Namespace = NewNamespaceService(k8s)
temporaryLayer.k8s = k8s
return temporaryLayer
}
func (in *Layer) Stop() {
if in.k8s != nil {
in.k8s.Stop()
}
}

vendor/github.com/kiali/kiali/business/namespaces.go generated vendored Normal file

@@ -0,0 +1,101 @@
package business
import (
"regexp"
osv1 "github.com/openshift/api/project/v1"
kv1 "k8s.io/api/core/v1"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus/internalmetrics"
)
// NamespaceService deals with fetching k8s namespaces / OpenShift projects and converting them to Kiali models
type NamespaceService struct {
k8s kubernetes.IstioClientInterface
hasProjects bool
}
func NewNamespaceService(k8s kubernetes.IstioClientInterface) NamespaceService {
hasProjects := k8s != nil && k8s.IsOpenShift()
return NamespaceService{
k8s: k8s,
hasProjects: hasProjects,
}
}
// GetNamespaces returns the list of accessible namespaces / projects
func (in *NamespaceService) GetNamespaces() ([]models.Namespace, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "NamespaceService", "GetNamespaces")
defer promtimer.ObserveNow(&err)
namespaces := []models.Namespace{}
// If we are running in OpenShift, we will use the project names since these are the list of accessible namespaces
if in.hasProjects {
projects, err2 := in.k8s.GetProjects()
if err2 != nil {
err = err2
return nil, err
}
// Everything is good, return the projects we got from OpenShift / kube-project
namespaces = models.CastProjectCollection(projects)
} else {
nss, errNs := in.k8s.GetNamespaces()
if errNs != nil {
err = errNs // avoid shadowing the named error observed by promtimer
return nil, err
}
namespaces = models.CastNamespaceCollection(nss)
}
result := namespaces
excludes := config.Get().API.Namespaces.Exclude
if len(excludes) > 0 {
result = []models.Namespace{}
NAMESPACES:
for _, namespace := range namespaces {
for _, excludePattern := range excludes {
if match, _ := regexp.MatchString(excludePattern, namespace.Name); match {
continue NAMESPACES
}
}
result = append(result, namespace)
}
}
return result, nil
}
// GetNamespace returns the definition of the specified namespace.
func (in *NamespaceService) GetNamespace(namespace string) (*models.Namespace, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "NamespaceService", "GetNamespace")
defer promtimer.ObserveNow(&err)
if in.hasProjects {
var project *osv1.Project
project, err = in.k8s.GetProject(namespace)
if err != nil {
return nil, err
}
result := models.CastProject(*project)
return &result, nil
} else {
var ns *kv1.Namespace
ns, err = in.k8s.GetNamespace(namespace)
if err != nil {
return nil, err
}
result := models.CastNamespace(*ns)
return &result, nil
}
}
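The exclusion pass in GetNamespaces (a labeled continue over the configured regexp patterns) can be reduced to a few lines. A minimal sketch, assuming plain string slices in place of the models.Namespace and config types; `filterExcluded` is a hypothetical name:

```go
package main

import (
	"fmt"
	"regexp"
)

// filterExcluded drops every name matched by any exclusion pattern,
// mirroring the NAMESPACES loop in GetNamespaces.
func filterExcluded(names, excludes []string) []string {
	result := []string{}
NAMES:
	for _, name := range names {
		for _, pattern := range excludes {
			if match, _ := regexp.MatchString(pattern, name); match {
				continue NAMES
			}
		}
		result = append(result, name)
	}
	return result
}

func main() {
	names := []string{"bookinfo", "istio-system", "kube-public", "kube-system"}
	fmt.Println(filterExcluded(names, []string{"istio-.*", "kube-.*"})) // [bookinfo]
}
```

Note that `regexp.MatchString` is unanchored, so a pattern like `kube-.*` also matches `my-kube-ns`; patterns intended as prefixes should be anchored with `^`.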

vendor/github.com/kiali/kiali/business/services.go generated vendored Normal file

@@ -0,0 +1,267 @@
package business
import (
"sync"
"time"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
"github.com/kiali/kiali/prometheus/internalmetrics"
)
// SvcService deals with fetching istio/kubernetes services related content and converting it to Kiali models
type SvcService struct {
prom prometheus.ClientInterface
k8s kubernetes.IstioClientInterface
businessLayer *Layer
}
// GetServiceList returns a list of all services for a given Namespace
func (in *SvcService) GetServiceList(namespace string) (*models.ServiceList, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "SvcService", "GetServiceList")
defer promtimer.ObserveNow(&err)
var svcs []v1.Service
var pods []v1.Pod
wg := sync.WaitGroup{}
wg.Add(2)
errChan := make(chan error, 2)
go func() {
defer wg.Done()
var err2 error
svcs, err2 = in.k8s.GetServices(namespace, nil)
if err2 != nil {
log.Errorf("Error fetching Services per namespace %s: %s", namespace, err2)
errChan <- err2
}
}()
go func() {
defer wg.Done()
var err2 error
pods, err2 = in.k8s.GetPods(namespace, "")
if err2 != nil {
log.Errorf("Error fetching Pods per namespace %s: %s", namespace, err2)
errChan <- err2
}
}()
wg.Wait()
if len(errChan) != 0 {
err = <-errChan
return nil, err
}
// Convert to Kiali model
return in.buildServiceList(models.Namespace{Name: namespace}, svcs, pods), nil
}
func (in *SvcService) buildServiceList(namespace models.Namespace, svcs []v1.Service, pods []v1.Pod) *models.ServiceList {
services := make([]models.ServiceOverview, len(svcs))
conf := config.Get()
// Convert each k8s service into our model
for i, item := range svcs {
sPods := kubernetes.FilterPodsForService(&item, pods)
// Check if the service has an Istio sidecar deployed
mPods := models.Pods{}
mPods.Parse(sPods)
hasSideCar := mPods.HasIstioSideCar()
// Check if the service carries the app label required by Istio
_, appLabel := item.Spec.Selector[conf.IstioLabels.AppLabelName]
services[i] = models.ServiceOverview{
Name: item.Name,
IstioSidecar: hasSideCar,
AppLabel: appLabel,
}
}
return &models.ServiceList{Namespace: namespace, Services: services}
}
// GetService returns a single service and associated data using the interval and queryTime
func (in *SvcService) GetService(namespace, service, interval string, queryTime time.Time) (*models.ServiceDetails, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "SvcService", "GetService")
defer promtimer.ObserveNow(&err)
svc, eps, err := in.getServiceDefinition(namespace, service)
if err != nil {
return nil, err
}
var pods []v1.Pod
var hth models.ServiceHealth
var vs, dr []kubernetes.IstioObject
var sWk map[string][]prometheus.Workload
var ws models.Workloads
wg := sync.WaitGroup{}
wg.Add(9)
errChan := make(chan error, 8)
labelsSelector := labels.Set(svc.Spec.Selector).String()
go func() {
defer wg.Done()
var err2 error
pods, err2 = in.k8s.GetPods(namespace, labelsSelector)
if err2 != nil {
errChan <- err2
}
}()
go func() {
defer wg.Done()
var err2 error
hth, err2 = in.businessLayer.Health.GetServiceHealth(namespace, service, interval, queryTime)
if err2 != nil {
errChan <- err2
}
}()
go func() {
defer wg.Done()
var err2 error
vs, err2 = in.k8s.GetVirtualServices(namespace, service)
if err2 != nil {
errChan <- err2
}
}()
go func() {
defer wg.Done()
var err2 error
dr, err2 = in.k8s.GetDestinationRules(namespace, service)
if err2 != nil {
errChan <- err2
}
}()
go func() {
defer wg.Done()
var err2 error
ns, err2 := in.businessLayer.Namespace.GetNamespace(namespace)
if err2 != nil {
log.Errorf("Error fetching details of namespace %s: %s", namespace, err2)
errChan <- err2
return // ns is nil here; dereferencing it below would panic
}
sWk, err2 = in.prom.GetSourceWorkloads(ns.Name, ns.CreationTimestamp, service)
if err2 != nil {
log.Errorf("Error fetching SourceWorkloads per namespace %s and service %s: %s", namespace, service, err2)
errChan <- err2
}
}()
go func() {
defer wg.Done()
var err2 error
ws, err2 = fetchWorkloads(in.k8s, namespace, labelsSelector)
if err2 != nil {
log.Errorf("Error fetching Workloads per namespace %s and service %s: %s", namespace, service, err2)
errChan <- err2
}
}()
var vsCreate, vsUpdate, vsDelete bool
go func() {
defer wg.Done()
vsCreate, vsUpdate, vsDelete = getPermissions(in.k8s, namespace, VirtualServices, "")
}()
var drCreate, drUpdate, drDelete bool
go func() {
defer wg.Done()
drCreate, drUpdate, drDelete = getPermissions(in.k8s, namespace, DestinationRules, "")
}()
var eTraces int
go func() {
// Maybe a future jaeger business layer
defer wg.Done()
var err2 error
eTraces, err2 = getErrorTracesFromJaeger(namespace, service)
if err2 != nil {
// Jaeger may be unavailable; log instead of failing the whole request,
// and avoid racing on the outer err variable from this goroutine
log.Errorf("Error fetching error traces from Jaeger for service %s: %s", service, err2)
}
}()
wg.Wait()
if len(errChan) != 0 {
err = <-errChan
return nil, err
}
wo := models.WorkloadOverviews{}
for _, w := range ws {
wi := &models.WorkloadListItem{}
wi.ParseWorkload(w)
wo = append(wo, wi)
}
s := models.ServiceDetails{Workloads: wo, Health: hth}
s.SetService(svc)
s.SetPods(kubernetes.FilterPodsForEndpoints(eps, pods))
s.SetEndpoints(eps)
s.SetVirtualServices(vs, vsCreate, vsUpdate, vsDelete)
s.SetDestinationRules(dr, drCreate, drUpdate, drDelete)
s.SetSourceWorkloads(sWk)
s.SetErrorTraces(eTraces)
return &s, nil
}
// GetServiceDefinition returns a single service definition (the service object and endpoints), no istio or runtime information
func (in *SvcService) GetServiceDefinition(namespace, service string) (*models.ServiceDetails, error) {
var err error
promtimer := internalmetrics.GetGoFunctionMetric("business", "SvcService", "GetServiceDefinition")
defer promtimer.ObserveNow(&err)
svc, eps, err := in.getServiceDefinition(namespace, service)
if err != nil {
return nil, err
}
s := models.ServiceDetails{}
s.SetService(svc)
s.SetEndpoints(eps)
return &s, nil
}
func (in *SvcService) getServiceDefinition(namespace, service string) (svc *v1.Service, eps *v1.Endpoints, err error) {
wg := sync.WaitGroup{}
wg.Add(2)
errChan := make(chan error, 2)
go func() {
defer wg.Done()
var err2 error
svc, err2 = in.k8s.GetService(namespace, service)
if err2 != nil {
log.Errorf("Error fetching Service per namespace %s and service %s: %s", namespace, service, err2)
errChan <- err2
}
}()
go func() {
defer wg.Done()
var err2 error
eps, err2 = in.k8s.GetEndpoints(namespace, service)
if err2 != nil {
log.Errorf("Error fetching Endpoints per namespace %s and service %s: %s", namespace, service, err2)
errChan <- err2
}
}()
wg.Wait()
if len(errChan) != 0 {
err = <-errChan
return nil, nil, err
}
return svc, eps, nil
}

vendor/github.com/kiali/kiali/business/test_util.go generated vendored Normal file

@@ -0,0 +1,712 @@
package business
import (
"time"
osappsv1 "github.com/openshift/api/apps/v1"
"k8s.io/api/apps/v1beta1"
"k8s.io/api/apps/v1beta2"
"k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes/kubetest"
)
// Consolidate fake/mock data used in tests per package
func FakeDeployments() []v1beta1.Deployment {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1beta1.Deployment{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "Deployment",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta1.DeploymentSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin"},
},
},
},
Status: v1beta1.DeploymentStatus{
Replicas: 1,
AvailableReplicas: 1,
UnavailableReplicas: 0,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "Deployment",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v2",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta1.DeploymentSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v2"},
},
},
},
Status: v1beta1.DeploymentStatus{
Replicas: 2,
AvailableReplicas: 1,
UnavailableReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "Deployment",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v3",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta1.DeploymentSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{},
},
},
},
Status: v1beta1.DeploymentStatus{
Replicas: 2,
AvailableReplicas: 0,
UnavailableReplicas: 2,
},
},
}
}
func FakeDuplicatedDeployments() []v1beta1.Deployment {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1beta1.Deployment{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "Deployment",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "duplicated-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta1.DeploymentSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "duplicated", versionLabel: "v1"},
},
},
},
Status: v1beta1.DeploymentStatus{
Replicas: 1,
AvailableReplicas: 1,
UnavailableReplicas: 0,
},
},
}
}
func FakeReplicaSets() []v1beta2.ReplicaSet {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1beta2.ReplicaSet{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicaSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta2.ReplicaSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin"},
},
},
},
Status: v1beta2.ReplicaSetStatus{
Replicas: 1,
AvailableReplicas: 1,
ReadyReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicaSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v2",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta2.ReplicaSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v2"},
},
},
},
Status: v1beta2.ReplicaSetStatus{
Replicas: 2,
AvailableReplicas: 1,
ReadyReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicaSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v3",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta2.ReplicaSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{},
},
},
},
Status: v1beta2.ReplicaSetStatus{
Replicas: 2,
AvailableReplicas: 0,
ReadyReplicas: 2,
},
},
}
}
func FakeDuplicatedReplicaSets() []v1beta2.ReplicaSet {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
controller := true
return []v1beta2.ReplicaSet{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicaSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "duplicated-v1-12345",
CreationTimestamp: meta_v1.NewTime(t1),
OwnerReferences: []meta_v1.OwnerReference{meta_v1.OwnerReference{
Controller: &controller,
Kind: "Deployment",
Name: "duplicated-v1",
}},
},
Spec: v1beta2.ReplicaSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "duplicated", versionLabel: "v1"},
},
},
},
Status: v1beta2.ReplicaSetStatus{
Replicas: 1,
AvailableReplicas: 1,
ReadyReplicas: 1,
},
},
}
}
func FakeReplicationControllers() []v1.ReplicationController {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1.ReplicationController{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicationController",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1.ReplicationControllerSpec{
Template: &v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin"},
},
},
},
Status: v1.ReplicationControllerStatus{
Replicas: 1,
AvailableReplicas: 1,
ReadyReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicationController",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v2",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1.ReplicationControllerSpec{
Template: &v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v2"},
},
},
},
Status: v1.ReplicationControllerStatus{
Replicas: 2,
AvailableReplicas: 1,
ReadyReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicationController",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v3",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1.ReplicationControllerSpec{
Template: &v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{},
},
},
},
Status: v1.ReplicationControllerStatus{
Replicas: 2,
AvailableReplicas: 0,
ReadyReplicas: 2,
},
},
}
}
func FakeDeploymentConfigs() []osappsv1.DeploymentConfig {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []osappsv1.DeploymentConfig{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "DeploymentConfig",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: osappsv1.DeploymentConfigSpec{
Template: &v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin"},
},
},
},
Status: osappsv1.DeploymentConfigStatus{
Replicas: 1,
AvailableReplicas: 1,
UnavailableReplicas: 0,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "DeploymentConfig",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v2",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: osappsv1.DeploymentConfigSpec{
Template: &v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v2"},
},
},
},
Status: osappsv1.DeploymentConfigStatus{
Replicas: 2,
AvailableReplicas: 1,
UnavailableReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "DeploymentConfig",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v3",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: osappsv1.DeploymentConfigSpec{
Template: &v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{},
},
},
},
Status: osappsv1.DeploymentConfigStatus{
Replicas: 2,
AvailableReplicas: 0,
UnavailableReplicas: 2,
},
},
}
}
func FakeStatefulSets() []v1beta2.StatefulSet {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1beta2.StatefulSet{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "StatefulSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta2.StatefulSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin"},
},
},
},
Status: v1beta2.StatefulSetStatus{
Replicas: 1,
ReadyReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "StatefulSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v2",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta2.StatefulSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v2"},
},
},
},
Status: v1beta2.StatefulSetStatus{
Replicas: 2,
ReadyReplicas: 1,
},
},
{
TypeMeta: meta_v1.TypeMeta{
Kind: "StatefulSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v3",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta2.StatefulSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{},
},
},
},
Status: v1beta2.StatefulSetStatus{
Replicas: 2,
ReadyReplicas: 2,
},
},
}
}
func FakeDuplicatedStatefulSets() []v1beta2.StatefulSet {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1beta2.StatefulSet{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "StatefulSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "duplicated-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta2.StatefulSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "duplicated", versionLabel: "v1"},
},
},
},
Status: v1beta2.StatefulSetStatus{
Replicas: 1,
ReadyReplicas: 1,
},
},
}
}
func FakeDepSyncedWithRS() []v1beta1.Deployment {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1beta1.Deployment{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "Deployment",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "details-v1",
CreationTimestamp: meta_v1.NewTime(t1),
},
Spec: v1beta1.DeploymentSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "details", versionLabel: "v1"},
},
},
},
Status: v1beta1.DeploymentStatus{
Replicas: 1,
AvailableReplicas: 1,
UnavailableReplicas: 0,
},
},
}
}
func FakeRSSyncedWithPods() []v1beta2.ReplicaSet {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
controller := true
return []v1beta2.ReplicaSet{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "ReplicaSet",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "details-v1-3618568057",
CreationTimestamp: meta_v1.NewTime(t1),
OwnerReferences: []meta_v1.OwnerReference{meta_v1.OwnerReference{
Controller: &controller,
Kind: "Deployment",
Name: "details-v1",
}},
},
Spec: v1beta2.ReplicaSetSpec{
Template: v1.PodTemplateSpec{
ObjectMeta: meta_v1.ObjectMeta{
Labels: map[string]string{appLabel: "details", versionLabel: "v1"},
},
},
},
Status: v1beta2.ReplicaSetStatus{
Replicas: 1,
AvailableReplicas: 1,
ReadyReplicas: 0,
},
},
}
}
func FakePodsSyncedWithDeployments() []v1.Pod {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
controller := true
return []v1.Pod{
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "details-v1-3618568057-dnkjp",
CreationTimestamp: meta_v1.NewTime(t1),
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v1"},
OwnerReferences: []meta_v1.OwnerReference{{
Controller: &controller,
Kind: "ReplicaSet",
Name: "details-v1-3618568057",
}},
Annotations: kubetest.FakeIstioAnnotations(),
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{Name: "details", Image: "whatever"},
{Name: "istio-proxy", Image: "docker.io/istio/proxy:0.7.1"},
},
InitContainers: []v1.Container{
{Name: "istio-init", Image: "docker.io/istio/proxy_init:0.7.1"},
{Name: "enable-core-dump", Image: "alpine"},
},
},
},
}
}
func FakePodsSyncedWithDuplicated() []v1.Pod {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
controller := true
return []v1.Pod{
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "duplicated-v1-3618568057-1",
CreationTimestamp: meta_v1.NewTime(t1),
Labels: map[string]string{appLabel: "duplicated", versionLabel: "v1"},
OwnerReferences: []meta_v1.OwnerReference{{
Controller: &controller,
Kind: "StatefulSet",
Name: "duplicated-v1",
}},
Annotations: kubetest.FakeIstioAnnotations(),
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{Name: "details", Image: "whatever"},
{Name: "istio-proxy", Image: "docker.io/istio/proxy:0.7.1"},
},
InitContainers: []v1.Container{
{Name: "istio-init", Image: "docker.io/istio/proxy_init:0.7.1"},
{Name: "enable-core-dump", Image: "alpine"},
},
},
},
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "duplicated-v1-3618568057-3",
CreationTimestamp: meta_v1.NewTime(t1),
Labels: map[string]string{appLabel: "duplicated", versionLabel: "v1"},
OwnerReferences: []meta_v1.OwnerReference{{
Controller: &controller,
Kind: "StatefulSet",
Name: "duplicated-v1",
}},
Annotations: kubetest.FakeIstioAnnotations(),
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{Name: "details", Image: "whatever"},
{Name: "istio-proxy", Image: "docker.io/istio/proxy:0.7.1"},
},
InitContainers: []v1.Container{
{Name: "istio-init", Image: "docker.io/istio/proxy_init:0.7.1"},
{Name: "enable-core-dump", Image: "alpine"},
},
},
},
}
}
func FakePodsNoController() []v1.Pod {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
return []v1.Pod{
{
TypeMeta: meta_v1.TypeMeta{
Kind: "Pod",
},
ObjectMeta: meta_v1.ObjectMeta{
Name: "orphan-pod",
CreationTimestamp: meta_v1.NewTime(t1),
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v1"},
Annotations: kubetest.FakeIstioAnnotations(),
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{Name: "details", Image: "whatever"},
{Name: "istio-proxy", Image: "docker.io/istio/proxy:0.7.1"},
},
InitContainers: []v1.Container{
{Name: "istio-init", Image: "docker.io/istio/proxy_init:0.7.1"},
{Name: "enable-core-dump", Image: "alpine"},
},
},
},
}
}
func FakePodsFromDaemonSet() []v1.Pod {
conf := config.NewConfig()
config.Set(conf)
appLabel := conf.IstioLabels.AppLabelName
versionLabel := conf.IstioLabels.VersionLabelName
t1, _ := time.Parse(time.RFC822Z, "08 Mar 18 17:44 +0300")
controller := true
return []v1.Pod{
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "daemon-pod",
CreationTimestamp: meta_v1.NewTime(t1),
Labels: map[string]string{appLabel: "httpbin", versionLabel: "v1"},
OwnerReferences: []meta_v1.OwnerReference{{
Controller: &controller,
Kind: "DaemonSet",
Name: "daemon-controller",
}},
Annotations: kubetest.FakeIstioAnnotations(),
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{Name: "details", Image: "whatever"},
{Name: "istio-proxy", Image: "docker.io/istio/proxy:0.7.1"},
},
InitContainers: []v1.Container{
{Name: "istio-init", Image: "docker.io/istio/proxy_init:0.7.1"},
{Name: "enable-core-dump", Image: "alpine"},
},
},
},
}
}
func FakeServices() []v1.Service {
return []v1.Service{
{
ObjectMeta: meta_v1.ObjectMeta{Name: "httpbin"},
Spec: v1.ServiceSpec{
Selector: map[string]string{"app": "httpbin"},
},
},
}
}

1028
vendor/github.com/kiali/kiali/business/workloads.go generated vendored Normal file

File diff suppressed because it is too large

63
vendor/github.com/kiali/kiali/config/authentication.go generated vendored Normal file

@@ -0,0 +1,63 @@
package config
import (
"net/http"
"strings"
"github.com/kiali/kiali/log"
)
func AuthenticationHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
statusCode := http.StatusOK
errMsg := ""
conf := Get()
if strings.Contains(r.Header.Get("Authorization"), "Bearer") {
user, err := ValidateToken(strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer "))
// Internal header used to propagate the subject of the request for audit purposes
r.Header.Add("Kiali-User", user)
if err != nil {
log.Warning("Token error: ", err)
statusCode = http.StatusUnauthorized
}
} else if conf.Server.Credentials.Username != "" || conf.Server.Credentials.Password != "" {
u, p, ok := r.BasicAuth()
// Internal header used to propagate the subject of the request for audit purposes
r.Header.Add("Kiali-User", u)
if !ok || conf.Server.Credentials.Username != u || conf.Server.Credentials.Password != p {
statusCode = http.StatusUnauthorized
if conf.Server.Credentials.AllowAnonymous {
// we should never get here - the credentials config should have failed validation at startup
// if anonymous access is allowed with a non-empty username/password. But just in case, log a message.
log.Debugf("Access to the server endpoint allows anonymous access - but user [%v] provided invalid credentials. Request denied.", u)
}
}
} else if conf.Server.Credentials.AllowAnonymous {
log.Trace("Access to the server endpoint is not secured with credentials - letting request come in")
} else {
statusCode = 520 // our specific error code that indicates to the client that we are missing the secret
errMsg = "Credentials are missing. Create a secret and restart Kiali. Please refer to the documentation for more details."
}
if statusCode != http.StatusOK && errMsg == "" {
errMsg = http.StatusText(statusCode)
}
switch statusCode {
case http.StatusOK:
next.ServeHTTP(w, r)
case http.StatusUnauthorized:
// If the X-Auth-Type-Kiali-UI header is set to 1 (the Kiali UI sets it when calling the API),
// answer with the xBasic scheme; otherwise answer with the standard Basic scheme
if r.Header.Get("X-Auth-Type-Kiali-UI") == "1" {
w.Header().Set("WWW-Authenticate", "xBasic realm=\"Kiali\"")
} else {
w.Header().Set("WWW-Authenticate", "Basic realm=\"Kiali\"")
}
http.Error(w, errMsg, statusCode)
default:
http.Error(w, errMsg, statusCode)
log.Errorf("Cannot send response to unauthorized user: %v (%v)", statusCode, errMsg)
}
})
}

392
vendor/github.com/kiali/kiali/config/config.go generated vendored Normal file

@@ -0,0 +1,392 @@
package config
import (
"fmt"
"io/ioutil"
"os"
"regexp"
"strconv"
"strings"
"time"
yaml "gopkg.in/yaml.v2"
"github.com/kiali/kiali/config/security"
"github.com/kiali/kiali/log"
)
// Environment vars can define some default values.
// NOTE: If you add a new variable, don't forget to update README.adoc
const (
EnvIdentityCertFile = "IDENTITY_CERT_FILE"
EnvIdentityPrivateKeyFile = "IDENTITY_PRIVATE_KEY_FILE"
EnvPrometheusServiceURL = "PROMETHEUS_SERVICE_URL"
EnvPrometheusCustomMetricsURL = "PROMETHEUS_CUSTOM_METRICS_URL"
EnvInCluster = "IN_CLUSTER"
EnvIstioIdentityDomain = "ISTIO_IDENTITY_DOMAIN"
EnvIstioSidecarAnnotation = "ISTIO_SIDECAR_ANNOTATION"
EnvIstioUrlServiceVersion = "ISTIO_URL_SERVICE_VERSION"
EnvApiNamespacesExclude = "API_NAMESPACES_EXCLUDE"
EnvServerAddress = "SERVER_ADDRESS"
EnvServerPort = "SERVER_PORT"
EnvServerCredentialsUsername = "SERVER_CREDENTIALS_USERNAME"
EnvServerCredentialsPassword = "SERVER_CREDENTIALS_PASSWORD"
EnvServerAllowAnonymousAccess = "SERVER_ALLOW_ANONYMOUS_ACCESS"
EnvWebRoot = "SERVER_WEB_ROOT"
EnvServerStaticContentRootDirectory = "SERVER_STATIC_CONTENT_ROOT_DIRECTORY"
EnvServerCORSAllowAll = "SERVER_CORS_ALLOW_ALL"
EnvServerAuditLog = "SERVER_AUDIT_LOG"
EnvGrafanaDisplayLink = "GRAFANA_DISPLAY_LINK"
EnvGrafanaURL = "GRAFANA_URL"
EnvGrafanaServiceNamespace = "GRAFANA_SERVICE_NAMESPACE"
EnvGrafanaService = "GRAFANA_SERVICE"
EnvGrafanaWorkloadDashboardPattern = "GRAFANA_WORKLOAD_DASHBOARD_PATTERN"
EnvGrafanaServiceDashboardPattern = "GRAFANA_SERVICE_DASHBOARD_PATTERN"
EnvGrafanaVarNamespace = "GRAFANA_VAR_NAMESPACE"
EnvGrafanaVarService = "GRAFANA_VAR_SERVICE"
EnvGrafanaVarWorkload = "GRAFANA_VAR_WORKLOAD"
EnvGrafanaAPIKey = "GRAFANA_API_KEY"
EnvGrafanaUsername = "GRAFANA_USERNAME"
EnvGrafanaPassword = "GRAFANA_PASSWORD"
EnvJaegerURL = "JAEGER_URL"
EnvJaegerService = "JAEGER_SERVICE"
EnvLoginTokenSigningKey = "LOGIN_TOKEN_SIGNING_KEY"
EnvLoginTokenExpirationSeconds = "LOGIN_TOKEN_EXPIRATION_SECONDS"
EnvIstioNamespace = "ISTIO_NAMESPACE"
EnvIstioLabelNameApp = "ISTIO_LABEL_NAME_APP"
EnvIstioLabelNameVersion = "ISTIO_LABEL_NAME_VERSION"
EnvKubernetesBurst = "KUBERNETES_BURST"
EnvKubernetesQPS = "KUBERNETES_QPS"
EnvKubernetesCacheEnabled = "KUBERNETES_CACHE_ENABLED"
EnvKubernetesCacheDuration = "KUBERNETES_CACHE_DURATION"
)
// The versions that Kiali requires
const (
IstioVersionSupported = ">= 1.0"
MaistraVersionSupported = ">= 0.1.0"
)
// Global configuration for the application.
var configuration *Config
// Server configuration
type Server struct {
Address string `yaml:",omitempty"`
Port int `yaml:",omitempty"`
Credentials security.Credentials `yaml:",omitempty"`
WebRoot string `yaml:"web_root,omitempty"`
StaticContentRootDirectory string `yaml:"static_content_root_directory,omitempty"`
CORSAllowAll bool `yaml:"cors_allow_all,omitempty"`
AuditLog bool `yaml:"audit_log,omitempty"`
}
// GrafanaConfig describes configuration used for Grafana links
type GrafanaConfig struct {
DisplayLink bool `yaml:"display_link"`
URL string `yaml:"url"`
ServiceNamespace string `yaml:"service_namespace"`
Service string `yaml:"service"`
WorkloadDashboardPattern string `yaml:"workload_dashboard_pattern"`
ServiceDashboardPattern string `yaml:"service_dashboard_pattern"`
VarNamespace string `yaml:"var_namespace"`
VarService string `yaml:"var_service"`
VarWorkload string `yaml:"var_workload"`
APIKey string `yaml:"api_key"`
Username string `yaml:"username"`
Password string `yaml:"password"`
}
// JaegerConfig describes configuration used for jaeger links
type JaegerConfig struct {
URL string `yaml:"url"`
Service string `yaml:"service"`
}
// IstioConfig describes configuration used for istio links
type IstioConfig struct {
UrlServiceVersion string `yaml:"url_service_version"`
IstioIdentityDomain string `yaml:"istio_identity_domain,omitempty"`
IstioSidecarAnnotation string `yaml:"istio_sidecar_annotation,omitempty"`
}
// ExternalServices holds configurations for other systems that Kiali depends on
type ExternalServices struct {
Istio IstioConfig `yaml:"istio,omitempty"`
PrometheusServiceURL string `yaml:"prometheus_service_url,omitempty"`
PrometheusCustomMetricsURL string `yaml:"prometheus_custom_metrics_url,omitempty"`
Grafana GrafanaConfig `yaml:"grafana,omitempty"`
Jaeger JaegerConfig `yaml:"jaeger,omitempty"`
}
// LoginToken holds config used in token-based authentication
type LoginToken struct {
SigningKey []byte `yaml:"signing_key,omitempty"`
ExpirationSeconds int64 `yaml:"expiration_seconds,omitempty"`
}
// IstioLabels holds configuration about the labels required by Istio
type IstioLabels struct {
AppLabelName string `yaml:"app_label_name,omitempty" json:"appLabelName"`
VersionLabelName string `yaml:"version_label_name,omitempty" json:"versionLabelName"`
}
// Kubernetes client configuration
type KubernetesConfig struct {
Burst int `yaml:"burst,omitempty"`
QPS float32 `yaml:"qps,omitempty"`
CacheEnabled bool `yaml:"cache_enabled,omitempty"`
CacheDuration int64 `yaml:"cache_duration,omitempty"`
}
// ApiConfig contains the namespace-filtering configuration for the API
type ApiConfig struct {
Namespaces ApiNamespacesConfig
}
// ApiNamespacesConfig holds a list of regex patterns for namespaces to exclude (a blacklist)
type ApiNamespacesConfig struct {
Exclude []string
}
// Config defines full YAML configuration.
type Config struct {
Identity security.Identity `yaml:",omitempty"`
Server Server `yaml:",omitempty"`
InCluster bool `yaml:"in_cluster,omitempty"`
ExternalServices ExternalServices `yaml:"external_services,omitempty"`
LoginToken LoginToken `yaml:"login_token,omitempty"`
IstioNamespace string `yaml:"istio_namespace,omitempty"`
IstioLabels IstioLabels `yaml:"istio_labels,omitempty"`
KubernetesConfig KubernetesConfig `yaml:"kubernetes_config,omitempty"`
API ApiConfig `yaml:"api,omitempty"`
}
// NewConfig creates a default Config struct
func NewConfig() (c *Config) {
c = new(Config)
c.Identity.CertFile = getDefaultString(EnvIdentityCertFile, "")
c.Identity.PrivateKeyFile = getDefaultString(EnvIdentityPrivateKeyFile, "")
c.InCluster = getDefaultBool(EnvInCluster, true)
c.IstioNamespace = strings.TrimSpace(getDefaultString(EnvIstioNamespace, "istio-system"))
c.IstioLabels.AppLabelName = strings.TrimSpace(getDefaultString(EnvIstioLabelNameApp, "app"))
c.IstioLabels.VersionLabelName = strings.TrimSpace(getDefaultString(EnvIstioLabelNameVersion, "version"))
c.API.Namespaces.Exclude = getDefaultStringArray(EnvApiNamespacesExclude, "istio-operator,kube.*,openshift.*,ibm.*")
// Server configuration
c.Server.Address = strings.TrimSpace(getDefaultString(EnvServerAddress, ""))
c.Server.Port = getDefaultInt(EnvServerPort, 20000)
c.Server.Credentials = security.Credentials{
Username: getDefaultString(EnvServerCredentialsUsername, ""),
Password: getDefaultString(EnvServerCredentialsPassword, ""),
AllowAnonymous: getDefaultBool(EnvServerAllowAnonymousAccess, false),
}
c.Server.WebRoot = strings.TrimSpace(getDefaultString(EnvWebRoot, "/"))
c.Server.StaticContentRootDirectory = strings.TrimSpace(getDefaultString(EnvServerStaticContentRootDirectory, "/opt/kiali/console"))
c.Server.CORSAllowAll = getDefaultBool(EnvServerCORSAllowAll, false)
c.Server.AuditLog = getDefaultBool(EnvServerAuditLog, true)
// Prometheus configuration
c.ExternalServices.PrometheusServiceURL = strings.TrimSpace(getDefaultString(EnvPrometheusServiceURL, "http://prometheus.istio-system:9090"))
c.ExternalServices.PrometheusCustomMetricsURL = strings.TrimSpace(getDefaultString(EnvPrometheusCustomMetricsURL, c.ExternalServices.PrometheusServiceURL))
// Grafana Configuration
c.ExternalServices.Grafana.DisplayLink = getDefaultBool(EnvGrafanaDisplayLink, true)
c.ExternalServices.Grafana.URL = strings.TrimSpace(getDefaultString(EnvGrafanaURL, ""))
c.ExternalServices.Grafana.ServiceNamespace = strings.TrimSpace(getDefaultString(EnvGrafanaServiceNamespace, "istio-system"))
c.ExternalServices.Grafana.Service = strings.TrimSpace(getDefaultString(EnvGrafanaService, "grafana"))
c.ExternalServices.Grafana.WorkloadDashboardPattern = strings.TrimSpace(getDefaultString(EnvGrafanaWorkloadDashboardPattern, "Istio%20Workload%20Dashboard"))
c.ExternalServices.Grafana.ServiceDashboardPattern = strings.TrimSpace(getDefaultString(EnvGrafanaServiceDashboardPattern, "Istio%20Service%20Dashboard"))
c.ExternalServices.Grafana.VarNamespace = strings.TrimSpace(getDefaultString(EnvGrafanaVarNamespace, "var-namespace"))
c.ExternalServices.Grafana.VarService = strings.TrimSpace(getDefaultString(EnvGrafanaVarService, "var-service"))
c.ExternalServices.Grafana.VarWorkload = strings.TrimSpace(getDefaultString(EnvGrafanaVarWorkload, "var-workload"))
c.ExternalServices.Grafana.APIKey = strings.TrimSpace(getDefaultString(EnvGrafanaAPIKey, ""))
c.ExternalServices.Grafana.Username = strings.TrimSpace(getDefaultString(EnvGrafanaUsername, ""))
c.ExternalServices.Grafana.Password = strings.TrimSpace(getDefaultString(EnvGrafanaPassword, ""))
if c.ExternalServices.Grafana.Username != "" && c.ExternalServices.Grafana.Password == "" {
log.Error("Grafana username (\"GRAFANA_USERNAME\") requires that Grafana password (\"GRAFANA_PASSWORD\") is set.")
}
// Jaeger Configuration
c.ExternalServices.Jaeger.URL = strings.TrimSpace(getDefaultString(EnvJaegerURL, ""))
c.ExternalServices.Jaeger.Service = strings.TrimSpace(getDefaultString(EnvJaegerService, "jaeger-query"))
// Istio Configuration
c.ExternalServices.Istio.IstioIdentityDomain = strings.TrimSpace(getDefaultString(EnvIstioIdentityDomain, "svc.cluster.local"))
c.ExternalServices.Istio.IstioSidecarAnnotation = strings.TrimSpace(getDefaultString(EnvIstioSidecarAnnotation, "sidecar.istio.io/status"))
c.ExternalServices.Istio.UrlServiceVersion = strings.TrimSpace(getDefaultString(EnvIstioUrlServiceVersion, "http://istio-pilot:8080/version"))
// Token-based authentication Configuration
c.LoginToken.SigningKey = []byte(strings.TrimSpace(getDefaultString(EnvLoginTokenSigningKey, "kiali")))
c.LoginToken.ExpirationSeconds = getDefaultInt64(EnvLoginTokenExpirationSeconds, 24*3600)
// Kubernetes client Configuration
c.KubernetesConfig.Burst = getDefaultInt(EnvKubernetesBurst, 200)
c.KubernetesConfig.QPS = getDefaultFloat32(EnvKubernetesQPS, 175)
c.KubernetesConfig.CacheEnabled = getDefaultBool(EnvKubernetesCacheEnabled, false)
c.KubernetesConfig.CacheDuration = getDefaultInt64(EnvKubernetesCacheDuration, time.Duration(5*time.Minute).Nanoseconds())
trimmedExclusionPatterns := []string{}
for _, entry := range c.API.Namespaces.Exclude {
exclusionPattern := strings.TrimSpace(entry)
if _, err := regexp.Compile(exclusionPattern); err != nil {
log.Errorf("Invalid namespace exclude entry, [%s] is not a valid regex pattern: %v", exclusionPattern, err)
} else {
trimmedExclusionPatterns = append(trimmedExclusionPatterns, strings.TrimSpace(exclusionPattern))
}
}
c.API.Namespaces.Exclude = trimmedExclusionPatterns
return
}
// Get the global Config
func Get() (conf *Config) {
return configuration
}
// Set the global Config
func Set(conf *Config) {
configuration = conf
}
func getDefaultString(envvar string, defaultValue string) (retVal string) {
retVal = os.Getenv(envvar)
if retVal == "" {
retVal = defaultValue
}
return
}
func getDefaultStringArray(envvar string, defaultValue string) (retVal []string) {
csv := os.Getenv(envvar)
if csv == "" {
csv = defaultValue
}
retVal = strings.Split(csv, ",")
return
}
func getDefaultInt(envvar string, defaultValue int) (retVal int) {
retValString := os.Getenv(envvar)
if retValString == "" {
retVal = defaultValue
} else {
if num, err := strconv.Atoi(retValString); err != nil {
log.Warningf("Invalid number for envvar [%v]. err=%v", envvar, err)
retVal = defaultValue
} else {
retVal = num
}
}
return
}
func getDefaultInt64(envvar string, defaultValue int64) (retVal int64) {
retValString := os.Getenv(envvar)
if retValString == "" {
retVal = defaultValue
} else {
if num, err := strconv.ParseInt(retValString, 10, 64); err != nil {
log.Warningf("Invalid number for envvar [%v]. err=%v", envvar, err)
retVal = defaultValue
} else {
retVal = num
}
}
return
}
func getDefaultBool(envvar string, defaultValue bool) (retVal bool) {
retValString := os.Getenv(envvar)
if retValString == "" {
retVal = defaultValue
} else {
if b, err := strconv.ParseBool(retValString); err != nil {
log.Warningf("Invalid boolean for envvar [%v]. err=%v", envvar, err)
retVal = defaultValue
} else {
retVal = b
}
}
return
}
func getDefaultFloat32(envvar string, defaultValue float32) (retVal float32) {
retValString := os.Getenv(envvar)
if retValString == "" {
retVal = defaultValue
} else {
if f, err := strconv.ParseFloat(retValString, 32); err != nil {
log.Warningf("Invalid float number for envvar [%v]. err=%v", envvar, err)
retVal = defaultValue
} else {
retVal = float32(f)
}
}
return
}
// String marshals the given Config into a YAML string
func (conf Config) String() (str string) {
str, err := Marshal(&conf)
if err != nil {
str = fmt.Sprintf("Failed to marshal config to string. err=%v", err)
log.Debugf(str)
}
return
}
// Unmarshal parses the given YAML string and returns its Config object representation.
func Unmarshal(yamlString string) (conf *Config, err error) {
conf = NewConfig()
err = yaml.Unmarshal([]byte(yamlString), &conf)
if err != nil {
return nil, fmt.Errorf("Failed to parse yaml data. error=%v", err)
}
return
}
// Marshal converts the Config object and returns its YAML string.
func Marshal(conf *Config) (yamlString string, err error) {
yamlBytes, err := yaml.Marshal(&conf)
if err != nil {
return "", fmt.Errorf("Failed to produce yaml. error=%v", err)
}
yamlString = string(yamlBytes)
return
}
// LoadFromFile reads the YAML from the given file, parses the content, and returns its Config object representation.
func LoadFromFile(filename string) (conf *Config, err error) {
log.Debugf("Reading YAML config from [%s]", filename)
fileContent, err := ioutil.ReadFile(filename)
if err != nil {
return nil, fmt.Errorf("Failed to load config file [%v]. error=%v", filename, err)
}
return Unmarshal(string(fileContent))
}
// SaveToFile converts the Config object and stores its YAML string into the given file, overwriting any data that is in the file.
func SaveToFile(filename string, conf *Config) (err error) {
fileContent, err := Marshal(conf)
if err != nil {
return fmt.Errorf("Failed to save config file [%v]. error=%v", filename, err)
}
log.Debugf("Writing YAML config to [%s]", filename)
err = ioutil.WriteFile(filename, []byte(fileContent), 0640)
return
}


@@ -0,0 +1,80 @@
package security
import (
"encoding/base64"
"fmt"
)
// Identity security details about a client.
type Identity struct {
CertFile string `yaml:"cert_file"`
PrivateKeyFile string `yaml:"private_key_file"`
}
// Credentials provides information when needing to authenticate to remote endpoints.
// Credentials are either a username/password or a bearer token, but not both.
type Credentials struct {
Username string `yaml:",omitempty"`
Password string `yaml:",omitempty"`
Token string `yaml:",omitempty"`
AllowAnonymous bool `yaml:"allow_anonymous,omitempty"`
}
// TLS options - SkipCertificateValidation will disable server certificate verification - the client
// will accept any certificate presented by the server and any host name in that certificate.
type TLS struct {
SkipCertificateValidation bool `yaml:"skip_certificate_validation,omitempty"`
}
// ValidateCredentials makes sure that if username is provided, so is password (and vice versa)
// and also makes sure if username/password is provided that token is not (and vice versa).
// It is valid to have nothing defined (no username, password, nor token), but if nothing is
// defined and the "AllowAnonymous" flag is false, this usually means the person who
// installed Kiali most likely forgot to set credentials - therefore access should always be denied.
// If nothing is defined and the "AllowAnonymous" flag is true, this means anonymous access is specifically allowed.
// If the "AllowAnonymous" flag is true but non-empty credentials are defined, an error results.
func (c *Credentials) ValidateCredentials() error {
if c.Username != "" && c.Password == "" {
return fmt.Errorf("A password must be provided if a username is set")
}
if c.Username == "" && c.Password != "" {
return fmt.Errorf("A username must be provided if a password is set")
}
if c.Username != "" && c.Token != "" {
return fmt.Errorf("Username/password cannot be specified if a token is specified also. Only Username/Password or Token can be set but not both")
}
if c.AllowAnonymous && (c.Username != "" || c.Token != "") {
return fmt.Errorf("The 'AllowAnonymous' flag is true but non-empty credentials exist")
}
return nil
}
// GetHTTPAuthHeader provides the authentication header name and value (can be empty), or an error
func (c *Credentials) GetHTTPAuthHeader() (headerName string, headerValue string, err error) {
// if no credentials are provided, this is fine, we are just going to do an insecure request
if c == nil {
return "", "", nil
}
if err := c.ValidateCredentials(); err != nil {
return "", "", err
}
if c.Token != "" {
headerName = "Authorization"
headerValue = fmt.Sprintf("Bearer %s", c.Token)
} else if c.Username != "" {
creds := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", c.Username, c.Password)))
headerName = "Authorization"
headerValue = fmt.Sprintf("Basic %s", creds)
} else {
headerName = ""
headerValue = ""
}
return headerName, headerValue, nil
}
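The username/password branch of GetHTTPAuthHeader is standard HTTP Basic encoding: base64 over "user:pass", prefixed with "Basic ". Extracted on its own (the `basicAuthHeader` name is ours):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// basicAuthHeader builds the header value GetHTTPAuthHeader produces in the
// username/password case: "Basic " + base64(user + ":" + pass).
func basicAuthHeader(user, pass string) string {
	creds := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%s:%s", user, pass)))
	return fmt.Sprintf("Basic %s", creds)
}

func main() {
	fmt.Println(basicAuthHeader("u", "p"))
}
```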

77
vendor/github.com/kiali/kiali/config/token.go generated vendored Normal file

@@ -0,0 +1,77 @@
package config
import (
"errors"
"fmt"
"time"
"github.com/dgrijalva/jwt-go"
)
// Structured version of Claims Section, as referenced at
// https://tools.ietf.org/html/rfc7519#section-4.1
// See examples for how to use this with your own claim types
type TokenClaim struct {
User string `json:"username"`
jwt.StandardClaims
}
// TokenGenerated tokenGenerated
//
// This is used for returning the token
//
// swagger:model TokenGenerated
type TokenGenerated struct {
// The authentication token
// A string with the authentication token for the user
//
// example: zI1NiIsIsR5cCI6IkpXVCJ9.ezJ1c2VybmFtZSI6ImFkbWluIiwiZXhwIjoxNTI5NTIzNjU0fQ.PPZvRGnR6VA4v7FmgSfQcGQr-VD
// required: true
Token string `json:"token"`
// The expired time for the token
// A string with the Datetime when the token will be expired
//
// example: 2018-06-20 19:40:54.116369887 +0000 UTC m=+43224.838320603
// required: true
ExpiredAt string `json:"expired_at"`
}
// GenerateToken generates a signed token with an expiration of <ExpirationSeconds> seconds
func GenerateToken(username string) (TokenGenerated, error) {
timeExpire := time.Now().Add(time.Second * time.Duration(Get().LoginToken.ExpirationSeconds))
claim := TokenClaim{
username,
jwt.StandardClaims{
ExpiresAt: timeExpire.Unix(),
},
}
token := jwt.NewWithClaims(jwt.SigningMethodHS256, claim)
ss, err := token.SignedString(Get().LoginToken.SigningKey)
if err != nil {
return TokenGenerated{}, err
}
return TokenGenerated{Token: ss, ExpiredAt: timeExpire.String()}, nil
}
// ValidateToken checks if the input token is still valid
func ValidateToken(tokenString string) (string, error) {
token, err := jwt.ParseWithClaims(tokenString, &TokenClaim{}, func(token *jwt.Token) (interface{}, error) {
return Get().LoginToken.SigningKey, nil
})
if err != nil {
return "", err
}
if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
return "", fmt.Errorf("Unexpected signing method: %s", token.Header["alg"])
}
if token.Valid {
user := ""
if sToken, ok := token.Claims.(*TokenClaim); ok {
user = sToken.User
}
return user, nil
}
return "", errors.New("Invalid token")
}
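GenerateToken and ValidateToken delegate to github.com/dgrijalva/jwt-go, but the core idea — an HMAC-SHA256 signature over the claims, verified with the same signing key — can be sketched with the standard library alone. This is a simplified stand-in, not the JWT wire format (no header, no base64url segments):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sign computes an HMAC-SHA256 signature over the payload, the same
// primitive the HS256 signing method uses under the hood.
func sign(payload string, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature and compares in constant time -
// the check a JWT library performs when validating an HS256 token.
func verify(payload, sig string, key []byte) bool {
	expected, err := hex.DecodeString(sig)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	key := []byte("kiali")
	s := sign(`{"username":"admin"}`, key)
	fmt.Println(verify(`{"username":"admin"}`, s, key), verify(`{"username":"eve"}`, s, key))
}
```

A real JWT additionally carries the expiry claim that ValidateToken relies on; here only the integrity check is shown.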


@@ -0,0 +1,80 @@
package appender
import (
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
)
// GlobalInfo caches information relevant to a single graph. It allows
// an appender to populate the cache so that it, or another appender,
// can re-use the information. A new instance is generated per graph and
// is initially empty.
type GlobalInfo struct {
Business *business.Layer
PromClient *prometheus.Client
ServiceEntries map[string]string
}
func NewGlobalInfo() *GlobalInfo {
return &GlobalInfo{}
}
// NamespaceInfo caches information relevant to a single namespace. It allows
// one appender to populate the cache and another to then re-use the information.
// A new instance is generated for each namespace of a single graph and is initially
// seeded with only Namespace.
type NamespaceInfo struct {
Namespace string // always provided
WorkloadList *models.WorkloadList
}
func NewNamespaceInfo(namespace string) *NamespaceInfo {
return &NamespaceInfo{Namespace: namespace}
}
func getWorkload(workloadName string, workloadList *models.WorkloadList) (*models.WorkloadListItem, bool) {
if workloadName == "" || workloadName == graph.Unknown {
return nil, false
}
for _, workload := range workloadList.Workloads {
if workload.Name == workloadName {
return &workload, true
}
}
return nil, false
}
func getAppWorkloads(app, version string, workloadList *models.WorkloadList) []models.WorkloadListItem {
cfg := config.Get()
appLabel := cfg.IstioLabels.AppLabelName
versionLabel := cfg.IstioLabels.VersionLabelName
result := []models.WorkloadListItem{}
versionOk := graph.IsOK(version)
for _, workload := range workloadList.Workloads {
if appVal, ok := workload.Labels[appLabel]; ok && app == appVal {
if !versionOk {
result = append(result, workload)
} else if versionVal, ok := workload.Labels[versionLabel]; ok && version == versionVal {
result = append(result, workload)
}
}
}
return result
}
// Appender is implemented by any code offering to append a service graph with
// supplemental information. On error the appender should panic and it will be
// handled as an error response.
type Appender interface {
// AppendGraph performs the appender work on the provided traffic map. The map
// may be initially empty. An appender is allowed to add or remove map entries.
AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo)
// Name returns a unique appender name, which is used to identify the appender (e.g. in the 'appenders' query param)
Name() string
}
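getAppWorkloads above is a two-level label match: always on the app label, and additionally on the version label when a concrete version is requested. The same logic detached from Kiali's models and config (the `workload` type and `matchAppVersion` name are ours; an empty version stands in for graph.IsOK returning false):

```go
package main

import "fmt"

type workload struct {
	Name   string
	Labels map[string]string
}

// matchAppVersion keeps workloads whose "app" label equals app and, when
// version is non-empty, whose "version" label equals version.
func matchAppVersion(ws []workload, app, version string) []workload {
	out := []workload{}
	for _, w := range ws {
		if w.Labels["app"] != app {
			continue
		}
		if version == "" || w.Labels["version"] == version {
			out = append(out, w)
		}
	}
	return out
}

func main() {
	ws := []workload{
		{Name: "details-v1", Labels: map[string]string{"app": "details", "version": "v1"}},
		{Name: "details-v2", Labels: map[string]string{"app": "details", "version": "v2"}},
	}
	fmt.Println(len(matchAppVersion(ws, "details", "v1")))
}
```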


@@ -0,0 +1,129 @@
package appender
import (
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/graph"
)
const DeadNodeAppenderName = "deadNode"
// DeadNodeAppender is responsible for removing from the graph unwanted nodes:
// - nodes for which there is no traffic reported and a backing workload that can't be found
// (presumably removed from K8S). (kiali-621)
// - this includes "unknown"
// - service nodes that are not service entries (kiali-1526) and for which there is no incoming
// error traffic and no outgoing edges (kiali-1326).
// Name: deadNode
type DeadNodeAppender struct{}
// Name implements Appender
func (a DeadNodeAppender) Name() string {
return DeadNodeAppenderName
}
// AppendGraph implements Appender
func (a DeadNodeAppender) AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
if len(trafficMap) == 0 {
return
}
var err error
if globalInfo.Business == nil {
globalInfo.Business, err = business.Get()
graph.CheckError(err)
}
if namespaceInfo.WorkloadList == nil {
workloadList, err := globalInfo.Business.Workload.GetWorkloadList(namespaceInfo.Namespace)
graph.CheckError(err)
namespaceInfo.WorkloadList = &workloadList
}
a.applyDeadNodes(trafficMap, globalInfo, namespaceInfo)
}
func (a DeadNodeAppender) applyDeadNodes(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
numRemoved := 0
for id, n := range trafficMap {
switch n.NodeType {
case graph.NodeTypeService:
// a service node with outgoing edges is never considered dead (or egress)
if len(n.Edges) > 0 {
continue
}
// A service node that is a service entry is never considered dead
if _, ok := n.Metadata["isServiceEntry"]; ok {
continue
}
// a service node with no incoming error traffic and no outgoing edges, is dead.
// Incoming non-error traffic can not raise the dead because it is caused by an
// edge case (pod life-cycle change) that we don't want to see.
isDead := true
ServiceCase:
for _, p := range graph.Protocols {
for _, r := range p.NodeRates {
if r.IsErr {
if errRate, hasErrRate := n.Metadata[r.Name]; hasErrRate && errRate.(float64) > 0 {
isDead = false
break ServiceCase
}
}
}
}
if isDead {
delete(trafficMap, id)
numRemoved++
}
default:
// a node with traffic is not dead, skip
isDead := true
DefaultCase:
for _, p := range graph.Protocols {
for _, r := range p.NodeRates {
if r.IsIn || r.IsOut {
if rate, hasRate := n.Metadata[r.Name]; hasRate && rate.(float64) > 0 {
isDead = false
break DefaultCase
}
}
}
}
if !isDead {
continue
}
// There are some node types that are never associated with backing workloads (such as versionless app nodes).
// Nodes of those types are never dead because their workload clearly can't be missing (they don't have workloads).
// - note: unknown is not saved by this rule (kiali-2078) - i.e. unknown nodes can be declared dead
if n.NodeType != graph.NodeTypeUnknown && !graph.IsOK(n.Workload) {
continue
}
// Remove if backing workload is not defined (always true for "unknown"), flag if there are no pods
if workload, found := getWorkload(n.Workload, namespaceInfo.WorkloadList); !found {
delete(trafficMap, id)
numRemoved++
} else {
if workload.PodCount == 0 {
n.Metadata["isDead"] = true
}
}
}
}
// If we removed any nodes we need to remove any edges to them as well...
if numRemoved == 0 {
return
}
for _, s := range trafficMap {
goodEdges := []*graph.Edge{}
for _, e := range s.Edges {
if _, found := trafficMap[e.Dest.ID]; found {
goodEdges = append(goodEdges, e)
}
}
s.Edges = goodEdges
}
}
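The final pass above removes dangling edges after dead nodes are deleted. A minimal runnable sketch of that pruning step, using simplified stand-in types (the real `graph.Node` and `graph.Edge` carry far more fields):

```go
package main

import "fmt"

// Edge and Node are simplified stand-ins for the kiali graph types;
// only the fields needed for edge pruning are kept.
type Edge struct{ DestID string }

type Node struct {
	ID    string
	Edges []*Edge
}

// pruneEdges mirrors the final pass of applyDeadNodes: once dead nodes have
// been deleted from the map, any edge pointing at a removed node is dropped.
func pruneEdges(trafficMap map[string]*Node) {
	for _, n := range trafficMap {
		goodEdges := []*Edge{}
		for _, e := range n.Edges {
			if _, found := trafficMap[e.DestID]; found {
				goodEdges = append(goodEdges, e)
			}
		}
		n.Edges = goodEdges
	}
}

func main() {
	tm := map[string]*Node{
		"a": {ID: "a", Edges: []*Edge{{DestID: "b"}, {DestID: "dead"}}},
		"b": {ID: "b"},
	}
	pruneEdges(tm)
	fmt.Println(len(tm["a"].Edges)) // prints 1: the edge to the removed node is gone
}
```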


@@ -0,0 +1,142 @@
package appender
import (
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
)
const IstioAppenderName = "istio"
// IstioAppender is responsible for badging nodes with special Istio significance:
// - CircuitBreaker: n.Metadata["hasCB"] = true
// - VirtualService: n.Metadata["hasVS"] = true
// Name: istio
type IstioAppender struct{}
// Name implements Appender
func (a IstioAppender) Name() string {
return IstioAppenderName
}
// AppendGraph implements Appender
func (a IstioAppender) AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
if len(trafficMap) == 0 {
return
}
if globalInfo.Business == nil {
var err error
globalInfo.Business, err = business.Get()
graph.CheckError(err)
}
addBadging(trafficMap, globalInfo, namespaceInfo)
addLabels(trafficMap, globalInfo)
}
func addBadging(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
// Currently no other appenders use DestinationRules or VirtualServices, so they are not cached in NamespaceInfo
istioCfg, err := globalInfo.Business.IstioConfig.GetIstioConfigList(business.IstioConfigCriteria{
IncludeDestinationRules: true,
IncludeVirtualServices: true,
Namespace: namespaceInfo.Namespace,
})
graph.CheckError(err)
applyCircuitBreakers(trafficMap, namespaceInfo.Namespace, istioCfg)
applyVirtualServices(trafficMap, namespaceInfo.Namespace, istioCfg)
}
func applyCircuitBreakers(trafficMap graph.TrafficMap, namespace string, istioCfg models.IstioConfigList) {
NODES:
for _, n := range trafficMap {
// Skip the check if this node is outside the requested namespace, we limit badging to the requested namespaces
if n.Namespace != namespace {
continue
}
// Note, Because DestinationRules are applied to services we limit CB badges to service nodes and app nodes.
// Whether we should add to workload nodes is debatable, we could add it later if needed.
versionOk := graph.IsOK(n.Version)
switch {
case n.NodeType == graph.NodeTypeService:
for _, destinationRule := range istioCfg.DestinationRules.Items {
if destinationRule.HasCircuitBreaker(namespace, n.Service, "") {
n.Metadata["hasCB"] = true
continue NODES
}
}
case !versionOk && (n.NodeType == graph.NodeTypeApp):
if destServices, ok := n.Metadata["destServices"]; ok {
for serviceName := range destServices.(map[string]bool) {
for _, destinationRule := range istioCfg.DestinationRules.Items {
if destinationRule.HasCircuitBreaker(namespace, serviceName, "") {
n.Metadata["hasCB"] = true
continue NODES
}
}
}
}
case versionOk:
if destServices, ok := n.Metadata["destServices"]; ok {
for serviceName := range destServices.(map[string]bool) {
for _, destinationRule := range istioCfg.DestinationRules.Items {
if destinationRule.HasCircuitBreaker(namespace, serviceName, n.Version) {
n.Metadata["hasCB"] = true
continue NODES
}
}
}
}
default:
continue
}
}
}
func applyVirtualServices(trafficMap graph.TrafficMap, namespace string, istioCfg models.IstioConfigList) {
NODES:
for _, n := range trafficMap {
if n.NodeType != graph.NodeTypeService {
continue
}
if n.Namespace != namespace {
continue
}
for _, virtualService := range istioCfg.VirtualServices.Items {
if virtualService.IsValidHost(namespace, n.Service) {
n.Metadata["hasVS"] = true
continue NODES
}
}
}
}
// addLabels is a chance to add any missing label info to nodes when the telemetry does not provide enough information.
// TODO: For efficiency we may want to consider pulling all namespace service definitions in one call (the call does not
// exist at this writing). As written we pull each service individually, which can be a fair number of round
// trips when services are injected (as they are by default). Note also that currently we do query for
// outsider service nodes. That may be a security problem if the outside namespace is inaccessible to the user. If
// that becomes an issue we can limit to accessible namespaces or only to the NamespaceInfo namespace.
func addLabels(trafficMap graph.TrafficMap, globalInfo *GlobalInfo) {
appLabelName := config.Get().IstioLabels.AppLabelName
for _, n := range trafficMap {
// make sure service nodes have the defined app label so it can be used for app grouping in the UI.
if n.NodeType == graph.NodeTypeService && n.App == "" {
service, err := globalInfo.Business.Svc.GetServiceDefinition(n.Namespace, n.Service)
if err != nil {
log.Debugf("Error fetching service definition, may not apply app label correctly for namespace=%s svc=%s: %s", n.Namespace, n.Service, err.Error())
if service == nil {
continue
}
}
if app, ok := service.Service.Labels[appLabelName]; ok {
n.App = app
}
}
}
}
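The badges set above ("hasCB", "hasVS") land in the node's untyped metadata map and are read back elsewhere with comma-ok assertions. A minimal sketch of that access pattern, with a stand-in node type (the key names follow the appender code, the helper names are invented for illustration):

```go
package main

import "fmt"

// node is a simplified stand-in for graph.Node; only the untyped metadata
// map matters for this sketch.
type node struct{ Metadata map[string]interface{} }

// hasBadge reads a boolean badge with the comma-ok pattern used throughout
// the appenders: a missing key is treated as false.
func hasBadge(n *node, key string) bool {
	v, ok := n.Metadata[key]
	return ok && v.(bool)
}

// rate reads a float64 rate, defaulting to zero when the key is absent.
func rate(n *node, key string) float64 {
	if v, ok := n.Metadata[key]; ok {
		return v.(float64)
	}
	return 0.0
}

func main() {
	n := &node{Metadata: map[string]interface{}{"hasCB": true, "httpIn": 0.25}}
	fmt.Println(hasBadge(n, "hasCB"), hasBadge(n, "hasVS"), rate(n, "httpIn"))
}
```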


@@ -0,0 +1,206 @@
package appender
import (
"fmt"
"math"
"time"
"github.com/prometheus/common/model"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus"
)
const (
DefaultQuantile = 0.95 // 95th percentile
ResponseTimeAppenderName = "responseTime"
)
// ResponseTimeAppender is responsible for adding responseTime information to the graph. ResponseTime
// is represented as a percentile value. The default is the 95th percentile, meaning that
// 95% of requests completed in no more than the reported number of milliseconds.
// Name: responseTime
type ResponseTimeAppender struct {
GraphType string
InjectServiceNodes bool
IncludeIstio bool
Namespaces map[string]graph.NamespaceInfo
Quantile float64
QueryTime int64 // unix time in seconds
}
// Name implements Appender
func (a ResponseTimeAppender) Name() string {
return ResponseTimeAppenderName
}
// AppendGraph implements Appender
func (a ResponseTimeAppender) AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
if len(trafficMap) == 0 {
return
}
if globalInfo.PromClient == nil {
var err error
globalInfo.PromClient, err = prometheus.NewClient()
graph.CheckError(err)
}
a.appendGraph(trafficMap, namespaceInfo.Namespace, globalInfo.PromClient)
}
func (a ResponseTimeAppender) appendGraph(trafficMap graph.TrafficMap, namespace string, client *prometheus.Client) {
quantile := a.Quantile
if a.Quantile <= 0.0 || a.Quantile >= 100.0 {
log.Warningf("Replacing invalid quantile [%.2f] with default [%.2f]", a.Quantile, DefaultQuantile)
quantile = DefaultQuantile
}
log.Debugf("Generating responseTime using quantile [%.2f]; namespace = %v", quantile, namespace)
duration := a.Namespaces[namespace].Duration
// create map to quickly look up responseTime
responseTimeMap := make(map[string]float64)
// query prometheus for the responseTime info in three queries:
// 1) query for responseTime originating from "unknown" (i.e. the internet)
groupBy := "le,source_workload_namespace,source_workload,source_app,source_version,destination_service_namespace,destination_service_name,destination_workload,destination_app,destination_version"
query := fmt.Sprintf(`histogram_quantile(%.2f, sum(rate(%s{reporter="destination",source_workload="unknown",destination_service_namespace="%v",response_code=~"%s"}[%vs])) by (%s))`,
quantile,
"istio_request_duration_seconds_bucket",
namespace,
"2[0-9]{2}|^0$", // must match success for all expected protocols
int(duration.Seconds()), // range duration for the query
groupBy)
unkVector := promQuery(query, time.Unix(a.QueryTime, 0), client.API(), a)
a.populateResponseTimeMap(responseTimeMap, &unkVector)
// 2) query for responseTime originating from a workload outside of the namespace. Exclude any "unknown" source telemetry (an unusual corner case)
query = fmt.Sprintf(`histogram_quantile(%.2f, sum(rate(%s{reporter="source",source_workload_namespace!="%v",source_workload!="unknown",destination_service_namespace="%v",response_code=~"%s"}[%vs])) by (%s))`,
quantile,
"istio_request_duration_seconds_bucket",
namespace,
namespace,
"2[0-9]{2}|^0$", // must match success for all expected protocols
int(duration.Seconds()), // range duration for the query
groupBy)
outVector := promQuery(query, time.Unix(a.QueryTime, 0), client.API(), a)
a.populateResponseTimeMap(responseTimeMap, &outVector)
// 3) query for responseTime originating from a workload inside of the namespace
query = fmt.Sprintf(`histogram_quantile(%.2f, sum(rate(%s{reporter="source",source_workload_namespace="%v",response_code=~"%s"}[%vs])) by (%s))`,
quantile,
"istio_request_duration_seconds_bucket",
namespace,
"2[0-9]{2}|^0$", // must match success for all expected protocols
int(duration.Seconds()), // range duration for the query
groupBy)
inVector := promQuery(query, time.Unix(a.QueryTime, 0), client.API(), a)
a.populateResponseTimeMap(responseTimeMap, &inVector)
// istio component telemetry is only reported destination-side, so we must perform additional queries
if a.IncludeIstio {
istioNamespace := config.Get().IstioNamespace
// 4) if the target namespace is istioNamespace re-query for traffic originating from outside (other than unknown, covered in query #1)
if namespace == istioNamespace {
query = fmt.Sprintf(`histogram_quantile(%.2f, sum(rate(%s{reporter="destination",source_workload!="unknown",source_workload_namespace!="%v",destination_service_namespace="%v",response_code=~"%s"}[%vs])) by (%s))`,
quantile,
"istio_request_duration_seconds_bucket",
namespace,
namespace,
"2[0-9]{2}|^0$", // must match success for all expected protocols
int(duration.Seconds()), // range duration for the query
groupBy)
// fetch the externally originating request traffic time-series
outIstioVector := promQuery(query, time.Unix(a.QueryTime, 0), client.API(), a)
a.populateResponseTimeMap(responseTimeMap, &outIstioVector)
}
// 5) supplemental query for traffic originating from a workload inside of the namespace with istioSystem destination
query = fmt.Sprintf(`histogram_quantile(%.2f, sum(rate(%s{reporter="destination",source_workload_namespace="%v",destination_service_namespace="%v",response_code=~"%s"}[%vs])) by (%s))`,
quantile,
"istio_request_duration_seconds_bucket",
namespace,
istioNamespace,
"2[0-9]{2}|^0$", // must match success for all expected protocols
int(duration.Seconds()), // range duration for the query
groupBy)
// fetch the internally originating request traffic time-series
inIstioVector := promQuery(query, time.Unix(a.QueryTime, 0), client.API(), a)
a.populateResponseTimeMap(responseTimeMap, &inIstioVector)
}
applyResponseTime(trafficMap, responseTimeMap)
}
func applyResponseTime(trafficMap graph.TrafficMap, responseTimeMap map[string]float64) {
for _, n := range trafficMap {
for _, e := range n.Edges {
key := fmt.Sprintf("%s %s", e.Source.ID, e.Dest.ID)
e.Metadata["responseTime"] = responseTimeMap[key]
}
}
}
func (a ResponseTimeAppender) populateResponseTimeMap(responseTimeMap map[string]float64, vector *model.Vector) {
for _, s := range *vector {
m := s.Metric
lSourceWlNs, sourceWlNsOk := m["source_workload_namespace"]
lSourceWl, sourceWlOk := m["source_workload"]
lSourceApp, sourceAppOk := m["source_app"]
lSourceVer, sourceVerOk := m["source_version"]
lDestSvcNs, destSvcNsOk := m["destination_service_namespace"]
lDestSvcName, destSvcNameOk := m["destination_service_name"]
lDestWl, destWlOk := m["destination_workload"]
lDestApp, destAppOk := m["destination_app"]
lDestVer, destVerOk := m["destination_version"]
if !sourceWlNsOk || !sourceWlOk || !sourceAppOk || !sourceVerOk || !destSvcNsOk || !destSvcNameOk || !destWlOk || !destAppOk || !destVerOk {
log.Warningf("Skipping %v, missing expected labels", m.String())
continue
}
sourceWlNs := string(lSourceWlNs)
sourceWl := string(lSourceWl)
sourceApp := string(lSourceApp)
sourceVer := string(lSourceVer)
destSvcNs := string(lDestSvcNs)
destSvcName := string(lDestSvcName)
destWl := string(lDestWl)
destApp := string(lDestApp)
destVer := string(lDestVer)
// to best preserve precision convert from secs to millis now, otherwise the
// thousandths place is dropped downstream.
val := float64(s.Value) * 1000.0
// It is possible to get a NaN if there is no traffic (or possibly other reasons). Just skip it
if math.IsNaN(val) {
continue
}
if a.InjectServiceNodes {
// don't inject a service node if the dest node is already a service node. Also, we can't inject if destSvcName is not set.
_, destNodeType := graph.Id(destSvcNs, destWl, destApp, destVer, destSvcName, a.GraphType)
if destSvcNameOk && destNodeType != graph.NodeTypeService {
// Do not set response time on the incoming edge, we can't validly aggregate response times of the outgoing edges (kiali-2297)
a.addResponseTime(responseTimeMap, val, destSvcNs, "", "", "", destSvcName, destSvcNs, destWl, destApp, destVer, destSvcName)
} else {
a.addResponseTime(responseTimeMap, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName)
}
} else {
a.addResponseTime(responseTimeMap, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName)
}
}
}
func (a ResponseTimeAppender) addResponseTime(responseTimeMap map[string]float64, val float64, sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, destSvcNs, destWl, destApp, destVer, destSvcName string) {
sourceId, _ := graph.Id(sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, a.GraphType)
destId, _ := graph.Id(destSvcNs, destWl, destApp, destVer, destSvcName, a.GraphType)
key := fmt.Sprintf("%s %s", sourceId, destId)
responseTimeMap[key] = val
}
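The queries above all share one PromQL shape: a `histogram_quantile` over a summed `rate` of the Istio duration histogram, grouped by source/destination labels. A runnable sketch of how query #1 (unknown-source traffic) is assembled; the groupBy list is abbreviated here and the inputs are illustrative:

```go
package main

import "fmt"

// buildUnknownSourceQuery assembles the same PromQL shape as the first
// response-time query above; the groupBy list is abbreviated for brevity.
func buildUnknownSourceQuery(quantile float64, namespace string, durationSecs int) string {
	groupBy := "le,source_workload,destination_workload" // abbreviated
	return fmt.Sprintf(
		`histogram_quantile(%.2f, sum(rate(%s{reporter="destination",source_workload="unknown",destination_service_namespace="%v",response_code=~"%s"}[%vs])) by (%s))`,
		quantile,
		"istio_request_duration_seconds_bucket",
		namespace,
		"2[0-9]{2}|^0$", // success codes
		durationSecs,
		groupBy)
}

func main() {
	fmt.Println(buildUnknownSourceQuery(0.95, "bookinfo", 600))
}
```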


@@ -0,0 +1,168 @@
package appender
import (
"fmt"
"time"
"github.com/prometheus/common/model"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus"
)
const (
SecurityPolicyAppenderName = "securityPolicy"
policyMTLS = "mutual_tls"
)
// SecurityPolicyAppender is responsible for adding securityPolicy information to the graph.
// The appender currently reports only mutual_tls security although is written in a generic way.
// Name: securityPolicy
type SecurityPolicyAppender struct {
GraphType string
IncludeIstio bool
InjectServiceNodes bool
Namespaces map[string]graph.NamespaceInfo
QueryTime int64 // unix time in seconds
}
type PolicyRates map[string]float64
// Name implements Appender
func (a SecurityPolicyAppender) Name() string {
return SecurityPolicyAppenderName
}
// AppendGraph implements Appender
func (a SecurityPolicyAppender) AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
if len(trafficMap) == 0 {
return
}
if globalInfo.PromClient == nil {
var err error
globalInfo.PromClient, err = prometheus.NewClient()
graph.CheckError(err)
}
a.appendGraph(trafficMap, namespaceInfo.Namespace, globalInfo.PromClient)
}
func (a SecurityPolicyAppender) appendGraph(trafficMap graph.TrafficMap, namespace string, client *prometheus.Client) {
log.Debugf("Resolving security policy for namespace = %v", namespace)
duration := a.Namespaces[namespace].Duration
// query prometheus for mutual_tls info in two queries (use dest telemetry because it reports the security policy):
// 1) query for requests originating from a workload outside the namespace
groupBy := "source_workload_namespace,source_workload,source_app,source_version,destination_service_namespace,destination_service_name,destination_workload,destination_app,destination_version,connection_security_policy"
query := fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload_namespace!="%v",destination_service_namespace="%v"}[%vs]) > 0) by (%s)`,
"istio_requests_total",
namespace,
namespace,
int(duration.Seconds()), // range duration for the query
groupBy)
outVector := promQuery(query, time.Unix(a.QueryTime, 0), client.API(), a)
// 2) query for requests originating from a workload inside of the namespace
istioCondition := ""
if !a.IncludeIstio {
istioCondition = fmt.Sprintf(`,destination_service_namespace!="%s"`, config.Get().IstioNamespace)
}
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload_namespace="%v"%s}[%vs]) > 0) by (%s)`,
"istio_requests_total",
namespace,
istioCondition,
int(duration.Seconds()), // range duration for the query
groupBy)
inVector := promQuery(query, time.Unix(a.QueryTime, 0), client.API(), a)
// create map to quickly look up securityPolicy
securityPolicyMap := make(map[string]PolicyRates)
a.populateSecurityPolicyMap(securityPolicyMap, &outVector)
a.populateSecurityPolicyMap(securityPolicyMap, &inVector)
applySecurityPolicy(trafficMap, securityPolicyMap)
}
func (a SecurityPolicyAppender) populateSecurityPolicyMap(securityPolicyMap map[string]PolicyRates, vector *model.Vector) {
for _, s := range *vector {
m := s.Metric
lSourceWlNs, sourceWlNsOk := m["source_workload_namespace"]
lSourceWl, sourceWlOk := m["source_workload"]
lSourceApp, sourceAppOk := m["source_app"]
lSourceVer, sourceVerOk := m["source_version"]
lDestSvcNs, destSvcNsOk := m["destination_service_namespace"]
lDestSvcName, destSvcNameOk := m["destination_service_name"]
lDestWl, destWlOk := m["destination_workload"]
lDestApp, destAppOk := m["destination_app"]
lDestVer, destVerOk := m["destination_version"]
lCsp, cspOk := m["connection_security_policy"]
if !sourceWlNsOk || !sourceWlOk || !sourceAppOk || !sourceVerOk || !destSvcNsOk || !destSvcNameOk || !destWlOk || !destAppOk || !destVerOk || !cspOk {
log.Warningf("Skipping %v, missing expected labels", m.String())
continue
}
sourceWlNs := string(lSourceWlNs)
sourceWl := string(lSourceWl)
sourceApp := string(lSourceApp)
sourceVer := string(lSourceVer)
destSvcNs := string(lDestSvcNs)
destSvcName := string(lDestSvcName)
destWl := string(lDestWl)
destApp := string(lDestApp)
destVer := string(lDestVer)
csp := string(lCsp)
val := float64(s.Value)
if a.InjectServiceNodes {
// don't inject a service node if the dest node is already a service node. Also, we can't inject if destSvcName is not set.
_, destNodeType := graph.Id(destSvcNs, destWl, destApp, destVer, destSvcName, a.GraphType)
if destSvcNameOk && destNodeType != graph.NodeTypeService {
a.addSecurityPolicy(securityPolicyMap, csp, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, "", "", "", destSvcName)
a.addSecurityPolicy(securityPolicyMap, csp, val, destSvcNs, "", "", "", destSvcName, destSvcNs, destWl, destApp, destVer, destSvcName)
} else {
a.addSecurityPolicy(securityPolicyMap, csp, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName)
}
} else {
a.addSecurityPolicy(securityPolicyMap, csp, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName)
}
}
}
func (a SecurityPolicyAppender) addSecurityPolicy(securityPolicyMap map[string]PolicyRates, csp string, val float64, sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, destSvcNs, destWl, destApp, destVer, destSvcName string) {
sourceId, _ := graph.Id(sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, a.GraphType)
destId, _ := graph.Id(destSvcNs, destWl, destApp, destVer, destSvcName, a.GraphType)
key := fmt.Sprintf("%s %s", sourceId, destId)
var policyRates PolicyRates
var ok bool
if policyRates, ok = securityPolicyMap[key]; !ok {
policyRates = make(PolicyRates)
securityPolicyMap[key] = policyRates
}
policyRates[csp] = val
}
func applySecurityPolicy(trafficMap graph.TrafficMap, securityPolicyMap map[string]PolicyRates) {
for _, s := range trafficMap {
for _, e := range s.Edges {
key := fmt.Sprintf("%s %s", e.Source.ID, e.Dest.ID)
if policyRates, ok := securityPolicyMap[key]; ok {
mtls := 0.0
other := 0.0
for policy, rate := range policyRates {
if policy == policyMTLS {
mtls = rate
} else {
other += rate
}
}
if percentMtls := mtls / (mtls + other) * 100; percentMtls > 0 {
e.Metadata["isMTLS"] = percentMtls
}
}
}
}
}
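The percentage math in `applySecurityPolicy` above can be isolated into a small runnable sketch: the `mutual_tls` rate over the total rate for an edge, as a percent. Unlike the original (where the map key only exists when some rate is positive), this sketch adds an explicit zero-total guard:

```go
package main

import "fmt"

// percentMTLS mirrors the computation in applySecurityPolicy: the mutual_tls
// rate as a percentage of all traffic on an edge. A zero-total guard is added
// here because this sketch can be called with an empty map.
func percentMTLS(policyRates map[string]float64) float64 {
	mtls, other := 0.0, 0.0
	for policy, rate := range policyRates {
		if policy == "mutual_tls" {
			mtls = rate
		} else {
			other += rate
		}
	}
	if mtls+other == 0 {
		return 0
	}
	return mtls / (mtls + other) * 100
}

func main() {
	fmt.Println(percentMTLS(map[string]float64{"mutual_tls": 3.0, "none": 1.0})) // prints 75
}
```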


@@ -0,0 +1,92 @@
package appender
import (
"time"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/log"
)
const ServiceEntryAppenderName = "serviceEntry"
// ServiceEntryAppender is responsible for identifying service nodes that are
// Istio Service Entries.
// Name: serviceEntry
type ServiceEntryAppender struct {
AccessibleNamespaces map[string]time.Time
}
// Name implements Appender
func (a ServiceEntryAppender) Name() string {
return ServiceEntryAppenderName
}
// AppendGraph implements Appender
func (a ServiceEntryAppender) AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
if len(trafficMap) == 0 {
return
}
var err error
if globalInfo.Business == nil {
globalInfo.Business, err = business.Get()
graph.CheckError(err)
}
a.applyServiceEntries(trafficMap, globalInfo, namespaceInfo)
}
func (a ServiceEntryAppender) applyServiceEntries(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
for _, n := range trafficMap {
// only a service node can be a service entry
if n.NodeType != graph.NodeTypeService {
continue
}
// only a terminal node can be a service entry (no outgoing edges because the service is performed outside the mesh)
if len(n.Edges) > 0 {
continue
}
// A service node with no outgoing edges may be an egress.
// If so flag it, don't discard it (kiali-1526, see also kiali-2014).
// The flag will be passed to the UI to inhibit links to non-existent detail pages.
if location, ok := a.getServiceEntry(n.Service, globalInfo); ok {
n.Metadata["isServiceEntry"] = location
}
}
}
// getServiceEntry queries the cluster API to resolve service entries
// across all accessible namespaces in the cluster. All ServiceEntries are needed because
// Istio does not distinguish where a ServiceEntry is created when routing traffic (i.e.
// a ServiceEntry can be in any namespace and it will still work).
func (a ServiceEntryAppender) getServiceEntry(service string, globalInfo *GlobalInfo) (string, bool) {
if globalInfo.ServiceEntries == nil {
globalInfo.ServiceEntries = make(map[string]string)
for ns := range a.AccessibleNamespaces {
istioCfg, err := globalInfo.Business.IstioConfig.GetIstioConfigList(business.IstioConfigCriteria{
IncludeServiceEntries: true,
Namespace: ns,
})
graph.CheckError(err)
for _, entry := range istioCfg.ServiceEntries {
if entry.Spec.Hosts != nil {
location := "MESH_EXTERNAL"
if entry.Spec.Location == "MESH_INTERNAL" {
location = "MESH_INTERNAL"
}
for _, host := range entry.Spec.Hosts.([]interface{}) {
globalInfo.ServiceEntries[host.(string)] = location
}
}
}
}
log.Tracef("Found [%v] service entries", len(globalInfo.ServiceEntries))
}
location, ok := globalInfo.ServiceEntries[service]
return location, ok
}
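Because the ServiceEntry spec is unmarshalled into untyped values, `Spec.Hosts` arrives as an `[]interface{}` that must be asserted element by element before filling the host-to-location cache. A minimal sketch of that flattening; unlike the code above (which asserts directly after the nil check), this version uses comma-ok assertions so it tolerates malformed input:

```go
package main

import "fmt"

// collectHosts flattens an untyped hosts list into the host→location cache,
// mirroring the loop in getServiceEntry. Non-list or non-string values are
// skipped rather than panicking.
func collectHosts(cache map[string]string, hosts interface{}, location string) {
	list, ok := hosts.([]interface{})
	if !ok {
		return
	}
	for _, h := range list {
		if s, ok := h.(string); ok {
			cache[s] = location
		}
	}
}

func main() {
	cache := map[string]string{}
	collectHosts(cache, []interface{}{"api.example.com", "db.example.com"}, "MESH_EXTERNAL")
	fmt.Println(cache["api.example.com"]) // prints MESH_EXTERNAL
}
```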


@@ -0,0 +1,89 @@
package appender
import (
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/graph"
)
const SidecarsCheckAppenderName = "sidecarsCheck"
// SidecarsCheckAppender flags nodes whose backing workloads are missing at least one Envoy sidecar. Note that
// a node with no backing workloads is not flagged.
// Name: sidecarsCheck
type SidecarsCheckAppender struct{}
// Name implements Appender
func (a SidecarsCheckAppender) Name() string {
return SidecarsCheckAppenderName
}
// AppendGraph implements Appender
func (a SidecarsCheckAppender) AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
if len(trafficMap) == 0 {
return
}
if globalInfo.Business == nil {
var err error
globalInfo.Business, err = business.Get()
graph.CheckError(err)
}
if namespaceInfo.WorkloadList == nil {
workloadList, err := globalInfo.Business.Workload.GetWorkloadList(namespaceInfo.Namespace)
graph.CheckError(err)
namespaceInfo.WorkloadList = &workloadList
}
a.applySidecarsChecks(trafficMap, namespaceInfo)
}
func (a *SidecarsCheckAppender) applySidecarsChecks(trafficMap graph.TrafficMap, namespaceInfo *NamespaceInfo) {
cfg := config.Get()
istioNamespace := cfg.IstioNamespace
workloadList := namespaceInfo.WorkloadList
for _, n := range trafficMap {
// Skip the check if this node is outside the requested namespace, we limit badging to the requested namespaces
if n.Namespace != namespaceInfo.Namespace {
continue
}
// We whitelist istio components because they may not report telemetry using injected sidecars.
if n.Namespace == istioNamespace {
continue
}
// dead nodes tell no tales (er, have no pods)
if isDead, ok := n.Metadata["isDead"]; ok && isDead.(bool) {
continue
}
// get the workloads for the node and check to see if they have sidecars. Note that
// if there are no workloads/pods we don't flag it as missing sidecars. No pods means
// no missing sidecars. (In most cases this means it was flagged as dead, and handled above)
hasIstioSidecar := true
switch n.NodeType {
case graph.NodeTypeWorkload:
if workload, found := getWorkload(n.Workload, workloadList); found {
hasIstioSidecar = workload.IstioSidecar
}
case graph.NodeTypeApp:
workloads := getAppWorkloads(n.App, n.Version, workloadList)
if len(workloads) > 0 {
for _, workload := range workloads {
if !workload.IstioSidecar {
hasIstioSidecar = false
break
}
}
}
default:
continue
}
if !hasIstioSidecar {
n.Metadata["hasMissingSC"] = true
}
}
}
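For app nodes the rule above is conjunctive: the node is flagged only when at least one backing workload is missing its sidecar, and an empty workload list is never flagged. A runnable sketch with a stand-in workload type:

```go
package main

import "fmt"

// workload stands in for models.WorkloadListItem; only the sidecar flag
// matters for this sketch.
type workload struct{ IstioSidecar bool }

// appNodeHasSidecars returns false only when at least one backing workload is
// missing its sidecar; an empty list is not flagged (mirroring the code above,
// where no pods means no missing sidecars).
func appNodeHasSidecars(workloads []workload) bool {
	for _, w := range workloads {
		if !w.IstioSidecar {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(appNodeHasSidecars([]workload{{true}, {false}})) // prints false
	fmt.Println(appNodeHasSidecars(nil))                         // prints true: empty list is not flagged
}
```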


@@ -0,0 +1,116 @@
package appender
import (
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
)
const UnusedNodeAppenderName = "unusedNode"
// UnusedNodeAppender looks for services that have never seen request traffic. It adds nodes to represent the
// unused definitions. The added node types depend on the graph type and/or labeling on the definition.
// Name: unusedNode
type UnusedNodeAppender struct {
GraphType string // This appender does not operate on service graphs because it adds workload nodes.
IsNodeGraph bool // This appender does not operate on node detail graphs because we want to focus on the specific node.
}
// Name implements Appender
func (a UnusedNodeAppender) Name() string {
return UnusedNodeAppenderName
}
// AppendGraph implements Appender
func (a UnusedNodeAppender) AppendGraph(trafficMap graph.TrafficMap, globalInfo *GlobalInfo, namespaceInfo *NamespaceInfo) {
if graph.GraphTypeService == a.GraphType || a.IsNodeGraph {
return
}
if globalInfo.Business == nil {
var err error
globalInfo.Business, err = business.Get()
graph.CheckError(err)
}
if namespaceInfo.WorkloadList == nil {
workloadList, err := globalInfo.Business.Workload.GetWorkloadList(namespaceInfo.Namespace)
graph.CheckError(err)
namespaceInfo.WorkloadList = &workloadList
}
a.addUnusedNodes(trafficMap, namespaceInfo.Namespace, namespaceInfo.WorkloadList.Workloads)
}
func (a UnusedNodeAppender) addUnusedNodes(trafficMap graph.TrafficMap, namespace string, workloads []models.WorkloadListItem) {
unusedTrafficMap := a.buildUnusedTrafficMap(trafficMap, namespace, workloads)
// If trafficMap is empty just populate it with the unused nodes and return
if len(trafficMap) == 0 {
for k, v := range unusedTrafficMap {
trafficMap[k] = v
}
return
}
// Integrate the unused nodes into the existing traffic map
for _, v := range unusedTrafficMap {
addUnusedNodeToTrafficMap(trafficMap, v)
}
}
func (a UnusedNodeAppender) buildUnusedTrafficMap(trafficMap graph.TrafficMap, namespace string, workloads []models.WorkloadListItem) graph.TrafficMap {
unusedTrafficMap := graph.NewTrafficMap()
cfg := config.Get()
appLabel := cfg.IstioLabels.AppLabelName
versionLabel := cfg.IstioLabels.VersionLabelName
for _, w := range workloads {
labels := w.Labels
app := graph.Unknown
version := graph.Unknown
if v, ok := labels[appLabel]; ok {
app = v
}
if v, ok := labels[versionLabel]; ok {
version = v
}
id, nodeType := graph.Id(namespace, w.Name, app, version, "", a.GraphType)
if _, found := trafficMap[id]; !found {
if _, found = unusedTrafficMap[id]; !found {
log.Debugf("Adding unused node for workload [%s] with labels [%v]", w.Name, labels)
node := graph.NewNodeExplicit(id, namespace, w.Name, app, version, "", nodeType, a.GraphType)
// note: we don't know what the protocol really should be, http is most common, it's a dead edge anyway
node.Metadata = map[string]interface{}{"httpIn": 0.0, "httpOut": 0.0, "isUnused": true}
unusedTrafficMap[id] = &node
}
}
}
return unusedTrafficMap
}
func addUnusedNodeToTrafficMap(trafficMap graph.TrafficMap, unusedNode *graph.Node) {
// add unused node to traffic map
trafficMap[unusedNode.ID] = unusedNode
// Add a "sibling" edge to any node with an edge to the same app
for _, n := range trafficMap {
findAndAddSibling(n, unusedNode)
}
}
func findAndAddSibling(parent, unusedNode *graph.Node) {
if unusedNode.App == graph.Unknown {
return
}
found := false
for _, edge := range parent.Edges {
if found = edge.Dest.App == unusedNode.App; found {
break
}
}
if found {
parent.AddEdge(unusedNode)
}
}
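The sibling test in `findAndAddSibling` above reduces to: an unused node attaches under a parent only when the parent already has an edge to the same app, and never when the app is unknown. A minimal sketch where an edge is reduced to its destination node:

```go
package main

import "fmt"

// node is a simplified stand-in for graph.Node: an edge is reduced to its
// destination node. "unknown" stands in for graph.Unknown.
type node struct {
	App   string
	Edges []*node
}

// isSibling mirrors the test in findAndAddSibling: true only when the parent
// already has an edge to a node with the same (known) app.
func isSibling(parent, unused *node) bool {
	if unused.App == "unknown" {
		return false
	}
	for _, dest := range parent.Edges {
		if dest.App == unused.App {
			return true
		}
	}
	return false
}

func main() {
	reviewsV1 := &node{App: "reviews"}
	parent := &node{App: "productpage", Edges: []*node{reviewsV1}}
	unusedReviewsV2 := &node{App: "reviews"}
	fmt.Println(isSibling(parent, unusedReviewsV2)) // prints true
}
```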

38
vendor/github.com/kiali/kiali/graph/appender/util.go generated vendored Normal file

@@ -0,0 +1,38 @@
package appender
import (
"context"
"fmt"
"time"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus/internalmetrics"
"github.com/prometheus/client_golang/api/prometheus/v1"
"github.com/prometheus/common/model"
)
// package-private util functions (used by multiple files)
func promQuery(query string, queryTime time.Time, api v1.API, a Appender) model.Vector {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// wrap with a round() to be in line with metrics api
query = fmt.Sprintf("round(%s,0.001)", query)
log.Debugf("Appender query:\n%s&time=%v (now=%v, %v)\n", query, queryTime.Format(graph.TF), time.Now().Format(graph.TF), queryTime.Unix())
promtimer := internalmetrics.GetPrometheusProcessingTimePrometheusTimer("Graph-Appender-" + a.Name())
value, err := api.Query(ctx, query, queryTime)
graph.CheckError(err)
promtimer.ObserveDuration() // notice we only collect metrics for successful prom queries
switch t := value.Type(); t {
case model.ValVector: // Instant Vector
return value.(model.Vector)
default:
graph.Error(fmt.Sprintf("No handling for type %v!\n", t))
}
return nil
}


@@ -0,0 +1,417 @@
// Package cytoscape provides conversion from our graph to the CytoscapeJS
// configuration JSON model.
//
// The following links are useful for understanding CytoscapeJS and its configuration:
//
// Main page: http://js.cytoscape.org/
// JSON config: http://js.cytoscape.org/#notation/elements-json
// Demos: http://js.cytoscape.org/#demos
//
// Algorithm: Process the graph structure adding nodes and edges, decorating each
// with information provided. An optional second pass generates compound
// nodes for version grouping.
//
package cytoscape
import (
"crypto/md5"
"fmt"
"sort"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/graph/options"
)
type ProtocolTraffic struct {
Protocol string `json:"protocol"` // protocol
Rates map[string]string `json:"rates"` // map[rate]value
}
type NodeData struct {
// Cytoscape Fields
Id string `json:"id"` // unique internal node ID (n0, n1...)
Parent string `json:"parent,omitempty"` // Compound Node parent ID
// App Fields (not required by Cytoscape)
NodeType string `json:"nodeType"`
Namespace string `json:"namespace"`
Workload string `json:"workload,omitempty"`
App string `json:"app,omitempty"`
Version string `json:"version,omitempty"`
Service string `json:"service,omitempty"` // requested service for NodeTypeService
DestServices map[string]bool `json:"destServices,omitempty"` // requested services for [dest] node
Traffic []ProtocolTraffic `json:"traffic,omitempty"` // traffic rates for all detected protocols
HasCB bool `json:"hasCB,omitempty"` // true (has circuit breaker) | false
HasMissingSC bool `json:"hasMissingSC,omitempty"` // true (has missing sidecar) | false
HasVS bool `json:"hasVS,omitempty"` // true (has route rule) | false
IsDead bool `json:"isDead,omitempty"` // true (has no pods) | false
IsGroup string `json:"isGroup,omitempty"` // set to the grouping type, current values: [ 'app', 'version' ]
IsInaccessible bool `json:"isInaccessible,omitempty"` // true if the node exists in an inaccessible namespace
IsMisconfigured string `json:"isMisconfigured,omitempty"` // set to misconfiguration list, current values: [ 'labels' ]
IsOutside bool `json:"isOutside,omitempty"` // true | false
IsRoot bool `json:"isRoot,omitempty"` // true | false
IsServiceEntry string `json:"isServiceEntry,omitempty"` // set to the location, current values: [ 'MESH_EXTERNAL', 'MESH_INTERNAL' ]
IsUnused bool `json:"isUnused,omitempty"` // true | false
}
type EdgeData struct {
// Cytoscape Fields
Id string `json:"id"` // unique internal edge ID (e0, e1...)
Source string `json:"source"` // parent node ID
Target string `json:"target"` // child node ID
// App Fields (not required by Cytoscape)
Traffic ProtocolTraffic `json:"traffic,omitempty"` // traffic rates for the edge protocol
ResponseTime string `json:"responseTime,omitempty"` // in millis
IsMTLS string `json:"isMTLS,omitempty"` // set to the percentage of traffic using a mutual TLS connection
IsUnused bool `json:"isUnused,omitempty"` // true | false
}
type NodeWrapper struct {
Data *NodeData `json:"data"`
}
type EdgeWrapper struct {
Data *EdgeData `json:"data"`
}
type Elements struct {
Nodes []*NodeWrapper `json:"nodes"`
Edges []*EdgeWrapper `json:"edges"`
}
type Config struct {
Timestamp int64 `json:"timestamp"`
Duration int64 `json:"duration"`
GraphType string `json:"graphType"`
Elements Elements `json:"elements"`
}
func nodeHash(id string) string {
return fmt.Sprintf("%x", md5.Sum([]byte(id)))
}
func edgeHash(from, to, protocol string) string {
return fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprintf("%s.%s.%s", from, to, protocol))))
}
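The two helpers above derive stable Cytoscape element IDs by hashing identity strings; note that the edge hash folds in the protocol, so parallel edges between the same pair of nodes stay distinct. A minimal standalone sketch of the same MD5 scheme (the input IDs here are hypothetical):

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// nodeHash mirrors the helper above: a hex MD5 digest of the node's ID string.
func nodeHash(id string) string {
	return fmt.Sprintf("%x", md5.Sum([]byte(id)))
}

// edgeHash combines source, target, and protocol so that edges carrying
// different protocols between the same nodes receive different IDs.
func edgeHash(from, to, protocol string) string {
	return fmt.Sprintf("%x", md5.Sum([]byte(fmt.Sprintf("%s.%s.%s", from, to, protocol))))
}

func main() {
	n := nodeHash("wl_bookinfo_productpage-v1") // hypothetical node ID
	e := edgeHash(n, nodeHash("svc_bookinfo_reviews"), "http")
	fmt.Println(len(n), len(e)) // both are 32 hex characters (16-byte MD5)
}
```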
func NewConfig(trafficMap graph.TrafficMap, o options.VendorOptions) (result Config) {
nodes := []*NodeWrapper{}
edges := []*EdgeWrapper{}
buildConfig(trafficMap, &nodes, &edges, o)
// Add compound nodes as needed
switch o.GroupBy {
case options.GroupByApp:
if o.GraphType != graph.GraphTypeService {
groupByApp(&nodes)
}
case options.GroupByVersion:
if o.GraphType == graph.GraphTypeVersionedApp {
groupByVersion(&nodes)
}
default:
// no grouping
}
// sort nodes and edges for better json presentation (and predictable testing)
// kiali-1258 compound/isGroup/parent nodes must come before the child references
sort.Slice(nodes, func(i, j int) bool {
switch {
case nodes[i].Data.Namespace != nodes[j].Data.Namespace:
return nodes[i].Data.Namespace < nodes[j].Data.Namespace
case nodes[i].Data.IsGroup != nodes[j].Data.IsGroup:
return nodes[i].Data.IsGroup > nodes[j].Data.IsGroup
case nodes[i].Data.App != nodes[j].Data.App:
return nodes[i].Data.App < nodes[j].Data.App
case nodes[i].Data.Version != nodes[j].Data.Version:
return nodes[i].Data.Version < nodes[j].Data.Version
case nodes[i].Data.Service != nodes[j].Data.Service:
return nodes[i].Data.Service < nodes[j].Data.Service
default:
return nodes[i].Data.Workload < nodes[j].Data.Workload
}
})
sort.Slice(edges, func(i, j int) bool {
switch {
case edges[i].Data.Source < edges[j].Data.Source:
return true
case edges[i].Data.Source > edges[j].Data.Source:
return false
default:
return edges[i].Data.Target < edges[j].Data.Target
}
})
elements := Elements{nodes, edges}
result = Config{
Duration: int64(o.Duration.Seconds()),
Timestamp: o.QueryTime,
GraphType: o.GraphType,
Elements: elements,
}
return result
}
func buildConfig(trafficMap graph.TrafficMap, nodes *[]*NodeWrapper, edges *[]*EdgeWrapper, o options.VendorOptions) {
for id, n := range trafficMap {
nodeId := nodeHash(id)
nd := &NodeData{
Id: nodeId,
NodeType: n.NodeType,
Namespace: n.Namespace,
Workload: n.Workload,
App: n.App,
Version: n.Version,
Service: n.Service,
}
addNodeTelemetry(n, nd)
// node may have a deployment but no pods running
if val, ok := n.Metadata["isDead"]; ok {
nd.IsDead = val.(bool)
}
// node may be a root
if val, ok := n.Metadata["isRoot"]; ok {
nd.IsRoot = val.(bool)
}
// node may be unused
if val, ok := n.Metadata["isUnused"]; ok {
nd.IsUnused = val.(bool)
}
// node is not accessible to the current user
if val, ok := n.Metadata["isInaccessible"]; ok {
nd.IsInaccessible = val.(bool)
}
// node may have a circuit breaker
if val, ok := n.Metadata["hasCB"]; ok {
nd.HasCB = val.(bool)
}
// node may have a virtual service
if val, ok := n.Metadata["hasVS"]; ok {
nd.HasVS = val.(bool)
}
// set sidecars checks, if available
if val, ok := n.Metadata["hasMissingSC"]; ok {
nd.HasMissingSC = val.(bool)
}
// check if node is misconfigured
if val, ok := n.Metadata["isMisconfigured"]; ok {
nd.IsMisconfigured = val.(string)
}
// check if node is on another namespace
if val, ok := n.Metadata["isOutside"]; ok {
nd.IsOutside = val.(bool)
}
// node may have destination service info
if val, ok := n.Metadata["destServices"]; ok {
nd.DestServices = val.(map[string]bool)
}
// node may be a service entry
if val, ok := n.Metadata["isServiceEntry"]; ok {
nd.IsServiceEntry = val.(string)
}
nw := NodeWrapper{
Data: nd,
}
*nodes = append(*nodes, &nw)
for _, e := range n.Edges {
sourceIdHash := nodeHash(n.ID)
destIdHash := nodeHash(e.Dest.ID)
protocol := ""
if e.Metadata["protocol"] != nil {
protocol = e.Metadata["protocol"].(string)
}
edgeId := edgeHash(sourceIdHash, destIdHash, protocol)
ed := EdgeData{
Id: edgeId,
Source: sourceIdHash,
Target: destIdHash,
}
addEdgeTelemetry(e, &ed)
ew := EdgeWrapper{
Data: &ed,
}
*edges = append(*edges, &ew)
}
}
}
func addNodeTelemetry(n *graph.Node, nd *NodeData) {
nd.Traffic = []ProtocolTraffic{}
for _, p := range graph.Protocols {
protocolTraffic := ProtocolTraffic{Protocol: p.Name}
for _, r := range p.NodeRates {
if rateVal := getRate(n.Metadata, r.Name); rateVal > 0.0 {
if protocolTraffic.Rates == nil {
protocolTraffic.Rates = make(map[string]string)
}
protocolTraffic.Rates[r.Name] = fmt.Sprintf("%.*f", r.Precision, rateVal)
}
}
if protocolTraffic.Rates != nil {
nd.Traffic = append(nd.Traffic, protocolTraffic)
}
}
}
func addEdgeTelemetry(e *graph.Edge, ed *EdgeData) {
if val, ok := e.Metadata["isMTLS"]; ok {
ed.IsMTLS = fmt.Sprintf("%.0f", val.(float64))
}
if val, ok := e.Metadata["responseTime"]; ok {
responseTime := val.(float64)
ed.ResponseTime = fmt.Sprintf("%.0f", responseTime)
}
if val, ok := e.Source.Metadata["isUnused"]; ok {
ed.IsUnused = val.(bool)
}
// an edge represents traffic for at most one protocol
ed.Traffic = ProtocolTraffic{}
for _, p := range graph.Protocols {
protocolTraffic := ProtocolTraffic{Protocol: p.Name}
total := 0.0
err := 0.0
var percentErr, percentReq graph.Rate
for _, r := range p.EdgeRates {
rateVal := getRate(e.Metadata, r.Name)
switch {
case r.IsTotal:
// there is one field holding the total traffic
total = rateVal
case r.IsErr:
// error rates can be reported for several error status codes, so sum up all
// of the error traffic to be used in the percentErr calculation below.
err += rateVal
case r.IsPercentErr:
// hold onto the percentErr field so we know how to report it below
percentErr = r
case r.IsPercentReq:
// hold onto the percentReq field so we know how to report it below
percentReq = r
}
if rateVal := getRate(e.Metadata, r.Name); rateVal > 0.0 {
if protocolTraffic.Rates == nil {
protocolTraffic.Rates = make(map[string]string)
}
protocolTraffic.Rates[r.Name] = fmt.Sprintf("%.*f", r.Precision, rateVal)
}
}
if protocolTraffic.Rates != nil {
if total > 0 {
if percentErr.Name != "" {
rateVal := err / total * 100
if rateVal > 0.0 {
protocolTraffic.Rates[percentErr.Name] = fmt.Sprintf("%.*f", percentErr.Precision, rateVal)
}
}
if percentReq.Name != "" {
rateVal := 0.0
for _, r := range p.NodeRates {
if !r.IsOut {
continue
}
rateVal = total / getRate(e.Source.Metadata, r.Name) * 100.0
break
}
if rateVal > 0.0 {
protocolTraffic.Rates[percentReq.Name] = fmt.Sprintf("%.*f", percentReq.Precision, rateVal)
}
}
}
ed.Traffic = protocolTraffic
break
}
}
}
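The percentage rates above are derived rather than queried: percentErr divides the accumulated error traffic by the edge total, and percentReq divides the edge total by the source node's outbound rate. A small sketch with hypothetical rates:

```go
package main

import "fmt"

// percent computes part as a percentage of whole, as in the percentErr and
// percentReq calculations above.
func percent(part, whole float64) float64 {
	return part / whole * 100
}

func main() {
	// Hypothetical: an edge carrying 8 req/s with 0.4 req/s of error responses,
	// originating from a source node emitting 20 req/s overall.
	fmt.Printf("%.1f%% errors, %.1f%% of source traffic\n",
		percent(0.4, 8), percent(8, 20))
}
```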
func getRate(md map[string]interface{}, k string) float64 {
if rate, ok := md[k]; ok {
return rate.(float64)
}
return 0.0
}
// groupByVersion adds compound nodes to group multiple versions of the same app
func groupByVersion(nodes *[]*NodeWrapper) {
appBox := make(map[string][]*NodeData)
for _, nw := range *nodes {
if nw.Data.NodeType == graph.NodeTypeApp {
k := fmt.Sprintf("box_%s_%s", nw.Data.Namespace, nw.Data.App)
appBox[k] = append(appBox[k], nw.Data)
}
}
generateGroupCompoundNodes(appBox, nodes, options.GroupByVersion)
}
// groupByApp adds compound nodes to group all nodes for the same app
func groupByApp(nodes *[]*NodeWrapper) {
appBox := make(map[string][]*NodeData)
for _, nw := range *nodes {
if nw.Data.App != "unknown" && nw.Data.App != "" {
k := fmt.Sprintf("box_%s_%s", nw.Data.Namespace, nw.Data.App)
appBox[k] = append(appBox[k], nw.Data)
}
}
generateGroupCompoundNodes(appBox, nodes, options.GroupByApp)
}
func generateGroupCompoundNodes(appBox map[string][]*NodeData, nodes *[]*NodeWrapper, groupBy string) {
for k, members := range appBox {
if len(members) > 1 {
// create the compound (parent) node for the member nodes
nodeId := nodeHash(k)
nd := NodeData{
Id: nodeId,
NodeType: graph.NodeTypeApp,
Namespace: members[0].Namespace,
App: members[0].App,
Version: "",
IsGroup: groupBy,
}
nw := NodeWrapper{
Data: &nd,
}
// assign each member node to the compound parent
nd.HasMissingSC = false // TODO: this is probably unnecessarily noisy
nd.IsInaccessible = false
nd.IsOutside = false
for _, n := range members {
n.Parent = nodeId
// copy some member attributes to the compound node (aka app box)
nd.HasMissingSC = nd.HasMissingSC || n.HasMissingSC
nd.IsInaccessible = nd.IsInaccessible || n.IsInaccessible
nd.IsOutside = nd.IsOutside || n.IsOutside
}
// add the compound node to the list of nodes
*nodes = append(*nodes, &nw)
}
}
}

vendor/github.com/kiali/kiali/graph/graph.go generated vendored Normal file

@@ -0,0 +1,179 @@
// Package graph provides support for the graph handlers, such as supported path
// variables and query params, as well as types for graph processing.
package graph
import (
"fmt"
"time"
)
const (
GraphTypeApp string = "app"
GraphTypeService string = "service" // Treated as graphType Workload, with service injection, and then condensed
GraphTypeVersionedApp string = "versionedApp"
GraphTypeWorkload string = "workload"
NodeTypeApp string = "app"
NodeTypeService string = "service"
NodeTypeUnknown string = "unknown" // The special "unknown" traffic gen node
NodeTypeWorkload string = "workload"
TF string = "2006-01-02 15:04:05" // TF is the TimeFormat for timestamps
Unknown string = "unknown" // Istio unknown label value
)
type Node struct {
ID string // unique identifier for the node
NodeType string // Node type
Namespace string // Namespace
Workload string // Workload (deployment) name
App string // Workload app label value
Version string // Workload version label value
Service string // Service name
Edges []*Edge // child nodes
Metadata map[string]interface{} // app-specific data
}
type Edge struct {
Source *Node
Dest *Node
Metadata map[string]interface{} // app-specific data
}
type NamespaceInfo struct {
Name string
Duration time.Duration
}
// TrafficMap is a map of app Nodes, each optionally holding Edge data. Metadata
// is a general purpose map for holding any desired node or edge information.
// Each app node should have a unique namespace+workload. Note that it is feasible
// but likely unusual to have two nodes with the same name+version in the same
// namespace.
type TrafficMap map[string]*Node
func NewNode(namespace, workload, app, version, service, graphType string) Node {
id, nodeType := Id(namespace, workload, app, version, service, graphType)
return NewNodeExplicit(id, namespace, workload, app, version, service, nodeType, graphType)
}
func NewNodeExplicit(id, namespace, workload, app, version, service, nodeType, graphType string) Node {
// trim unnecessary fields
switch nodeType {
case NodeTypeWorkload:
// maintain the app+version labeling if it is set, it can be useful
// for identifying destination rules, providing links, and grouping
if app == Unknown {
app = ""
}
if version == Unknown {
version = ""
}
service = ""
case NodeTypeApp:
// note: we keep workload for a versioned app node because app+version labeling
// should be backed by a single workload and it can be useful to use the workload
// name as opposed to the label values.
if graphType != GraphTypeVersionedApp {
workload = ""
version = ""
}
service = ""
case NodeTypeService:
app = ""
workload = ""
version = ""
}
return Node{
ID: id,
NodeType: nodeType,
Namespace: namespace,
Workload: workload,
App: app,
Version: version,
Service: service,
Edges: []*Edge{},
Metadata: make(map[string]interface{}),
}
}
func (s *Node) AddEdge(dest *Node) *Edge {
e := NewEdge(s, dest)
s.Edges = append(s.Edges, &e)
return &e
}
func NewEdge(source, dest *Node) Edge {
return Edge{
Source: source,
Dest: dest,
Metadata: make(map[string]interface{}),
}
}
func NewTrafficMap() TrafficMap {
return make(map[string]*Node)
}
func Id(namespace, workload, app, version, service, graphType string) (id, nodeType string) {
// first, check for the special-case "unknown" source node
if Unknown == namespace && Unknown == workload && Unknown == app && "" == service {
return "unknown_source", NodeTypeUnknown
}
// It is possible that a request is made for an unknown destination. For example, an Ingress
// request to an unknown path. In this case the namespace may or may not be unknown.
// Every other field is unknown. Allow one unknown service per namespace to help reflect these
// bad destinations in the graph; it may help diagnose a problem.
if Unknown == workload && Unknown == app && Unknown == service {
return fmt.Sprintf("svc_%s_unknown", namespace), NodeTypeService
}
workloadOk := IsOK(workload)
appOk := IsOK(app)
serviceOk := IsOK(service)
if !workloadOk && !appOk && !serviceOk {
panic(fmt.Sprintf("Failed ID gen: namespace=[%s] workload=[%s] app=[%s] version=[%s] service=[%s] graphType=[%s]", namespace, workload, app, version, service, graphType))
}
// handle workload graph nodes (service graphs are initially processed as workload graphs)
if graphType == GraphTypeWorkload || graphType == GraphTypeService {
// workload graph nodes are type workload or service
if !workloadOk && !serviceOk {
panic(fmt.Sprintf("Failed ID gen: namespace=[%s] workload=[%s] app=[%s] version=[%s] service=[%s] graphType=[%s]", namespace, workload, app, version, service, graphType))
}
if !workloadOk {
return fmt.Sprintf("svc_%v_%v", namespace, service), NodeTypeService
}
return fmt.Sprintf("wl_%v_%v", namespace, workload), NodeTypeWorkload
}
// handle app and versionedApp graphs
versionOk := IsOK(version)
if appOk {
// For a versionedApp graph use workload as the Id, if available. It allows us some protection
// against labeling anti-patterns. It won't be there in a few cases like:
// - root node of a node graph
// - app box node
// Otherwise use what we have and alter node type as necessary
// For a [versionless] App graph use the app label to aggregate versions/workloads into one node
if graphType == GraphTypeVersionedApp {
if workloadOk {
return fmt.Sprintf("vapp_%v_%v", namespace, workload), NodeTypeApp
}
if versionOk {
return fmt.Sprintf("vapp_%v_%v_%v", namespace, app, version), NodeTypeApp
}
}
return fmt.Sprintf("app_%v_%v", namespace, app), NodeTypeApp
}
// fall back to workload if applicable
if workloadOk {
return fmt.Sprintf("wl_%v_%v", namespace, workload), NodeTypeWorkload
}
// fall back to service as a last resort in the app graph
return fmt.Sprintf("svc_%v_%v", namespace, service), NodeTypeService
}

vendor/github.com/kiali/kiali/graph/options/options.go generated vendored Normal file

@@ -0,0 +1,355 @@
// Package options holds the option settings for a single graph generation.
package options
import (
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/gorilla/mux"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/graph/appender"
)
const (
GroupByApp string = "app"
GroupByNone string = "none"
GroupByVersion string = "version"
NamespaceIstio string = "istio-system"
VendorCytoscape string = "cytoscape"
defaultDuration string = "10m"
defaultGraphType string = graph.GraphTypeWorkload
defaultGroupBy string = GroupByNone
defaultIncludeIstio bool = false
defaultInjectServiceNodes bool = false
defaultVendor string = VendorCytoscape
)
const (
graphKindNamespace string = "namespace"
graphKindNode string = "node"
)
// NodeOptions are those that apply only to node-detail graphs
type NodeOptions struct {
App string
Namespace string
Service string
Version string
Workload string
}
// VendorOptions are those that are supplied to the vendor-specific generators.
type VendorOptions struct {
Duration time.Duration
GraphType string
GroupBy string
QueryTime int64 // unix time in seconds
}
// Options are all supported graph generation options.
type Options struct {
AccessibleNamespaces map[string]time.Time
Appenders []appender.Appender
IncludeIstio bool // include istio-system services. Ignored for istio-system ns. Default false.
InjectServiceNodes bool // inject destination service nodes between source and destination nodes.
Namespaces map[string]graph.NamespaceInfo
Vendor string
NodeOptions
VendorOptions
}
func NewOptions(r *http.Request) Options {
// path variables (0 or more will be set)
vars := mux.Vars(r)
app := vars["app"]
namespace := vars["namespace"]
service := vars["service"]
version := vars["version"]
workload := vars["workload"]
// query params
params := r.URL.Query()
var duration time.Duration
var includeIstio bool
var injectServiceNodes bool
var queryTime int64
durationString := params.Get("duration")
graphType := params.Get("graphType")
groupBy := params.Get("groupBy")
includeIstioString := params.Get("includeIstio")
injectServiceNodesString := params.Get("injectServiceNodes")
namespaces := params.Get("namespaces") // csl of namespaces
queryTimeString := params.Get("queryTime")
vendor := params.Get("vendor")
if durationString == "" {
duration, _ = time.ParseDuration(defaultDuration)
} else {
var durationErr error
duration, durationErr = time.ParseDuration(durationString)
if durationErr != nil {
graph.BadRequest(fmt.Sprintf("Invalid duration [%s]", durationString))
}
}
if graphType == "" {
graphType = defaultGraphType
} else if graphType != graph.GraphTypeApp && graphType != graph.GraphTypeService && graphType != graph.GraphTypeVersionedApp && graphType != graph.GraphTypeWorkload {
graph.BadRequest(fmt.Sprintf("Invalid graphType [%s]", graphType))
}
// app node graphs require an app graph type
if app != "" && graphType != graph.GraphTypeApp && graphType != graph.GraphTypeVersionedApp {
graph.BadRequest(fmt.Sprintf("Invalid graphType [%s]. This node detail graph supports only graphType app or versionedApp.", graphType))
}
if groupBy == "" {
groupBy = defaultGroupBy
} else if groupBy != GroupByApp && groupBy != GroupByNone && groupBy != GroupByVersion {
graph.BadRequest(fmt.Sprintf("Invalid groupBy [%s]", groupBy))
}
if includeIstioString == "" {
includeIstio = defaultIncludeIstio
} else {
var includeIstioErr error
includeIstio, includeIstioErr = strconv.ParseBool(includeIstioString)
if includeIstioErr != nil {
graph.BadRequest(fmt.Sprintf("Invalid includeIstio [%s]", includeIstioString))
}
}
if injectServiceNodesString == "" {
injectServiceNodes = defaultInjectServiceNodes
} else {
var injectServiceNodesErr error
injectServiceNodes, injectServiceNodesErr = strconv.ParseBool(injectServiceNodesString)
if injectServiceNodesErr != nil {
graph.BadRequest(fmt.Sprintf("Invalid injectServiceNodes [%s]", injectServiceNodesString))
}
}
if queryTimeString == "" {
queryTime = time.Now().Unix()
} else {
var queryTimeErr error
queryTime, queryTimeErr = strconv.ParseInt(queryTimeString, 10, 64)
if queryTimeErr != nil {
graph.BadRequest(fmt.Sprintf("Invalid queryTime [%s]", queryTimeString))
}
}
if vendor == "" {
vendor = defaultVendor
} else if vendor != VendorCytoscape {
graph.BadRequest(fmt.Sprintf("Invalid vendor [%s]", vendor))
}
// Process namespaces options:
namespaceMap := make(map[string]graph.NamespaceInfo)
accessibleNamespaces := getAccessibleNamespaces()
// If path variable is set then it is the only relevant namespace (it's a node graph)
// Else if namespaces query param is set it specifies the relevant namespaces
// Else error, at least one namespace is required.
if namespace != "" {
namespaces = namespace
}
if namespaces == "" {
graph.BadRequest("At least one namespace must be specified via the namespaces query parameter.")
}
for _, namespaceToken := range strings.Split(namespaces, ",") {
namespaceToken = strings.TrimSpace(namespaceToken)
if creationTime, found := accessibleNamespaces[namespaceToken]; found {
namespaceMap[namespaceToken] = graph.NamespaceInfo{
Name: namespaceToken,
Duration: resolveNamespaceDuration(creationTime, duration, queryTime),
}
} else {
graph.Forbidden(fmt.Sprintf("Requested namespace [%s] is not accessible.", namespaceToken))
}
}
// Service graphs require service injection
if graphType == graph.GraphTypeService {
injectServiceNodes = true
}
options := Options{
AccessibleNamespaces: accessibleNamespaces,
IncludeIstio: includeIstio,
InjectServiceNodes: injectServiceNodes,
Namespaces: namespaceMap,
Vendor: vendor,
NodeOptions: NodeOptions{
App: app,
Namespace: namespace,
Service: service,
Version: version,
Workload: workload,
},
VendorOptions: VendorOptions{
Duration: duration,
GraphType: graphType,
GroupBy: groupBy,
QueryTime: queryTime,
},
}
appenders := parseAppenders(params, options)
options.Appenders = appenders
return options
}
// GetGraphKind will return the kind of graph represented by the options.
func (o *Options) GetGraphKind() string {
if o.NodeOptions.App != "" ||
o.NodeOptions.Version != "" ||
o.NodeOptions.Workload != "" ||
o.NodeOptions.Service != "" {
return graphKindNode
} else {
return graphKindNamespace
}
}
func parseAppenders(params url.Values, o Options) []appender.Appender {
requestedAppenders := make(map[string]bool)
allAppenders := false
if _, ok := params["appenders"]; ok {
for _, requestedAppender := range strings.Split(params.Get("appenders"), ",") {
switch strings.TrimSpace(requestedAppender) {
case appender.DeadNodeAppenderName:
requestedAppenders[appender.DeadNodeAppenderName] = true
case appender.ServiceEntryAppenderName:
requestedAppenders[appender.ServiceEntryAppenderName] = true
case appender.IstioAppenderName:
requestedAppenders[appender.IstioAppenderName] = true
case appender.ResponseTimeAppenderName:
requestedAppenders[appender.ResponseTimeAppenderName] = true
case appender.SecurityPolicyAppenderName:
requestedAppenders[appender.SecurityPolicyAppenderName] = true
case appender.SidecarsCheckAppenderName:
requestedAppenders[appender.SidecarsCheckAppenderName] = true
case appender.UnusedNodeAppenderName:
requestedAppenders[appender.UnusedNodeAppenderName] = true
case "":
// skip
default:
graph.BadRequest(fmt.Sprintf("Invalid appender [%s]", strings.TrimSpace(requestedAppender)))
}
}
} else {
allAppenders = true
}
// The appender order is important
// To pre-process service nodes run service_entry appender first
// To reduce processing, filter dead nodes next
// To reduce processing, next run appenders that don't apply to unused services
// Add orphan (unused) services
// Run remaining appenders
var appenders []appender.Appender
if _, ok := requestedAppenders[appender.ServiceEntryAppenderName]; ok || allAppenders {
a := appender.ServiceEntryAppender{
AccessibleNamespaces: o.AccessibleNamespaces,
}
appenders = append(appenders, a)
}
if _, ok := requestedAppenders[appender.DeadNodeAppenderName]; ok || allAppenders {
a := appender.DeadNodeAppender{}
appenders = append(appenders, a)
}
if _, ok := requestedAppenders[appender.ResponseTimeAppenderName]; ok || allAppenders {
quantile := appender.DefaultQuantile
if _, ok := params["responseTimeQuantile"]; ok {
if responseTimeQuantile, err := strconv.ParseFloat(params.Get("responseTimeQuantile"), 64); err == nil {
quantile = responseTimeQuantile
}
}
a := appender.ResponseTimeAppender{
Quantile: quantile,
GraphType: o.GraphType,
InjectServiceNodes: o.InjectServiceNodes,
IncludeIstio: o.IncludeIstio,
Namespaces: o.Namespaces,
QueryTime: o.QueryTime,
}
appenders = append(appenders, a)
}
if _, ok := requestedAppenders[appender.SecurityPolicyAppenderName]; ok || allAppenders {
a := appender.SecurityPolicyAppender{
GraphType: o.GraphType,
IncludeIstio: o.IncludeIstio,
InjectServiceNodes: o.InjectServiceNodes,
Namespaces: o.Namespaces,
QueryTime: o.QueryTime,
}
appenders = append(appenders, a)
}
if _, ok := requestedAppenders[appender.UnusedNodeAppenderName]; ok || allAppenders {
hasNodeOptions := o.App != "" || o.Workload != "" || o.Service != ""
a := appender.UnusedNodeAppender{
GraphType: o.GraphType,
IsNodeGraph: hasNodeOptions,
}
appenders = append(appenders, a)
}
if _, ok := requestedAppenders[appender.IstioAppenderName]; ok || allAppenders {
a := appender.IstioAppender{}
appenders = append(appenders, a)
}
if _, ok := requestedAppenders[appender.SidecarsCheckAppenderName]; ok || allAppenders {
a := appender.SidecarsCheckAppender{}
appenders = append(appenders, a)
}
return appenders
}
// getAccessibleNamespaces returns a Set of all namespaces accessible to the user.
// The Set is implemented using the map convention. Each map entry is set to the
// creation timestamp of the namespace, to be used to ensure valid time ranges for
// queries against the namespace.
func getAccessibleNamespaces() map[string]time.Time {
// Get the namespaces
business, err := business.Get()
graph.CheckError(err)
namespaces, err := business.Namespace.GetNamespaces()
graph.CheckError(err)
// Create a map to store the namespaces
namespaceMap := make(map[string]time.Time)
for _, namespace := range namespaces {
namespaceMap[namespace.Name] = namespace.CreationTimestamp
}
return namespaceMap
}
// resolveNamespaceDuration checks whether, given queryTime, the requestedRange would
// query data from before nsCreationTime. If so, it returns an adjusted (shorter)
// range; otherwise the original requestedRange is returned.
func resolveNamespaceDuration(nsCreationTime time.Time, requestedRange time.Duration, queryTime int64) time.Duration {
var referenceTime time.Time
resolvedBound := requestedRange
if !nsCreationTime.IsZero() {
if queryTime != 0 {
referenceTime = time.Unix(queryTime, 0)
} else {
referenceTime = time.Now()
}
nsLifetime := referenceTime.Sub(nsCreationTime)
if nsLifetime < resolvedBound {
resolvedBound = nsLifetime
}
}
return resolvedBound
}

vendor/github.com/kiali/kiali/graph/protocol.go generated vendored Normal file

@@ -0,0 +1,203 @@
package graph
import (
"fmt"
"strings"
"github.com/kiali/kiali/log"
)
type Rate struct {
Name string
IsErr bool
IsIn bool
IsOut bool
IsPercentErr bool
IsPercentReq bool
IsTotal bool
Precision int
}
type Protocol struct {
Name string
EdgeRates []Rate
NodeRates []Rate
Unit string
UnitShort string
}
var GRPC Protocol = Protocol{
Name: "grpc",
EdgeRates: []Rate{
Rate{Name: "grpc", IsTotal: true, Precision: 2},
Rate{Name: "grpcErr", IsErr: true, Precision: 2},
Rate{Name: "grpcPercentErr", IsPercentErr: true, Precision: 1},
Rate{Name: "grpcPercentReq", IsPercentReq: true, Precision: 1},
},
NodeRates: []Rate{
Rate{Name: "grpcIn", IsIn: true, Precision: 2},
Rate{Name: "grpcInErr", IsErr: true, Precision: 2},
Rate{Name: "grpcOut", IsOut: true, Precision: 2},
},
Unit: "requests per second",
UnitShort: "rps",
}
var HTTP Protocol = Protocol{
Name: "http",
EdgeRates: []Rate{
Rate{Name: "http", IsTotal: true, Precision: 2},
Rate{Name: "http3xx", Precision: 2},
Rate{Name: "http4xx", IsErr: true, Precision: 2},
Rate{Name: "http5xx", IsErr: true, Precision: 2},
Rate{Name: "httpPercentErr", IsPercentErr: true, Precision: 1},
Rate{Name: "httpPercentReq", IsPercentReq: true, Precision: 1},
},
NodeRates: []Rate{
Rate{Name: "httpIn", IsIn: true, Precision: 2},
Rate{Name: "httpIn3xx", Precision: 2},
Rate{Name: "httpIn4xx", IsErr: true, Precision: 2},
Rate{Name: "httpIn5xx", IsErr: true, Precision: 2},
Rate{Name: "httpOut", IsOut: true, Precision: 2},
},
Unit: "requests per second",
UnitShort: "rps",
}
var TCP Protocol = Protocol{
Name: "tcp",
EdgeRates: []Rate{
Rate{Name: "tcp", IsTotal: true, Precision: 2},
},
NodeRates: []Rate{
Rate{Name: "tcpIn", IsIn: true, Precision: 2},
Rate{Name: "tcpOut", IsOut: true, Precision: 2},
},
Unit: "bytes per second",
UnitShort: "bps",
}
var Protocols []Protocol = []Protocol{GRPC, HTTP, TCP}
func AddToMetadata(protocol string, val float64, code string, sourceMetadata, destMetadata, edgeMetadata map[string]interface{}) {
switch protocol {
case "grpc":
addToMetadataGrpc(val, code, sourceMetadata, destMetadata, edgeMetadata)
case "http":
addToMetadataHttp(val, code, sourceMetadata, destMetadata, edgeMetadata)
case "tcp":
addToMetadataTcp(val, code, sourceMetadata, destMetadata, edgeMetadata)
default:
log.Tracef("Ignore unhandled metadata protocol [%s]", protocol)
}
}
func addToMetadataGrpc(val float64, code string, sourceMetadata, destMetadata, edgeMetadata map[string]interface{}) {
addToMetadataValue(sourceMetadata, "grpcOut", val)
addToMetadataValue(destMetadata, "grpcIn", val)
addToMetadataValue(edgeMetadata, "grpc", val)
// Istio telemetry may use HTTP codes for gRPC, so if it quacks like a duck...
isHttpCode := len(code) == 3
isErr := false
if isHttpCode {
isErr = strings.HasPrefix(code, "4") || strings.HasPrefix(code, "5")
} else {
isErr = code != "0"
}
if isErr {
addToMetadataValue(destMetadata, "grpcInErr", val)
addToMetadataValue(edgeMetadata, "grpcErr", val)
}
}
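The error test above hinges on code shape: a 3-character code is assumed to be an HTTP status (error when 4xx/5xx), anything else a gRPC status code (error when non-zero). A sketch of just that heuristic:

```go
package main

import (
	"fmt"
	"strings"
)

// isGrpcErr mirrors the heuristic above: Istio telemetry may report HTTP
// status codes for gRPC traffic, so 3-character codes are treated as HTTP
// (4xx/5xx = error) while anything else is a gRPC status (non-zero = error).
func isGrpcErr(code string) bool {
	if len(code) == 3 {
		return strings.HasPrefix(code, "4") || strings.HasPrefix(code, "5")
	}
	return code != "0"
}

func main() {
	fmt.Println(isGrpcErr("503"), isGrpcErr("200"), isGrpcErr("14"), isGrpcErr("0"))
	// true false true false
}
```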
func addToMetadataHttp(val float64, code string, sourceMetadata, destMetadata, edgeMetadata map[string]interface{}) {
addToMetadataValue(sourceMetadata, "httpOut", val)
addToMetadataValue(destMetadata, "httpIn", val)
addToMetadataValue(edgeMetadata, "http", val)
// note, we don't track 2xx because it's not used downstream and can be easily
// calculated: 2xx = (rate - 3xx - 4xx - 5xx)
switch {
case strings.HasPrefix(code, "3"):
addToMetadataValue(destMetadata, "httpIn3xx", val)
addToMetadataValue(edgeMetadata, "http3xx", val)
case strings.HasPrefix(code, "4"):
addToMetadataValue(destMetadata, "httpIn4xx", val)
addToMetadataValue(edgeMetadata, "http4xx", val)
case strings.HasPrefix(code, "5"):
addToMetadataValue(destMetadata, "httpIn5xx", val)
addToMetadataValue(edgeMetadata, "http5xx", val)
}
}
func addToMetadataTcp(val float64, code string, sourceMetadata, destMetadata, edgeMetadata map[string]interface{}) {
addToMetadataValue(sourceMetadata, "tcpOut", val)
addToMetadataValue(destMetadata, "tcpIn", val)
addToMetadataValue(edgeMetadata, "tcp", val)
}
func AddOutgoingEdgeToMetadata(sourceMetadata, edgeMetadata map[string]interface{}) {
if val, valOk := edgeMetadata["grpc"]; valOk {
addToMetadataValue(sourceMetadata, "grpcOut", val.(float64))
}
if val, valOk := edgeMetadata["http"]; valOk {
addToMetadataValue(sourceMetadata, "httpOut", val.(float64))
}
if val, valOk := edgeMetadata["tcp"]; valOk {
addToMetadataValue(sourceMetadata, "tcpOut", val.(float64))
}
}
func AddServiceGraphTraffic(toEdge, fromEdge *Edge) {
protocol := toEdge.Metadata["protocol"]
switch protocol {
case "grpc":
addToMetadataValue(toEdge.Metadata, "grpc", fromEdge.Metadata["grpc"].(float64))
if val, ok := fromEdge.Metadata["grpcErr"]; ok {
addToMetadataValue(toEdge.Metadata, "grpcErr", val.(float64))
}
case "http":
addToMetadataValue(toEdge.Metadata, "http", fromEdge.Metadata["http"].(float64))
if val, ok := fromEdge.Metadata["http3xx"]; ok {
addToMetadataValue(toEdge.Metadata, "http3xx", val.(float64))
}
if val, ok := fromEdge.Metadata["http4xx"]; ok {
addToMetadataValue(toEdge.Metadata, "http4xx", val.(float64))
}
if val, ok := fromEdge.Metadata["http5xx"]; ok {
addToMetadataValue(toEdge.Metadata, "http5xx", val.(float64))
}
case "tcp":
addToMetadataValue(toEdge.Metadata, "tcp", fromEdge.Metadata["tcp"].(float64))
default:
Error(fmt.Sprintf("Unexpected edge protocol [%v] for edge [%+v]", protocol, toEdge))
}
// handle any appender-based edge data (nothing currently)
// note: We used to average response times of the aggregated edges but realized that
// we can't average quantiles (kiali-2297).
}
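The kiali-2297 note above is worth a tiny illustration: quantiles do not compose, so averaging the per-edge quantiles of aggregated edges gives the wrong answer. Medians make the point with small numbers (this is a standalone demonstration, not Kiali code):

```go
package main

import (
	"fmt"
	"sort"
)

// median returns the middle value of a sample, averaging the middle pair
// for even-length input.
func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

func main() {
	a := []float64{1, 2, 3}
	b := []float64{10, 20, 30}
	avgOfMedians := (median(a) + median(b)) / 2
	merged := median(append(append([]float64{}, a...), b...))
	// The average of the two medians (11) is not the median of the
	// combined sample (6.5), which is why aggregated edge quantiles
	// cannot simply be averaged.
	fmt.Println(avgOfMedians, merged)
}
```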
func addToMetadataValue(md map[string]interface{}, k string, v float64) {
if curr, ok := md[k]; ok {
md[k] = curr.(float64) + v
} else {
md[k] = v
}
}
func averageMetadataValue(md map[string]interface{}, k string, v float64) {
total := v
count := 1.0
kTotal := k + "_total"
kCount := k + "_count"
if prevTotal, ok := md[kTotal]; ok {
total += prevTotal.(float64)
}
if prevCount, ok := md[kCount]; ok {
count += prevCount.(float64)
}
md[kTotal] = total
md[kCount] = count
md[k] = total / count
}

vendor/github.com/kiali/kiali/graph/util.go generated vendored Normal file

@@ -0,0 +1,45 @@
package graph
import (
"net/http"
)
type Response struct {
Message string
Code int
}
// Error panics with InternalServerError and the provided message
func Error(message string) {
Panic(message, http.StatusInternalServerError)
}
// BadRequest panics with BadRequest and the provided message
func BadRequest(message string) {
Panic(message, http.StatusBadRequest)
}
// Forbidden panics with Forbidden and the provided message
func Forbidden(message string) {
Panic(message, http.StatusForbidden)
}
// Panic panics with the provided HTTP response code and message
func Panic(message string, code int) Response {
panic(Response{
Message: message,
Code: code,
})
}
// CheckError panics with the supplied error if it is non-nil
func CheckError(err error) {
if err != nil {
panic(err.Error())
}
}
// IsOK just validates that a telemetry label value is not empty or unknown
func IsOK(telemetryVal string) bool {
return telemetryVal != "" && telemetryVal != Unknown
}

vendor/github.com/kiali/kiali/handlers/apps.go generated vendored Normal file

@@ -0,0 +1,158 @@
package handlers
import (
"net/http"
"github.com/gorilla/mux"
"k8s.io/apimachinery/pkg/api/errors"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus"
)
// AppList is the API handler to fetch all the apps to be displayed, related to a single namespace
func AppList(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Apps initialization error: "+err.Error())
return
}
namespace := params["namespace"]
// Fetch and build apps
appList, err := business.App.GetAppList(namespace)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, appList)
}
// AppDetails is the API handler to fetch all details to be displayed, related to a single app
func AppDetails(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Apps initialization error: "+err.Error())
return
}
namespace := params["namespace"]
app := params["app"]
// Fetch and build app
appDetails, err := business.App.GetApp(namespace, app)
if err != nil {
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
} else {
RespondWithError(w, http.StatusInternalServerError, err.Error())
}
return
}
RespondWithJSON(w, http.StatusOK, appDetails)
}
// AppMetrics is the API handler to fetch metrics to be displayed, related to an app-label grouping
func AppMetrics(w http.ResponseWriter, r *http.Request) {
getAppMetrics(w, r, defaultPromClientSupplier, defaultK8SClientSupplier)
}
// getAppMetrics (mock-friendly version)
func getAppMetrics(w http.ResponseWriter, r *http.Request, promSupplier promClientSupplier, k8sSupplier k8sClientSupplier) {
vars := mux.Vars(r)
namespace := vars["namespace"]
app := vars["app"]
prom, _, namespaceInfo := initClientsForMetrics(w, promSupplier, k8sSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
params := prometheus.IstioMetricsQuery{Namespace: namespace, App: app}
err := extractIstioMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
metrics := prom.GetMetrics(&params)
RespondWithJSON(w, http.StatusOK, metrics)
}
// CustomDashboard is the API handler to fetch runtime metrics to be displayed, related to a single app
func CustomDashboard(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
namespace := vars["namespace"]
app := vars["app"]
template := vars["template"]
prom, _, namespaceInfo := initClientsForMetrics(w, defaultPromClientSupplier, defaultK8SClientSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
monitoringClient, err := kubernetes.NewKialiMonitoringClient()
if err != nil {
log.Error(err)
RespondWithError(w, http.StatusServiceUnavailable, "Kiali monitoring client error: "+err.Error())
return
}
svc := business.NewDashboardsService(monitoringClient, prom)
params := prometheus.CustomMetricsQuery{Namespace: namespace, App: app}
err = extractCustomMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
dashboard, err := svc.GetDashboard(params, template)
if err != nil {
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
} else {
RespondWithError(w, http.StatusInternalServerError, err.Error())
}
return
}
RespondWithJSON(w, http.StatusOK, dashboard)
}
// AppDashboard is the API handler to fetch Istio dashboard, related to a single app
func AppDashboard(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
namespace := vars["namespace"]
app := vars["app"]
prom, _, namespaceInfo := initClientsForMetrics(w, defaultPromClientSupplier, defaultK8SClientSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
params := prometheus.IstioMetricsQuery{Namespace: namespace, App: app}
err := extractIstioMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
svc := business.NewDashboardsService(nil, prom)
dashboard, err := svc.GetIstioDashboard(params)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, dashboard)
}

vendor/github.com/kiali/kiali/handlers/base.go generated vendored Normal file

@@ -0,0 +1,38 @@
package handlers
import (
"encoding/json"
"net/http"
)
func RespondWithJSON(w http.ResponseWriter, code int, payload interface{}) {
response, err := json.Marshal(payload)
if err != nil {
response, _ = json.Marshal(map[string]string{"error": err.Error()})
code = http.StatusInternalServerError
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
w.Write(response)
}
func RespondWithJSONIndent(w http.ResponseWriter, code int, payload interface{}) {
response, err := json.MarshalIndent(payload, "", " ")
if err != nil {
response, _ = json.Marshal(map[string]string{"error": err.Error()})
code = http.StatusInternalServerError
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
w.Write(response)
}
func RespondWithError(w http.ResponseWriter, code int, message string) {
RespondWithJSON(w, code, map[string]string{"error": message})
}
func RespondWithCode(w http.ResponseWriter, code int) {
w.WriteHeader(code)
}

vendor/github.com/kiali/kiali/handlers/config.go generated vendored Normal file

@@ -0,0 +1,102 @@
package handlers
import (
"fmt"
"net/http"
"time"
yaml "gopkg.in/yaml.v2"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus"
)
const (
defaultPrometheusGlobalScrapeInterval = 15 // seconds
)
// PrometheusConfig holds actual Prometheus configuration that is useful to Kiali.
// All durations are in seconds.
type PrometheusConfig struct {
GlobalScrapeInterval int64 `json:"globalScrapeInterval,omitempty"`
StorageTsdbRetention int64 `json:"storageTsdbRetention,omitempty"`
}
// PublicConfig is a subset of Kiali configuration that can be exposed to clients to
// help them interact with the system.
type PublicConfig struct {
IstioNamespace string `json:"istioNamespace,omitempty"`
IstioLabels config.IstioLabels `json:"istioLabels,omitempty"`
Prometheus PrometheusConfig `json:"prometheus,omitempty"`
}
// Config is a REST http.HandlerFunc serving up the Kiali configuration made public to clients.
func Config(w http.ResponseWriter, r *http.Request) {
defer handlePanic(w)
// Note that we determine the Prometheus config at request time because it is not
// guaranteed to remain the same during the Kiali lifespan.
promConfig := getPrometheusConfig()
config := config.Get()
publicConfig := PublicConfig{
IstioNamespace: config.IstioNamespace,
IstioLabels: config.IstioLabels,
Prometheus: PrometheusConfig{
GlobalScrapeInterval: promConfig.GlobalScrapeInterval,
StorageTsdbRetention: promConfig.StorageTsdbRetention,
},
}
RespondWithJSONIndent(w, http.StatusOK, publicConfig)
}
type PrometheusPartialConfig struct {
Global struct {
Scrape_interval string
}
}
func getPrometheusConfig() PrometheusConfig {
promConfig := PrometheusConfig{
GlobalScrapeInterval: defaultPrometheusGlobalScrapeInterval,
}
client, err := prometheus.NewClient()
if !checkErr(err, "") {
log.Error(err)
return promConfig
}
configResult, err := client.GetConfiguration()
if checkErr(err, "Failed to fetch Prometheus configuration") {
var config PrometheusPartialConfig
if checkErr(yaml.Unmarshal([]byte(configResult.YAML), &config), "Failed to unmarshal Prometheus configuration") {
scrapeIntervalString := config.Global.Scrape_interval
scrapeInterval, err := time.ParseDuration(scrapeIntervalString)
if checkErr(err, fmt.Sprintf("Invalid global scrape interval [%s]", scrapeIntervalString)) {
promConfig.GlobalScrapeInterval = int64(scrapeInterval.Seconds())
}
}
}
flags, err := client.GetFlags()
if checkErr(err, "Failed to fetch Prometheus flags") {
if retentionString, ok := flags["storage.tsdb.retention"]; ok {
retention, err := time.ParseDuration(retentionString)
if checkErr(err, fmt.Sprintf("Invalid storage.tsdb.retention [%s]", retentionString)) {
promConfig.StorageTsdbRetention = int64(retention.Seconds())
}
}
}
return promConfig
}
func checkErr(err error, message string) bool {
if err != nil {
log.Errorf("%s: %v", message, err)
return false
}
return true
}
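The conversions above boil down to parsing a Prometheus duration string and taking whole seconds, with a default when parsing fails. A sketch of that step (`seconds` is an illustrative helper):

```go
package main

import (
	"fmt"
	"time"
)

// seconds converts a duration string (e.g. a Prometheus scrape_interval
// or storage.tsdb.retention value) to whole seconds, returning fallback
// when the string does not parse.
func seconds(s string, fallback int64) int64 {
	d, err := time.ParseDuration(s)
	if err != nil {
		return fallback
	}
	return int64(d.Seconds())
}

func main() {
	fmt.Println(seconds("15s", 15))   // 15
	fmt.Println(seconds("6h", 15))    // 21600
	fmt.Println(seconds("bogus", 15)) // 15 (fallback)
}
```

Note one caveat: `time.ParseDuration` does not accept day units, so a retention flag like `15d` would fail to parse and fall back, just as in the handler above.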

vendor/github.com/kiali/kiali/handlers/grafana.go generated vendored Normal file

@@ -0,0 +1,163 @@
package handlers
import (
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"k8s.io/api/core/v1"
k8serr "k8s.io/apimachinery/pkg/api/errors"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
)
type serviceSupplier func(string, string) (*v1.ServiceSpec, error)
type dashboardSupplier func(string, string, string) ([]byte, int, error)
// GetGrafanaInfo provides the Grafana URL and other info, first by checking if a config exists
// then (if not) by inspecting the Kubernetes Grafana service in namespace istio-system
func GetGrafanaInfo(w http.ResponseWriter, r *http.Request) {
info, code, err := getGrafanaInfo(getService, findDashboard)
if err != nil {
log.Error(err)
RespondWithError(w, code, err.Error())
return
}
RespondWithJSON(w, code, info)
}
// getGrafanaInfo returns the Grafana URL and other info, the HTTP status code (int) and an error, if any
func getGrafanaInfo(serviceSupplier serviceSupplier, dashboardSupplier dashboardSupplier) (*models.GrafanaInfo, int, error) {
grafanaConfig := config.Get().ExternalServices.Grafana
if !grafanaConfig.DisplayLink {
return nil, http.StatusNoContent, nil
}
// Check if URL is in the configuration
if grafanaConfig.URL == "" {
return nil, http.StatusServiceUnavailable, errors.New("Grafana URL is not set in Kiali configuration")
}
// Check if URL is valid
_, err := validateURL(grafanaConfig.URL)
if err != nil {
return nil, http.StatusServiceUnavailable, errors.New("Wrong format for Grafana URL in Kiali configuration: " + err.Error())
}
// Find the in-cluster URL to reach Grafana's REST API
spec, err := serviceSupplier(grafanaConfig.ServiceNamespace, grafanaConfig.Service)
if err != nil {
if k8serr.IsNotFound(err) {
return nil, http.StatusServiceUnavailable, err
}
return nil, http.StatusInternalServerError, err
}
if len(spec.Ports) == 0 {
return nil, http.StatusServiceUnavailable, errors.New("No port found for Grafana service, cannot access in-cluster service")
}
if len(spec.Ports) > 1 {
log.Warning("Several ports found for Grafana service, picking the first one")
}
internalURL := fmt.Sprintf("http://%s.%s:%d", grafanaConfig.Service, grafanaConfig.ServiceNamespace, spec.Ports[0].Port)
credentials, err := buildAuthHeader(grafanaConfig)
if err != nil {
log.Warning("Failed to build auth header token: " + err.Error())
}
// Call Grafana REST API to get dashboard urls
serviceDashboardPath, err := getDashboardPath(internalURL, grafanaConfig.ServiceDashboardPattern, credentials, dashboardSupplier)
if err != nil {
return nil, http.StatusInternalServerError, err
}
workloadDashboardPath, err := getDashboardPath(internalURL, grafanaConfig.WorkloadDashboardPattern, credentials, dashboardSupplier)
if err != nil {
return nil, http.StatusInternalServerError, err
}
grafanaInfo := models.GrafanaInfo{
URL: grafanaConfig.URL,
ServiceDashboardPath: serviceDashboardPath,
WorkloadDashboardPath: workloadDashboardPath,
VarNamespace: grafanaConfig.VarNamespace,
VarService: grafanaConfig.VarService,
VarWorkload: grafanaConfig.VarWorkload,
}
return &grafanaInfo, http.StatusOK, nil
}
func getDashboardPath(url string, searchPattern string, credentials string, dashboardSupplier dashboardSupplier) (string, error) {
body, code, err := dashboardSupplier(url, searchPattern, credentials)
if err != nil {
return "", err
}
if code != http.StatusOK {
// Get error message
var f map[string]string
err = json.Unmarshal(body, &f)
if err != nil {
return "", fmt.Errorf("Unknown error from Grafana (%d)", code)
}
message, ok := f["message"]
if !ok {
return "", fmt.Errorf("Unknown error from Grafana (%d)", code)
}
return "", fmt.Errorf("Error from Grafana (%d): %s", code, message)
}
// Status OK, read dashboards info
var dashboards []map[string]interface{}
err = json.Unmarshal(body, &dashboards)
if err != nil {
return "", err
}
if len(dashboards) == 0 {
return "", fmt.Errorf("No Grafana dashboard found for search pattern '%s'", searchPattern)
}
if len(dashboards) > 1 {
log.Infof("Several Grafana dashboards found for pattern '%s', picking the first one", searchPattern)
}
dashPath, ok := dashboards[0]["url"]
if !ok {
return "", fmt.Errorf("URL field not found in Grafana dashboard for search pattern '%s'", searchPattern)
}
return dashPath.(string), nil
}
func findDashboard(url, searchPattern string, credentials string) ([]byte, int, error) {
req, err := http.NewRequest(http.MethodGet, url+"/api/search?query="+searchPattern, nil)
if err != nil {
return nil, 0, err
}
if credentials != "" {
req.Header.Add("Authorization", credentials)
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
return nil, 0, err
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
return body, resp.StatusCode, err
}
func buildAuthHeader(grafanaConfig config.GrafanaConfig) (string, error) {
var credHeader string
if grafanaConfig.APIKey != "" {
credHeader = "Bearer " + grafanaConfig.APIKey
} else if grafanaConfig.Username != "" {
if grafanaConfig.Password == "" {
return "", fmt.Errorf("Grafana username set but no Grafana password provided")
}
basicAuth := base64.StdEncoding.EncodeToString([]byte(grafanaConfig.Username + ":" + grafanaConfig.Password))
credHeader = "Basic " + basicAuth
}
return credHeader, nil
}
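The credential logic in buildAuthHeader reduces to: an API key wins as a Bearer token, otherwise username/password become a Basic credential. A self-contained sketch using plain string parameters instead of the config struct:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// authHeader mirrors buildAuthHeader above: prefer an API key as a
// Bearer token; otherwise base64-encode "user:pass" for Basic auth.
func authHeader(apiKey, user, pass string) string {
	if apiKey != "" {
		return "Bearer " + apiKey
	}
	if user != "" {
		return "Basic " + base64.StdEncoding.EncodeToString([]byte(user+":"+pass))
	}
	return ""
}

func main() {
	fmt.Println(authHeader("", "user", "pass")) // Basic dXNlcjpwYXNz
	fmt.Println(authHeader("abc123", "", ""))   // Bearer abc123
}
```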

vendor/github.com/kiali/kiali/handlers/graph.go generated vendored Normal file

@@ -0,0 +1,979 @@
package handlers
// Graph.go provides handlers for graph request endpoints. The handlers return configuration
// for a specified vendor (default cytoscape). The configuration format is vendor-specific, typically
// JSON, and provides what is necessary to allow the vendor's graphing tool to render the graph.
//
// The algorithm is three-pass:
// First Pass: Query Prometheus (istio-requests-total metric) to retrieve the source-destination
// dependencies. Build a traffic map to provide a full representation of nodes and edges.
//
// Second Pass: Apply any requested appenders to alter or append to the graph.
//
// Third Pass: Supply the traffic map to a vendor-specific config generator that
// constructs the vendor-specific output.
//
// The current Handlers:
// GraphNamespace: Generate a graph for all services in a namespace (whether source or destination)
// GraphNode: Generate a graph centered on a specified node, limited to requesting and requested nodes.
//
// The handlers accept the following query parameters (some handlers may ignore some parameters):
// appenders: Comma-separated list of appenders to run from [circuit_breaker, unused_service...] (default all)
// Note, appenders may support appender-specific query parameters
// duration: time.Duration indicating desired query range duration, (default 10m)
// graphType: Determines how to present the telemetry data. app | service | versionedApp | workload (default workload)
// groupBy: If supported by vendor, visually group by a specified node attribute (default version)
// includeIstio: Include istio-system (infra) services (default false)
// namespaces: Comma-separated list of namespace names to use in the graph. Will override namespace path param
// queryTime: Unix time (seconds) for query such that range is queryTime-duration..queryTime (default now)
// vendor: cytoscape (default cytoscape)
//
// * Error% is the percentage of requests with response code != 2XX
// * See the vendor-specific config generators for more details about the specific vendor.
//
import (
"context"
"fmt"
"net/http"
"runtime/debug"
"time"
"github.com/prometheus/client_golang/api/prometheus/v1"
"github.com/prometheus/common/model"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/graph"
"github.com/kiali/kiali/graph/appender"
"github.com/kiali/kiali/graph/cytoscape"
"github.com/kiali/kiali/graph/options"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus"
"github.com/kiali/kiali/prometheus/internalmetrics"
)
// GraphNamespaces is a REST http.HandlerFunc handling graph generation for 1 or more namespaces
func GraphNamespaces(w http.ResponseWriter, r *http.Request) {
defer handlePanic(w)
client, err := prometheus.NewClient()
graph.CheckError(err)
graphNamespaces(w, r, client)
}
// graphNamespaces provides a testing hook that can supply a mock client
func graphNamespaces(w http.ResponseWriter, r *http.Request, client *prometheus.Client) {
o := options.NewOptions(r)
// time how long it takes to generate this graph
promtimer := internalmetrics.GetGraphGenerationTimePrometheusTimer(o.GetGraphKind(), o.GraphType, o.InjectServiceNodes)
defer promtimer.ObserveDuration()
trafficMap := buildNamespacesTrafficMap(o, client)
generateGraph(trafficMap, w, o)
// update metrics
internalmetrics.SetGraphNodes(o.GetGraphKind(), o.GraphType, o.InjectServiceNodes, len(trafficMap))
}
func buildNamespacesTrafficMap(o options.Options, client *prometheus.Client) graph.TrafficMap {
switch o.Vendor {
case "cytoscape":
default:
graph.Error(fmt.Sprintf("Vendor [%s] not supported", o.Vendor))
}
log.Debugf("Build [%s] graph for [%v] namespaces [%s]", o.GraphType, len(o.Namespaces), o.Namespaces)
trafficMap := graph.NewTrafficMap()
globalInfo := appender.NewGlobalInfo()
for _, namespace := range o.Namespaces {
log.Debugf("Build traffic map for namespace [%s]", namespace)
namespaceTrafficMap := buildNamespaceTrafficMap(namespace.Name, o, client)
namespaceInfo := appender.NewNamespaceInfo(namespace.Name)
for _, a := range o.Appenders {
appenderTimer := internalmetrics.GetGraphAppenderTimePrometheusTimer(a.Name())
a.AppendGraph(namespaceTrafficMap, globalInfo, namespaceInfo)
appenderTimer.ObserveDuration()
}
mergeTrafficMaps(trafficMap, namespace.Name, namespaceTrafficMap)
}
// The appenders can add/remove/alter nodes. After the manipulations are complete
// we can make some final adjustments:
// - mark the outsiders (i.e. nodes not in the requested namespaces)
// - mark the insider traffic generators (i.e. inside the namespace and only outgoing edges)
markOutsideOrInaccessible(trafficMap, o)
markTrafficGenerators(trafficMap)
if graph.GraphTypeService == o.GraphType {
trafficMap = reduceToServiceGraph(trafficMap)
}
return trafficMap
}
// mergeTrafficMaps ensures that we only have unique nodes by removing duplicate
// nodes and merging their edges. When removing a duplicate prefer an instance
// from the namespace being merged-in because it is guaranteed to have all appender
// information applied (i.e. not an outsider). We also need to avoid duplicate edges,
// it can happen when a terminal node of one namespace is a root node of another:
// ns1 graph: unknown -> ns1:A -> ns2:B
// ns2 graph: ns1:A -> ns2:B -> ns2:C
func mergeTrafficMaps(trafficMap graph.TrafficMap, ns string, nsTrafficMap graph.TrafficMap) {
for nsId, nsNode := range nsTrafficMap {
if node, isDup := trafficMap[nsId]; isDup {
if nsNode.Namespace == ns {
// prefer nsNode (see above comment), so do a swap
trafficMap[nsId] = nsNode
node, nsNode = nsNode, node
}
for _, nsEdge := range nsNode.Edges {
isDupEdge := false
for _, e := range node.Edges {
if nsEdge.Dest.ID == e.Dest.ID && nsEdge.Metadata["protocol"] == e.Metadata["protocol"] {
isDupEdge = true
break
}
}
if !isDupEdge {
node.Edges = append(node.Edges, nsEdge)
// add traffic for the new edge
graph.AddOutgoingEdgeToMetadata(node.Metadata, nsEdge.Metadata)
}
}
} else {
trafficMap[nsId] = nsNode
}
}
}
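The duplicate-edge test above keys on the (destination ID, protocol) pair. That merge step can be sketched with simplified types; `edge` and `mergeEdges` here are illustrative stand-ins for the graph package's richer structures:

```go
package main

import "fmt"

type edge struct{ dest, protocol string }

// mergeEdges appends only edges whose (dest, protocol) pair is not
// already present, the same duplicate test mergeTrafficMaps applies
// when a terminal node of one namespace is a root node of another.
func mergeEdges(existing, incoming []edge) []edge {
	seen := map[edge]bool{}
	for _, e := range existing {
		seen[e] = true
	}
	for _, e := range incoming {
		if !seen[e] {
			existing = append(existing, e)
			seen[e] = true
		}
	}
	return existing
}

func main() {
	out := mergeEdges(
		[]edge{{"ns2:B", "http"}},
		[]edge{{"ns2:B", "http"}, {"ns2:C", "tcp"}}, // first is a duplicate
	)
	fmt.Println(len(out)) // 2
}
```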
func markOutsideOrInaccessible(trafficMap graph.TrafficMap, o options.Options) {
for _, n := range trafficMap {
switch n.NodeType {
case graph.NodeTypeUnknown:
n.Metadata["isInaccessible"] = true
case graph.NodeTypeService:
if _, ok := n.Metadata["isServiceEntry"]; ok {
n.Metadata["isInaccessible"] = true
} else {
if isOutside(n, o.Namespaces) {
n.Metadata["isOutside"] = true
}
}
default:
if isOutside(n, o.Namespaces) {
n.Metadata["isOutside"] = true
}
}
if isOutsider, ok := n.Metadata["isOutside"]; ok && isOutsider.(bool) {
if _, ok2 := n.Metadata["isInaccessible"]; !ok2 {
if isInaccessible(n, o.AccessibleNamespaces) {
n.Metadata["isInaccessible"] = true
}
}
}
}
}
func isOutside(n *graph.Node, namespaces map[string]graph.NamespaceInfo) bool {
if n.Namespace == graph.Unknown {
return false
}
for _, ns := range namespaces {
if n.Namespace == ns.Name {
return false
}
}
return true
}
func isInaccessible(n *graph.Node, accessibleNamespaces map[string]time.Time) bool {
_, found := accessibleNamespaces[n.Namespace]
return !found
}
func markTrafficGenerators(trafficMap graph.TrafficMap) {
destMap := make(map[string]*graph.Node)
for _, n := range trafficMap {
for _, e := range n.Edges {
destMap[e.Dest.ID] = e.Dest
}
}
for _, n := range trafficMap {
if len(n.Edges) == 0 {
continue
}
if _, isDest := destMap[n.ID]; !isDest {
n.Metadata["isRoot"] = true
}
}
}
// reduceToServiceGraph compresses a [service-injected workload] graph by removing
// the workload nodes such that, with exception of non-service root nodes, the resulting
// graph has edges only from and to service nodes.
func reduceToServiceGraph(trafficMap graph.TrafficMap) graph.TrafficMap {
reducedTrafficMap := graph.NewTrafficMap()
for id, n := range trafficMap {
if n.NodeType != graph.NodeTypeService {
// if node isRoot then keep it to better understand traffic flow.
if val, ok := n.Metadata["isRoot"]; ok && val.(bool) {
// Remove any edge to a non-service node. The service graph only shows non-service root
// nodes, all other nodes are service nodes. The use case is direct workload-to-workload
// traffic, which is unusual but possible. This can lead to nodes with outgoing traffic
// not represented by an outgoing edge, but that is the nature of the graph type.
serviceEdges := []*graph.Edge{}
for _, e := range n.Edges {
if e.Dest.NodeType == graph.NodeTypeService {
serviceEdges = append(serviceEdges, e)
} else {
log.Debugf("Service graph ignoring non-service root destination [%s]", e.Dest.Workload)
}
}
n.Edges = serviceEdges
reducedTrafficMap[id] = n
}
continue
}
// handle service node, add to reduced traffic map and generate new edges
reducedTrafficMap[id] = n
workloadEdges := n.Edges
n.Edges = []*graph.Edge{}
for _, workloadEdge := range workloadEdges {
workload := workloadEdge.Dest
checkNodeType(graph.NodeTypeWorkload, workload)
for _, serviceEdge := range workload.Edges {
// As above, ignore edges to non-service destinations
if serviceEdge.Dest.NodeType != graph.NodeTypeService {
log.Debugf("Service graph ignoring non-service destination [%s]", serviceEdge.Dest.Workload)
continue
}
childService := serviceEdge.Dest
var edge *graph.Edge
for _, e := range n.Edges {
if childService.ID == e.Dest.ID && serviceEdge.Metadata["protocol"] == e.Metadata["protocol"] {
edge = e
break
}
}
if nil == edge {
n.Edges = append(n.Edges, serviceEdge)
} else {
addServiceGraphTraffic(edge, serviceEdge)
}
}
}
}
return reducedTrafficMap
}
func addServiceGraphTraffic(toEdge, fromEdge *graph.Edge) {
graph.AddServiceGraphTraffic(toEdge, fromEdge)
// handle any appender-based edge data (nothing currently)
// note: We used to average response times of the aggregated edges but realized that
// we can't average quantiles (kiali-2297).
}
func checkNodeType(expected string, n *graph.Node) {
if expected != n.NodeType {
graph.Error(fmt.Sprintf("Expected nodeType [%s] for node [%+v]", expected, n))
}
}
// buildNamespaceTrafficMap returns a map of all namespace nodes (key=id). All
// nodes either directly send and/or receive requests from a node in the namespace.
func buildNamespaceTrafficMap(namespace string, o options.Options, client *prometheus.Client) graph.TrafficMap {
// create map to aggregate traffic by protocol and response code
trafficMap := graph.NewTrafficMap()
requestsMetric := "istio_requests_total"
duration := o.Namespaces[namespace].Duration
// query prometheus for request traffic in three queries:
// 1) query for traffic originating from "unknown" (i.e. the internet).
groupBy := "source_workload_namespace,source_workload,source_app,source_version,destination_service_namespace,destination_service_name,destination_workload,destination_app,destination_version,request_protocol,response_code"
query := fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload="unknown",destination_service_namespace="%s"} [%vs])) by (%s)`,
requestsMetric,
namespace,
int(duration.Seconds()), // range duration for the query
groupBy)
unkVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &unkVector, o)
// 2) query for traffic originating from a workload outside of the namespace. Exclude any "unknown" source telemetry (an unusual corner case)
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace!="%s",source_workload!="unknown",destination_service_namespace="%s"} [%vs])) by (%s)`,
requestsMetric,
namespace,
namespace,
int(duration.Seconds()), // range duration for the query
groupBy)
// fetch the externally originating request traffic time-series
extVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &extVector, o)
// 3) query for traffic originating from a workload inside of the namespace
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s"} [%vs])) by (%s)`,
requestsMetric,
namespace,
int(duration.Seconds()), // range duration for the query
groupBy)
// fetch the internally originating request traffic time-series
intVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &intVector, o)
// istio component telemetry is only reported destination-side, so we must perform additional queries
if o.IncludeIstio {
istioNamespace := config.Get().IstioNamespace
// 4) if the target namespace is istioNamespace re-query for traffic originating from outside (other than unknown, covered in query #1)
if namespace == istioNamespace {
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload!="unknown",source_workload_namespace!="%s",destination_service_namespace="%s"} [%vs])) by (%s)`,
requestsMetric,
namespace,
namespace,
int(duration.Seconds()), // range duration for the query
groupBy)
// fetch the externally originating request traffic time-series
extIstioVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &extIstioVector, o)
}
// 5) supplemental query for traffic originating from a workload inside of the namespace with istioSystem destination
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload_namespace="%s",destination_service_namespace="%s"} [%vs])) by (%s)`,
requestsMetric,
namespace,
istioNamespace,
int(duration.Seconds()), // range duration for the query
groupBy)
// fetch the internally originating request traffic time-series
intIstioVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &intIstioVector, o)
}
// Section for TCP services
tcpMetric := "istio_tcp_sent_bytes_total"
// 1) query for traffic originating from "unknown" (i.e. the internet)
tcpGroupBy := "source_workload_namespace,source_workload,source_app,source_version,destination_workload_namespace,destination_service_name,destination_workload,destination_app,destination_version"
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload="unknown",destination_workload_namespace="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
int(duration.Seconds()), // range duration for the query
tcpGroupBy)
tcpUnkVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMapTcp(trafficMap, &tcpUnkVector, o)
// 2) query for traffic originating from a workload outside of the namespace. Exclude any "unknown" source telemetry (an unusual corner case)
tcpGroupBy = "source_workload_namespace,source_workload,source_app,source_version,destination_service_namespace,destination_service_name,destination_workload,destination_app,destination_version"
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace!="%s",source_workload!="unknown",destination_service_namespace="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
namespace,
int(duration.Seconds()), // range duration for the query
tcpGroupBy)
tcpExtVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMapTcp(trafficMap, &tcpExtVector, o)
// 3) query for traffic originating from a workload inside of the namespace
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
int(duration.Seconds()), // range duration for the query
tcpGroupBy)
tcpInVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMapTcp(trafficMap, &tcpInVector, o)
return trafficMap
}
func populateTrafficMap(trafficMap graph.TrafficMap, vector *model.Vector, o options.Options) {
for _, s := range *vector {
m := s.Metric
lSourceWlNs, sourceWlNsOk := m["source_workload_namespace"]
lSourceWl, sourceWlOk := m["source_workload"]
lSourceApp, sourceAppOk := m["source_app"]
lSourceVer, sourceVerOk := m["source_version"]
lDestSvcNs, destSvcNsOk := m["destination_service_namespace"]
lDestSvcName, destSvcNameOk := m["destination_service_name"]
lDestWl, destWlOk := m["destination_workload"]
lDestApp, destAppOk := m["destination_app"]
lDestVer, destVerOk := m["destination_version"]
lProtocol, protocolOk := m["request_protocol"]
lCode, codeOk := m["response_code"]
if !sourceWlNsOk || !sourceWlOk || !sourceAppOk || !sourceVerOk || !destSvcNsOk || !destSvcNameOk || !destWlOk || !destAppOk || !destVerOk || !protocolOk || !codeOk {
log.Warningf("Skipping %s, missing expected TS labels", m.String())
continue
}
sourceWlNs := string(lSourceWlNs)
sourceWl := string(lSourceWl)
sourceApp := string(lSourceApp)
sourceVer := string(lSourceVer)
destSvcNs := string(lDestSvcNs)
destSvcName := string(lDestSvcName)
destWl := string(lDestWl)
destApp := string(lDestApp)
destVer := string(lDestVer)
protocol := string(lProtocol)
code := string(lCode)
val := float64(s.Value)
if o.InjectServiceNodes {
// don't inject a service node if the dest node is already a service node. Also, we can't inject if destSvcName is not set.
destSvcNameOk = graph.IsOK(destSvcName)
_, destNodeType := graph.Id(destSvcNs, destWl, destApp, destVer, destSvcName, o.GraphType)
if destSvcNameOk && destNodeType != graph.NodeTypeService {
addTraffic(trafficMap, val, protocol, code, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, "", "", "", destSvcName, o)
addTraffic(trafficMap, val, protocol, code, destSvcNs, "", "", "", destSvcName, destSvcNs, destWl, destApp, destVer, destSvcName, o)
} else {
addTraffic(trafficMap, val, protocol, code, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName, o)
}
} else {
addTraffic(trafficMap, val, protocol, code, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName, o)
}
}
}
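// Illustration (hypothetical label values, not part of the original source): with
// o.InjectServiceNodes enabled, a single reported series such as
//   source_workload=productpage-v1 -> destination_service_name=reviews, destination_workload=reviews-v2
// is recorded above as two hops, source -> service and service -> workload, via the
// two addTraffic calls, so the graph shows the service node as an intermediate.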
func addTraffic(trafficMap graph.TrafficMap, val float64, protocol, code, sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, destSvcNs, destWl, destApp, destVer, destSvcName string, o options.Options) (source, dest *graph.Node) {
source, sourceFound := addNode(trafficMap, sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, o)
dest, destFound := addNode(trafficMap, destSvcNs, destWl, destApp, destVer, destSvcName, o)
addToDestServices(dest.Metadata, destSvcName)
var edge *graph.Edge
for _, e := range source.Edges {
if dest.ID == e.Dest.ID && e.Metadata["protocol"] == protocol {
edge = e
break
}
}
if nil == edge {
edge = source.AddEdge(dest)
edge.Metadata["protocol"] = protocol
}
// A workload may mistakenly have multiple app and/or version label values.
// This is a misconfiguration we need to handle. See Kiali-1309.
if sourceFound {
handleMisconfiguredLabels(source, sourceApp, sourceVer, val, o)
}
if destFound {
handleMisconfiguredLabels(dest, destApp, destVer, val, o)
}
graph.AddToMetadata(protocol, val, code, source.Metadata, dest.Metadata, edge.Metadata)
return source, dest
}
func populateTrafficMapTcp(trafficMap graph.TrafficMap, vector *model.Vector, o options.Options) {
for _, s := range *vector {
m := s.Metric
lSourceWlNs, sourceWlNsOk := m["source_workload_namespace"]
lSourceWl, sourceWlOk := m["source_workload"]
lSourceApp, sourceAppOk := m["source_app"]
lSourceVer, sourceVerOk := m["source_version"]
lDestSvcNs, destSvcNsOk := m["destination_service_namespace"]
lDestSvcName, destSvcNameOk := m["destination_service_name"]
lDestWl, destWlOk := m["destination_workload"]
lDestApp, destAppOk := m["destination_app"]
lDestVer, destVerOk := m["destination_version"]
// TCP queries don't use destination_service_namespace for the unknown node.
// Check for this case and fall back to destination_workload_namespace.
if !destSvcNsOk {
lDestSvcNs, destSvcNsOk = m["destination_workload_namespace"]
}
if !sourceWlNsOk || !sourceWlOk || !sourceAppOk || !sourceVerOk || !destSvcNsOk || !destSvcNameOk || !destWlOk || !destAppOk || !destVerOk {
log.Warningf("Skipping %s, missing expected TS labels", m.String())
continue
}
sourceWlNs := string(lSourceWlNs)
sourceWl := string(lSourceWl)
sourceApp := string(lSourceApp)
sourceVer := string(lSourceVer)
destSvcNs := string(lDestSvcNs)
destSvcName := string(lDestSvcName)
destWl := string(lDestWl)
destApp := string(lDestApp)
destVer := string(lDestVer)
val := float64(s.Value)
if o.InjectServiceNodes {
// don't inject a service node if the dest node is already a service node. Also, we can't inject if destSvcName is not set.
destSvcNameOk = graph.IsOK(destSvcName)
_, destNodeType := graph.Id(destSvcNs, destWl, destApp, destVer, destSvcName, o.GraphType)
if destSvcNameOk && destNodeType != graph.NodeTypeService {
addTcpTraffic(trafficMap, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, "", "", "", destSvcName, o)
addTcpTraffic(trafficMap, val, destSvcNs, "", "", "", destSvcName, destSvcNs, destWl, destApp, destVer, destSvcName, o)
} else {
addTcpTraffic(trafficMap, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName, o)
}
} else {
addTcpTraffic(trafficMap, val, sourceWlNs, sourceWl, sourceApp, sourceVer, "", destSvcNs, destWl, destApp, destVer, destSvcName, o)
}
}
}
func addTcpTraffic(trafficMap graph.TrafficMap, val float64, sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, destSvcNs, destWl, destApp, destVer, destSvcName string, o options.Options) (source, dest *graph.Node) {
source, sourceFound := addNode(trafficMap, sourceWlNs, sourceWl, sourceApp, sourceVer, sourceSvcName, o)
dest, destFound := addNode(trafficMap, destSvcNs, destWl, destApp, destVer, destSvcName, o)
addToDestServices(dest.Metadata, destSvcName)
var edge *graph.Edge
for _, e := range source.Edges {
if dest.ID == e.Dest.ID && e.Metadata["protocol"] == "tcp" {
edge = e
break
}
}
if nil == edge {
edge = source.AddEdge(dest)
edge.Metadata["protocol"] = "tcp"
}
// A workload may mistakenly have multiple app and/or version label values.
// This is a misconfiguration we need to handle. See Kiali-1309.
if sourceFound {
handleMisconfiguredLabels(source, sourceApp, sourceVer, val, o)
}
if destFound {
handleMisconfiguredLabels(dest, destApp, destVer, val, o)
}
graph.AddToMetadata("tcp", val, "", source.Metadata, dest.Metadata, edge.Metadata)
return source, dest
}
func addToDestServices(md map[string]interface{}, destService string) {
destServices, ok := md["destServices"]
if !ok {
destServices = make(map[string]bool)
md["destServices"] = destServices
}
destServices.(map[string]bool)[destService] = true
}
func handleMisconfiguredLabels(node *graph.Node, app, version string, rate float64, o options.Options) {
isVersionedAppGraph := o.VendorOptions.GraphType == graph.GraphTypeVersionedApp
isWorkloadNode := node.NodeType == graph.NodeTypeWorkload
isVersionedAppNode := node.NodeType == graph.NodeTypeApp && isVersionedAppGraph
if isWorkloadNode || isVersionedAppNode {
labels := []string{}
if node.App != app {
labels = append(labels, "app")
}
if node.Version != version {
labels = append(labels, "version")
}
// prefer the labels of an active time series as often the other labels are inactive
if len(labels) > 0 {
node.Metadata["isMisconfigured"] = fmt.Sprintf("labels=%v", labels)
if rate > 0.0 {
node.App = app
node.Version = version
}
}
}
}
func addNode(trafficMap graph.TrafficMap, namespace, workload, app, version, service string, o options.Options) (*graph.Node, bool) {
id, nodeType := graph.Id(namespace, workload, app, version, service, o.GraphType)
node, found := trafficMap[id]
if !found {
newNode := graph.NewNodeExplicit(id, namespace, workload, app, version, service, nodeType, o.GraphType)
node = &newNode
trafficMap[id] = node
}
return node, found
}
// GraphNode is a REST http.HandlerFunc handling node-detail graph
// config generation.
func GraphNode(w http.ResponseWriter, r *http.Request) {
defer handlePanic(w)
client, err := prometheus.NewClient()
graph.CheckError(err)
graphNode(w, r, client)
}
// graphNode provides a testing hook that can supply a mock client
func graphNode(w http.ResponseWriter, r *http.Request, client *prometheus.Client) {
o := options.NewOptions(r)
switch o.Vendor {
case "cytoscape":
default:
graph.Error(fmt.Sprintf("Vendor [%s] not supported", o.Vendor))
}
if len(o.Namespaces) != 1 {
graph.Error("Node graph does not support the 'namespaces' query parameter or the 'all' namespace")
}
// time how long it takes to generate this graph
promtimer := internalmetrics.GetGraphGenerationTimePrometheusTimer(o.GetGraphKind(), o.GraphType, o.InjectServiceNodes)
defer promtimer.ObserveDuration()
n := graph.NewNode(o.NodeOptions.Namespace, o.NodeOptions.Workload, o.NodeOptions.App, o.NodeOptions.Version, o.NodeOptions.Service, o.GraphType)
log.Debugf("Build graph for node [%+v]", n)
trafficMap := buildNodeTrafficMap(o.NodeOptions.Namespace, n, o, client)
globalInfo := appender.NewGlobalInfo()
namespaceInfo := appender.NewNamespaceInfo(o.NodeOptions.Namespace)
for _, a := range o.Appenders {
appenderTimer := internalmetrics.GetGraphAppenderTimePrometheusTimer(a.Name())
a.AppendGraph(trafficMap, globalInfo, namespaceInfo)
appenderTimer.ObserveDuration()
}
// The appenders can add/remove/alter nodes. After the manipulations are complete
// we can make some final adjustments:
// - mark the outsiders (i.e. nodes not in the requested namespaces)
// - mark the traffic generators
markOutsideOrInaccessible(trafficMap, o)
markTrafficGenerators(trafficMap)
// Note that this is where we would call reduceToServiceGraph for graphTypeService but
// the current decision is to not reduce the node graph to provide more detail. This may be
// confusing to users, we'll see...
generateGraph(trafficMap, w, o)
// update metrics
internalmetrics.SetGraphNodes(o.GetGraphKind(), o.GraphType, o.InjectServiceNodes, len(trafficMap))
}
// buildNodeTrafficMap returns a map of all nodes requesting or requested by the target node (key=id).
func buildNodeTrafficMap(namespace string, n graph.Node, o options.Options, client *prometheus.Client) graph.TrafficMap {
httpMetric := "istio_requests_total"
interval := o.Namespaces[namespace].Duration
// create map to aggregate traffic by response code
trafficMap := graph.NewTrafficMap()
// query prometheus for request traffic in two queries:
// 1) query for incoming traffic
var query string
groupBy := "source_workload_namespace,source_workload,source_app,source_version,destination_service_namespace,destination_service_name,destination_workload,destination_app,destination_version,request_protocol,response_code"
switch n.NodeType {
case graph.NodeTypeWorkload:
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",destination_workload_namespace="%s",destination_workload="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.Workload,
int(interval.Seconds()), // range duration for the query
groupBy)
case graph.NodeTypeApp:
if graph.IsOK(n.Version) {
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",destination_service_namespace="%s",destination_app="%s",destination_version="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.App,
n.Version,
int(interval.Seconds()), // range duration for the query
groupBy)
} else {
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",destination_service_namespace="%s",destination_app="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.App,
int(interval.Seconds()), // range duration for the query
groupBy)
}
case graph.NodeTypeService:
// for service requests we want source reporting to capture source-reported errors. But unknown only generates destination telemetry. So
// perform a special query just to capture [successful] request telemetry from unknown.
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload="unknown",destination_service_namespace="%s",destination_service_name="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.Service,
int(interval.Seconds()), // range duration for the query
groupBy)
vector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &vector, o)
query = fmt.Sprintf(`sum(rate(%s{reporter="source",destination_service_namespace="%s",destination_service_name="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.Service,
int(interval.Seconds()), // range duration for the query
groupBy)
default:
graph.Error(fmt.Sprintf("NodeType [%s] not supported", n.NodeType))
}
inVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &inVector, o)
// 2) query for outbound traffic
switch n.NodeType {
case graph.NodeTypeWorkload:
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s",source_workload="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.Workload,
int(interval.Seconds()), // range duration for the query
groupBy)
case graph.NodeTypeApp:
if graph.IsOK(n.Version) {
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s",source_app="%s",source_version="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.App,
n.Version,
int(interval.Seconds()), // range duration for the query
groupBy)
} else {
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s",source_app="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.App,
int(interval.Seconds()), // range duration for the query
groupBy)
}
case graph.NodeTypeService:
query = ""
default:
graph.Error(fmt.Sprintf("NodeType [%s] not supported", n.NodeType))
}
outVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &outVector, o)
// istio component telemetry is only reported destination-side, so we must perform additional queries
if o.IncludeIstio {
istioNamespace := config.Get().IstioNamespace
// 3) supplemental query for outbound traffic to the istio namespace
switch n.NodeType {
case graph.NodeTypeWorkload:
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload_namespace="%s",source_workload="%s",destination_service_namespace="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.Workload,
istioNamespace,
int(interval.Seconds()), // range duration for the query
groupBy)
case graph.NodeTypeApp:
if graph.IsOK(n.Version) {
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload_namespace="%s",source_app="%s",source_version="%s",destination_service_namespace="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.App,
n.Version,
istioNamespace,
int(interval.Seconds()), // range duration for the query
groupBy)
} else {
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",source_workload_namespace="%s",source_app="%s",destination_service_namespace="%s"} [%vs])) by (%s)`,
httpMetric,
namespace,
n.App,
istioNamespace,
int(interval.Seconds()), // range duration for the query
groupBy)
}
case graph.NodeTypeService:
query = fmt.Sprintf(`sum(rate(%s{reporter="destination",destination_service_namespace="%s",destination_service_name="%s"} [%vs])) by (%s)`,
httpMetric,
istioNamespace,
n.Service,
int(interval.Seconds()), // range duration for the query
groupBy)
default:
graph.Error(fmt.Sprintf("NodeType [%s] not supported", n.NodeType))
}
outIstioVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMap(trafficMap, &outIstioVector, o)
}
// Section for TCP services
tcpMetric := "istio_tcp_sent_bytes_total"
tcpGroupBy := "source_workload_namespace,source_workload,source_app,source_version,destination_service_namespace,destination_service_name,destination_workload,destination_app,destination_version"
switch n.NodeType {
case graph.NodeTypeWorkload:
query = fmt.Sprintf(`sum(rate(%s{reporter="source",destination_workload_namespace="%s",destination_workload="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
n.Workload,
int(interval.Seconds()), // range duration for the query
tcpGroupBy)
case graph.NodeTypeApp:
if graph.IsOK(n.Version) {
query = fmt.Sprintf(`sum(rate(%s{reporter="source",destination_service_namespace="%s",destination_app="%s",destination_version="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
n.App,
n.Version,
int(interval.Seconds()), // range duration for the query
tcpGroupBy)
} else {
query = fmt.Sprintf(`sum(rate(%s{reporter="source",destination_service_namespace="%s",destination_app="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
n.App,
int(interval.Seconds()), // range duration for the query
tcpGroupBy)
}
case graph.NodeTypeService:
// TODO: Do we need to handle requests from unknown in a special way (like in HTTP above)? Not sure how tcp is reported from unknown.
query = fmt.Sprintf(`sum(rate(%s{reporter="source",destination_service_namespace="%s",destination_service_name="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
n.Service,
int(interval.Seconds()), // range duration for the query
tcpGroupBy)
default:
graph.Error(fmt.Sprintf("NodeType [%s] not supported", n.NodeType))
}
tcpInVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMapTcp(trafficMap, &tcpInVector, o)
// 2) query for outbound traffic
switch n.NodeType {
case graph.NodeTypeWorkload:
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s",source_workload="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
n.Workload,
int(interval.Seconds()), // range duration for the query
tcpGroupBy)
case graph.NodeTypeApp:
if graph.IsOK(n.Version) {
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s",source_app="%s",source_version="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
n.App,
n.Version,
int(interval.Seconds()), // range duration for the query
tcpGroupBy)
} else {
query = fmt.Sprintf(`sum(rate(%s{reporter="source",source_workload_namespace="%s",source_app="%s"} [%vs])) by (%s)`,
tcpMetric,
namespace,
n.App,
int(interval.Seconds()), // range duration for the query
tcpGroupBy)
}
case graph.NodeTypeService:
query = ""
default:
graph.Error(fmt.Sprintf("NodeType [%s] not supported", n.NodeType))
}
tcpOutVector := promQuery(query, time.Unix(o.QueryTime, 0), client.API())
populateTrafficMapTcp(trafficMap, &tcpOutVector, o)
return trafficMap
}
func generateGraph(trafficMap graph.TrafficMap, w http.ResponseWriter, o options.Options) {
log.Debugf("Generating config for [%s] service graph...", o.Vendor)
promtimer := internalmetrics.GetGraphMarshalTimePrometheusTimer(o.GetGraphKind(), o.GraphType, o.InjectServiceNodes)
defer promtimer.ObserveDuration()
var vendorConfig interface{}
switch o.Vendor {
case "cytoscape":
vendorConfig = cytoscape.NewConfig(trafficMap, o.VendorOptions)
default:
graph.Error(fmt.Sprintf("Vendor [%s] not supported", o.Vendor))
}
log.Debugf("Done generating config for [%s] service graph.", o.Vendor)
RespondWithJSONIndent(w, http.StatusOK, vendorConfig)
}
func promQuery(query string, queryTime time.Time, api v1.API) model.Vector {
if "" == query {
return model.Vector{}
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// wrap with a round() to be in line with metrics api
query = fmt.Sprintf("round(%s,0.001)", query)
log.Debugf("Graph query:\n%s@time=%v (now=%v, %v)\n", query, queryTime.Format(graph.TF), time.Now().Format(graph.TF), queryTime.Unix())
promtimer := internalmetrics.GetPrometheusProcessingTimePrometheusTimer("Graph-Generation")
value, err := api.Query(ctx, query, queryTime)
graph.CheckError(err)
promtimer.ObserveDuration() // notice we only collect metrics for successful prom queries
switch t := value.Type(); t {
case model.ValVector: // Instant Vector
return value.(model.Vector)
default:
graph.Error(fmt.Sprintf("No handling for type %v!", t))
}
return nil
}
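// Illustration (hypothetical values, not part of the original source): for a workload
// node query over a 600s window, promQuery receives something like
//   sum(rate(istio_requests_total{reporter="destination",destination_workload_namespace="bookinfo",destination_workload="reviews-v1"} [600s])) by (...)
// and, after the wrapping above, issues it to Prometheus as
//   round(sum(rate(...)) by (...),0.001)
// to match the rounding used by the metrics API.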
func handlePanic(w http.ResponseWriter) {
code := http.StatusInternalServerError
if r := recover(); r != nil {
var message string
switch r.(type) {
case string:
message = r.(string)
case error:
message = r.(error).Error()
case func() string:
message = r.(func() string)()
case graph.Response:
message = r.(graph.Response).Message
code = r.(graph.Response).Code
default:
message = fmt.Sprintf("%v", r)
}
if code == http.StatusInternalServerError {
log.Errorf("%s: %s", message, debug.Stack())
}
RespondWithError(w, code, message)
}
}
// some debugging utils
//func ids(r *[]graph.Node) []string {
// s := []string{}
// for _, r := range *r {
// s = append(s, r.ID)
// }
// return s
//}
//func keys(m map[string]*graph.Node) []string {
// s := []string{}
// for k := range m {
// s = append(s, k)
// }
// return s
//}

vendor/github.com/kiali/kiali/handlers/health.go generated vendored Normal file

@@ -0,0 +1,260 @@
package handlers
import (
"net/http"
"time"
"github.com/gorilla/mux"
"k8s.io/apimachinery/pkg/api/errors"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/util"
)
const defaultHealthRateInterval = "10m"
// NamespaceHealth is the API handler to get the health of every app, service, or workload in the given namespace
func NamespaceHealth(w http.ResponseWriter, r *http.Request) {
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
p := namespaceHealthParams{}
if ok, err := p.extract(r); !ok {
// Bad request
RespondWithError(w, http.StatusBadRequest, err)
return
}
// Adjust rate interval
rateInterval, err := adjustRateInterval(business, p.Namespace, p.RateInterval, p.QueryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Adjust rate interval error: "+err.Error())
return
}
switch p.Type {
case "app":
health, err := business.Health.GetNamespaceAppHealth(p.Namespace, rateInterval, p.QueryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Error while fetching app health: "+err.Error())
return
}
RespondWithJSON(w, http.StatusOK, health)
case "service":
health, err := business.Health.GetNamespaceServiceHealth(p.Namespace, rateInterval, p.QueryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Error while fetching service health: "+err.Error())
return
}
RespondWithJSON(w, http.StatusOK, health)
case "workload":
health, err := business.Health.GetNamespaceWorkloadHealth(p.Namespace, rateInterval, p.QueryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Error while fetching workload health: "+err.Error())
return
}
RespondWithJSON(w, http.StatusOK, health)
}
}
// AppHealth is the API handler to get health of a single app
func AppHealth(w http.ResponseWriter, r *http.Request) {
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
p := appHealthParams{}
p.extract(r)
rateInterval, err := adjustRateInterval(business, p.Namespace, p.RateInterval, p.QueryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Adjust rate interval error: "+err.Error())
return
}
health, err := business.Health.GetAppHealth(p.Namespace, p.App, rateInterval, p.QueryTime)
handleHealthResponse(w, health, err)
}
// WorkloadHealth is the API handler to get health of a single workload
func WorkloadHealth(w http.ResponseWriter, r *http.Request) {
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
p := workloadHealthParams{}
p.extract(r)
rateInterval, err := adjustRateInterval(business, p.Namespace, p.RateInterval, p.QueryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Adjust rate interval error: "+err.Error())
return
}
p.RateInterval = rateInterval
health, err := business.Health.GetWorkloadHealth(p.Namespace, p.Workload, rateInterval, p.QueryTime)
handleHealthResponse(w, health, err)
}
// ServiceHealth is the API handler to get health of a single service
func ServiceHealth(w http.ResponseWriter, r *http.Request) {
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
p := serviceHealthParams{}
p.extract(r)
rateInterval, err := adjustRateInterval(business, p.Namespace, p.RateInterval, p.QueryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Adjust rate interval error: "+err.Error())
return
}
health, err := business.Health.GetServiceHealth(p.Namespace, p.Service, rateInterval, p.QueryTime)
handleHealthResponse(w, health, err)
}
func handleHealthResponse(w http.ResponseWriter, health interface{}, err error) {
if err != nil {
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
} else if statusError, isStatus := err.(*errors.StatusError); isStatus {
RespondWithError(w, http.StatusInternalServerError, statusError.ErrStatus.Message)
} else {
RespondWithError(w, http.StatusInternalServerError, err.Error())
}
} else {
RespondWithJSON(w, http.StatusOK, health)
}
}
type baseHealthParams struct {
// The namespace scope
//
// in: path
Namespace string `json:"namespace"`
// The rate interval used for fetching error rate
//
// in: query
// default: 10m
RateInterval string `json:"rateInterval"`
// The time to use for the prometheus query
QueryTime time.Time
}
func (p *baseHealthParams) baseExtract(r *http.Request, vars map[string]string) {
p.RateInterval = defaultHealthRateInterval
p.QueryTime = util.Clock.Now()
queryParams := r.URL.Query()
if rateIntervals, ok := queryParams["rateInterval"]; ok && len(rateIntervals) > 0 {
p.RateInterval = rateIntervals[0]
}
p.Namespace = vars["namespace"]
}
// namespaceHealthParams holds the path and query parameters for NamespaceHealth
//
// swagger:parameters namespaceHealth
type namespaceHealthParams struct {
baseHealthParams
// The type of health, "app", "service" or "workload".
//
// in: query
// pattern: ^(app|service|workload)$
// default: app
Type string `json:"type"`
}
func (p *namespaceHealthParams) extract(r *http.Request) (bool, string) {
vars := mux.Vars(r)
p.baseExtract(r, vars)
p.Type = "app"
queryParams := r.URL.Query()
if healthTypes, ok := queryParams["type"]; ok && len(healthTypes) > 0 {
if healthTypes[0] != "app" && healthTypes[0] != "service" && healthTypes[0] != "workload" {
// Bad request
return false, "Bad request, query parameter 'type' must be one of ['app','service','workload']"
}
p.Type = healthTypes[0]
}
return true, ""
}
// appHealthParams holds the path and query parameters for AppHealth
//
// swagger:parameters appHealth
type appHealthParams struct {
baseHealthParams
// The target app
//
// in: path
App string `json:"app"`
}
func (p *appHealthParams) extract(r *http.Request) {
vars := mux.Vars(r)
p.baseExtract(r, vars)
p.App = vars["app"]
}
// serviceHealthParams holds the path and query parameters for ServiceHealth
//
// swagger:parameters serviceHealth
type serviceHealthParams struct {
baseHealthParams
// The target service
//
// in: path
Service string `json:"service"`
}
func (p *serviceHealthParams) extract(r *http.Request) {
vars := mux.Vars(r)
p.baseExtract(r, vars)
p.Service = vars["service"]
}
// workloadHealthParams holds the path and query parameters for WorkloadHealth
//
// swagger:parameters workloadHealth
type workloadHealthParams struct {
baseHealthParams
// The target workload
//
// in: path
Workload string `json:"workload"`
}
func (p *workloadHealthParams) extract(r *http.Request) {
vars := mux.Vars(r)
p.baseExtract(r, vars)
p.Workload = vars["workload"]
}
func adjustRateInterval(business *business.Layer, namespace, rateInterval string, queryTime time.Time) (string, error) {
namespaceInfo, err := business.Namespace.GetNamespace(namespace)
if err != nil {
return "", err
}
interval, err := util.AdjustRateInterval(namespaceInfo.CreationTimestamp, queryTime, rateInterval)
if err != nil {
return "", err
}
if interval != rateInterval {
log.Debugf("Rate interval for namespace %v was adjusted to %v (original = %v, query time = %v, namespace created = %v)",
namespace, interval, rateInterval, queryTime, namespaceInfo.CreationTimestamp)
}
return interval, nil
}
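// Illustration (hypothetical values, not part of the original source): if a namespace
// was created 5 minutes before queryTime and the requested rateInterval is "10m",
// util.AdjustRateInterval is expected to shrink the interval (e.g. to roughly "5m")
// so the rate query does not reach back before the namespace existed; the debug log
// above records any such adjustment.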

vendor/github.com/kiali/kiali/handlers/istio_config.go generated vendored Normal file

@@ -0,0 +1,350 @@
package handlers
import (
"io/ioutil"
"net/http"
"strings"
"sync"
"github.com/gorilla/mux"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"k8s.io/apimachinery/pkg/api/errors"
)
func IstioConfigList(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
namespace := params["namespace"]
query := r.URL.Query()
objects := ""
parsedTypes := make([]string, 0)
if _, ok := query["objects"]; ok {
objects = strings.ToLower(query.Get("objects"))
if len(objects) > 0 {
parsedTypes = strings.Split(objects, ",")
}
}
includeValidations := false
if _, found := query["validate"]; found {
includeValidations = true
}
criteria := parseCriteria(namespace, objects)
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
var istioConfigValidations models.IstioValidations
wg := sync.WaitGroup{}
if includeValidations {
wg.Add(1)
go func(namespace string, istioConfigValidations *models.IstioValidations, err *error) {
defer wg.Done()
// We don't filter by objects when calling validations, because certain validations require fetching all types to get the correct errors
istioConfigValidationResults, errValidations := business.Validations.GetValidations(namespace, "")
if errValidations != nil && *err == nil {
*err = errValidations
} else {
if len(parsedTypes) > 0 {
istioConfigValidationResults = istioConfigValidationResults.FilterByTypes(parsedTypes)
}
*istioConfigValidations = istioConfigValidationResults
}
}(namespace, &istioConfigValidations, &err)
}
istioConfig, err := business.IstioConfig.GetIstioConfigList(criteria)
if includeValidations {
// Add validation results to the IstioConfigList once they're available (previously done in the UI layer)
wg.Wait()
istioConfig.IstioValidations = istioConfigValidations
}
if err != nil {
log.Error(err)
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, istioConfig)
}
func checkType(types []string, name string) bool {
for _, typeName := range types {
if typeName == name {
return true
}
}
return false
}
func parseCriteria(namespace string, objects string) business.IstioConfigCriteria {
defaultInclude := objects == ""
criteria := business.IstioConfigCriteria{}
criteria.Namespace = namespace
criteria.IncludeGateways = defaultInclude
criteria.IncludeVirtualServices = defaultInclude
criteria.IncludeDestinationRules = defaultInclude
criteria.IncludeServiceEntries = defaultInclude
criteria.IncludeRules = defaultInclude
criteria.IncludeAdapters = defaultInclude
criteria.IncludeTemplates = defaultInclude
criteria.IncludeQuotaSpecs = defaultInclude
criteria.IncludeQuotaSpecBindings = defaultInclude
criteria.IncludePolicies = defaultInclude
criteria.IncludeMeshPolicies = defaultInclude
criteria.IncludeClusterRbacConfigs = defaultInclude
criteria.IncludeServiceRoles = defaultInclude
criteria.IncludeServiceRoleBindings = defaultInclude
if defaultInclude {
return criteria
}
types := strings.Split(objects, ",")
if checkType(types, business.Gateways) {
criteria.IncludeGateways = true
}
if checkType(types, business.VirtualServices) {
criteria.IncludeVirtualServices = true
}
if checkType(types, business.DestinationRules) {
criteria.IncludeDestinationRules = true
}
if checkType(types, business.ServiceEntries) {
criteria.IncludeServiceEntries = true
}
if checkType(types, business.Rules) {
criteria.IncludeRules = true
}
if checkType(types, business.Adapters) {
criteria.IncludeAdapters = true
}
if checkType(types, business.Templates) {
criteria.IncludeTemplates = true
}
if checkType(types, business.QuotaSpecs) {
criteria.IncludeQuotaSpecs = true
}
if checkType(types, business.QuotaSpecBindings) {
criteria.IncludeQuotaSpecBindings = true
}
if checkType(types, business.Policies) {
criteria.IncludePolicies = true
}
if checkType(types, business.MeshPolicies) {
criteria.IncludeMeshPolicies = true
}
if checkType(types, business.ClusterRbacConfigs) {
criteria.IncludeClusterRbacConfigs = true
}
if checkType(types, business.ServiceRoles) {
criteria.IncludeServiceRoles = true
}
if checkType(types, business.ServiceRoleBindings) {
criteria.IncludeServiceRoleBindings = true
}
return criteria
}
func IstioConfigDetails(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
namespace := params["namespace"]
objectType := params["object_type"]
objectSubtype := params["object_subtype"]
object := params["object"]
includeValidations := false
query := r.URL.Query()
if _, found := query["validate"]; found {
includeValidations = true
}
if !checkObjectType(objectType) {
RespondWithError(w, http.StatusBadRequest, "Object type not managed: "+objectType)
return
}
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
var istioConfigValidations models.IstioValidations
wg := sync.WaitGroup{}
if includeValidations {
wg.Add(1)
go func(istioConfigValidations *models.IstioValidations, err *error) {
defer wg.Done()
istioConfigValidationResults, errValidations := business.Validations.GetIstioObjectValidations(namespace, objectType, object)
if errValidations != nil && *err == nil {
*err = errValidations
} else {
*istioConfigValidations = istioConfigValidationResults
}
}(&istioConfigValidations, &err)
}
istioConfigDetails, err := business.IstioConfig.GetIstioConfigDetails(namespace, objectType, objectSubtype, object)
if includeValidations && err == nil {
wg.Wait()
if validation, found := istioConfigValidations[models.IstioValidationKey{ObjectType: models.ObjectTypeSingular[objectType], Name: object}]; found {
istioConfigDetails.IstioValidation = validation
}
}
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
return
} else if statusError, isStatus := err.(*errors.StatusError); isStatus {
RespondWithError(w, http.StatusInternalServerError, statusError.ErrStatus.Message)
return
} else if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, istioConfigDetails)
}
func IstioConfigDelete(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
namespace := params["namespace"]
objectType := params["object_type"]
objectSubtype := params["object_subtype"]
object := params["object"]
api := business.GetIstioAPI(objectType)
if api == "" {
RespondWithError(w, http.StatusBadRequest, "Object type not managed: "+objectType)
return
}
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
err = business.IstioConfig.DeleteIstioConfigDetail(api, namespace, objectType, objectSubtype, object)
if err != nil {
log.Error(err)
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
} else {
RespondWithError(w, http.StatusInternalServerError, err.Error())
}
} else {
audit(r, "DELETE on Namespace: "+namespace+" Type: "+objectType+" Subtype: "+objectSubtype+" Name: "+object)
RespondWithCode(w, http.StatusOK)
}
}
func IstioConfigUpdate(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
namespace := params["namespace"]
objectType := params["object_type"]
objectSubtype := params["object_subtype"]
object := params["object"]
api := business.GetIstioAPI(objectType)
if api == "" {
RespondWithError(w, http.StatusBadRequest, "Object type not managed: "+objectType)
return
}
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
body, err := ioutil.ReadAll(r.Body)
if err != nil {
RespondWithError(w, http.StatusBadRequest, "Update request with bad update patch: "+err.Error())
return
}
jsonPatch := string(body)
updatedConfigDetails, err := business.IstioConfig.UpdateIstioConfigDetail(api, namespace, objectType, objectSubtype, object, jsonPatch)
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
return
} else if statusError, isStatus := err.(*errors.StatusError); isStatus {
RespondWithError(w, http.StatusInternalServerError, statusError.ErrStatus.Message)
return
} else if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
audit(r, "UPDATE on Namespace: "+namespace+" Type: "+objectType+" Subtype: "+objectSubtype+" Name: "+object+" Patch: "+jsonPatch)
RespondWithJSON(w, http.StatusOK, updatedConfigDetails)
}
func IstioConfigCreate(w http.ResponseWriter, r *http.Request) {
// Feels kinda replicated for multiple functions..
params := mux.Vars(r)
namespace := params["namespace"]
objectType := params["object_type"]
objectSubtype := params["object_subtype"]
api := business.GetIstioAPI(objectType)
if api == "" {
RespondWithError(w, http.StatusBadRequest, "Object type not managed: "+objectType)
return
}
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
body, err := ioutil.ReadAll(r.Body)
if err != nil {
RespondWithError(w, http.StatusBadRequest, "Create request could not be read: "+err.Error())
return
}
createdConfigDetails, err := business.IstioConfig.CreateIstioConfigDetail(api, namespace, objectType, objectSubtype, body)
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
return
} else if statusError, isStatus := err.(*errors.StatusError); isStatus {
RespondWithError(w, http.StatusInternalServerError, statusError.ErrStatus.Message)
return
} else if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
audit(r, "CREATE on Namespace: "+namespace+" Type: "+objectType+" Subtype: "+objectSubtype+" Object: "+string(body))
RespondWithJSON(w, http.StatusOK, createdConfigDetails)
}
func checkObjectType(objectType string) bool {
return business.GetIstioAPI(objectType) != ""
}
func audit(r *http.Request, message string) {
if config.Get().Server.AuditLog {
user := r.Header.Get("Kiali-User")
log.Infof("AUDIT User [%s] Msg [%s]", user, message)
}
}

vendor/github.com/kiali/kiali/handlers/jaeger.go generated vendored Normal file

@@ -0,0 +1,31 @@
package handlers
import (
"net/http"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/models"
)
// GetJaegerInfo provides the proxy Jaeger URL
func GetJaegerInfo(w http.ResponseWriter, r *http.Request) {
jaegerConfig := config.Get().ExternalServices.Jaeger
info := models.JaegerInfo{
URL: jaegerConfig.URL,
}
// Check if URL is in the configuration
if info.URL == "" {
RespondWithError(w, http.StatusNotFound, "You need to set the Jaeger URL configuration.")
return
}
// Check if URL is valid
_, err := validateURL(info.URL)
if err != nil {
RespondWithError(w, http.StatusNotAcceptable, "The Jaeger URL set in the configuration is not a valid URL: "+err.Error())
return
}
RespondWithJSON(w, http.StatusOK, info)
}
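`GetJaegerInfo` relies on `validateURL` (defined in handlers/utils.go below), which is a thin wrapper over `url.ParseRequestURI` and therefore only accepts absolute URLs or rooted paths. A sketch of that behavior, with an assumed example hostname:

```go
package main

import (
	"fmt"
	"net/url"
)

// validateURL mirrors the helper in handlers/utils.go: url.ParseRequestURI
// accepts absolute URLs (and rooted paths) and rejects scheme-less strings,
// which is how GetJaegerInfo catches malformed Jaeger URLs from the config.
func validateURL(serviceURL string) (*url.URL, error) {
	return url.ParseRequestURI(serviceURL)
}

func main() {
	// "jaeger-query.istio-system:16686" is a hypothetical in-cluster address.
	if _, err := validateURL("http://jaeger-query.istio-system:16686"); err == nil {
		fmt.Println("absolute URL accepted")
	}
	if _, err := validateURL("jaeger-query.istio-system"); err != nil {
		fmt.Println("scheme-less URL rejected")
	}
}
```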

vendor/github.com/kiali/kiali/handlers/metrics.go generated vendored Normal file

@@ -0,0 +1,150 @@
package handlers
import (
"errors"
"fmt"
"net/http"
"net/url"
"strconv"
"time"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
"github.com/kiali/kiali/util"
)
func extractIstioMetricsQueryParams(r *http.Request, q *prometheus.IstioMetricsQuery, namespaceInfo *models.Namespace) error {
q.FillDefaults()
queryParams := r.URL.Query()
if filters, ok := queryParams["filters[]"]; ok && len(filters) > 0 {
q.Filters = filters
}
dir := queryParams.Get("direction")
if dir != "" {
if dir != "inbound" && dir != "outbound" {
return errors.New("Bad request, query parameter 'direction' must be either 'inbound' or 'outbound'")
}
q.Direction = dir
}
requestProtocol := queryParams.Get("requestProtocol")
if requestProtocol != "" {
q.RequestProtocol = requestProtocol
}
reporter := queryParams.Get("reporter")
if reporter != "" {
if reporter != "source" && reporter != "destination" {
return errors.New("Bad request, query parameter 'reporter' must be either 'source' or 'destination'")
}
q.Reporter = reporter
}
return extractBaseMetricsQueryParams(queryParams, &q.BaseMetricsQuery, namespaceInfo)
}
func extractCustomMetricsQueryParams(r *http.Request, q *prometheus.CustomMetricsQuery, namespaceInfo *models.Namespace) error {
q.FillDefaults()
queryParams := r.URL.Query()
q.Version = queryParams.Get("version")
op := queryParams.Get("rawDataAggregator")
// Explicit white-listing operators to prevent any kind of injection
// For a list of operators, see https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators
if op == "sum" || op == "min" || op == "max" || op == "avg" || op == "stddev" || op == "stdvar" {
q.RawDataAggregator = op
}
return extractBaseMetricsQueryParams(queryParams, &q.BaseMetricsQuery, namespaceInfo)
}
func extractBaseMetricsQueryParams(queryParams url.Values, q *prometheus.BaseMetricsQuery, namespaceInfo *models.Namespace) error {
if rateIntervals, ok := queryParams["rateInterval"]; ok && len(rateIntervals) > 0 {
// Only first is taken into consideration
q.RateInterval = rateIntervals[0]
}
if rateFuncs, ok := queryParams["rateFunc"]; ok && len(rateFuncs) > 0 {
// Only first is taken into consideration
if rateFuncs[0] != "rate" && rateFuncs[0] != "irate" {
// Bad request
return errors.New("Bad request, query parameter 'rateFunc' must be either 'rate' or 'irate'")
}
q.RateFunc = rateFuncs[0]
}
if queryTimes, ok := queryParams["queryTime"]; ok && len(queryTimes) > 0 {
if num, err := strconv.ParseInt(queryTimes[0], 10, 64); err == nil {
q.End = time.Unix(num, 0)
} else {
// Bad request
return errors.New("Bad request, cannot parse query parameter 'queryTime'")
}
}
if durations, ok := queryParams["duration"]; ok && len(durations) > 0 {
if num, err := strconv.ParseInt(durations[0], 10, 64); err == nil {
duration := time.Duration(num) * time.Second
q.Start = q.End.Add(-duration)
} else {
// Bad request
return errors.New("Bad request, cannot parse query parameter 'duration'")
}
}
if steps, ok := queryParams["step"]; ok && len(steps) > 0 {
if num, err := strconv.Atoi(steps[0]); err == nil {
q.Step = time.Duration(num) * time.Second
} else {
// Bad request
return errors.New("Bad request, cannot parse query parameter 'step'")
}
}
if quantiles, ok := queryParams["quantiles[]"]; ok && len(quantiles) > 0 {
for _, quantile := range quantiles {
f, err := strconv.ParseFloat(quantile, 64)
if err != nil {
// Non parseable quantile
return errors.New("Bad request, cannot parse query parameter 'quantiles', float expected")
}
if f < 0 || f > 1 {
return errors.New("Bad request, invalid quantile(s): should be between 0 and 1")
}
}
q.Quantiles = quantiles
}
if avgFlags, ok := queryParams["avg"]; ok && len(avgFlags) > 0 {
if avgFlag, err := strconv.ParseBool(avgFlags[0]); err == nil {
q.Avg = avgFlag
} else {
// Bad request
return errors.New("Bad request, cannot parse query parameter 'avg'")
}
}
if lbls, ok := queryParams["byLabels[]"]; ok && len(lbls) > 0 {
q.ByLabels = lbls
}
// If needed, adjust interval -- Make sure query won't fetch data before the namespace creation
intervalStartTime, err := util.GetStartTimeForRateInterval(q.End, q.RateInterval)
if err != nil {
return err
}
if intervalStartTime.Before(namespaceInfo.CreationTimestamp) {
q.RateInterval = fmt.Sprintf("%ds", int(q.End.Sub(namespaceInfo.CreationTimestamp).Seconds()))
intervalStartTime = namespaceInfo.CreationTimestamp
log.Debugf("[extractMetricsQueryParams] Interval set to: %v", q.RateInterval)
}
// If needed, adjust query start time (bound to namespace creation time)
log.Debugf("[extractMetricsQueryParams] Requested query start time: %v", q.Start)
intervalDuration := q.End.Sub(intervalStartTime)
allowedStart := namespaceInfo.CreationTimestamp.Add(intervalDuration)
if q.Start.Before(allowedStart) {
q.Start = allowedStart
log.Debugf("[extractMetricsQueryParams] Query start time set to: %v", q.Start)
if q.Start.After(q.End) {
// This means that the query range does not fall in the range
// of life of the namespace. So, there are no metrics to query.
log.Debugf("[extractMetricsQueryParams] Query end time = %v; not querying metrics.", q.End)
return errors.New("After checks, query start time is after end time")
}
}
// Adjust start & end times to be a multiple of step
stepInSecs := int64(q.Step.Seconds())
q.Start = time.Unix((q.Start.Unix()/stepInSecs)*stepInSecs, 0)
return nil
}

vendor/github.com/kiali/kiali/handlers/namespaces.go generated vendored Normal file

@@ -0,0 +1,58 @@
package handlers
import (
"net/http"
"github.com/gorilla/mux"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus"
)
func NamespaceList(w http.ResponseWriter, r *http.Request) {
business, err := business.Get()
if err != nil {
log.Error(err)
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
namespaces, err := business.Namespace.GetNamespaces()
if err != nil {
log.Error(err)
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, namespaces)
}
// NamespaceMetrics is the API handler to fetch metrics to be displayed, related to all
// services in the namespace
func NamespaceMetrics(w http.ResponseWriter, r *http.Request) {
getNamespaceMetrics(w, r, defaultPromClientSupplier, defaultK8SClientSupplier)
}
// getNamespaceMetrics (mock-friendly version)
func getNamespaceMetrics(w http.ResponseWriter, r *http.Request, promSupplier promClientSupplier, k8sSupplier k8sClientSupplier) {
vars := mux.Vars(r)
namespace := vars["namespace"]
prom, _, namespaceInfo := initClientsForMetrics(w, promSupplier, k8sSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
params := prometheus.IstioMetricsQuery{Namespace: namespace}
err := extractIstioMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
metrics := prom.GetMetrics(&params)
RespondWithJSON(w, http.StatusOK, metrics)
}

vendor/github.com/kiali/kiali/handlers/root.go generated vendored Normal file

@@ -0,0 +1,30 @@
package handlers
import (
"net/http"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/status"
)
func Root(w http.ResponseWriter, r *http.Request) {
getStatus(w, r)
}
func getStatus(w http.ResponseWriter, r *http.Request) {
RespondWithJSONIndent(w, http.StatusOK, status.Get())
}
func GetToken(w http.ResponseWriter, r *http.Request) {
u, _, ok := r.BasicAuth()
if !ok {
RespondWithJSONIndent(w, http.StatusInternalServerError, u)
return
}
token, err := config.GenerateToken(u)
if err != nil {
RespondWithJSONIndent(w, http.StatusInternalServerError, err)
return
}
RespondWithJSONIndent(w, http.StatusOK, token)
}

vendor/github.com/kiali/kiali/handlers/services.go generated vendored Normal file

@@ -0,0 +1,159 @@
package handlers
import (
"net/http"
"sync"
"github.com/gorilla/mux"
"k8s.io/apimachinery/pkg/api/errors"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
"github.com/kiali/kiali/util"
)
// ServiceList is the API handler to fetch the list of services in a given namespace
func ServiceList(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
namespace := params["namespace"]
// Fetch and build services
serviceList, err := business.Svc.GetServiceList(namespace)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, serviceList)
}
// ServiceMetrics is the API handler to fetch metrics to be displayed, related to a single service
func ServiceMetrics(w http.ResponseWriter, r *http.Request) {
getServiceMetrics(w, r, defaultPromClientSupplier, defaultK8SClientSupplier)
}
// getServiceMetrics (mock-friendly version)
func getServiceMetrics(w http.ResponseWriter, r *http.Request, promSupplier promClientSupplier, k8sSupplier k8sClientSupplier) {
vars := mux.Vars(r)
namespace := vars["namespace"]
service := vars["service"]
prom, _, namespaceInfo := initClientsForMetrics(w, promSupplier, k8sSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
params := prometheus.IstioMetricsQuery{Namespace: namespace, Service: service}
err := extractIstioMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
metrics := prom.GetMetrics(&params)
RespondWithJSON(w, http.StatusOK, metrics)
}
// ServiceDetails is the API handler to fetch full details of a specific service
func ServiceDetails(w http.ResponseWriter, r *http.Request) {
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Services initialization error: "+err.Error())
return
}
// Rate interval is needed to fetch request rates based health
rateInterval := defaultHealthRateInterval
queryParams := r.URL.Query()
if rateIntervals, ok := queryParams["rateInterval"]; ok && len(rateIntervals) > 0 {
rateInterval = rateIntervals[0]
}
includeValidations := false
if _, found := queryParams["validate"]; found {
includeValidations = true
}
params := mux.Vars(r)
namespace := params["namespace"]
service := params["service"]
queryTime := util.Clock.Now()
rateInterval, err = adjustRateInterval(business, namespace, rateInterval, queryTime)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Adjust rate interval error: "+err.Error())
return
}
var istioConfigValidations = models.IstioValidations{}
wg := sync.WaitGroup{}
if includeValidations {
wg.Add(1)
go func(istioConfigValidations *models.IstioValidations, err *error) {
defer wg.Done()
istioConfigValidationResults, errValidations := business.Validations.GetValidations(namespace, service)
if errValidations != nil && *err == nil {
*err = errValidations
} else {
*istioConfigValidations = istioConfigValidationResults
}
}(&istioConfigValidations, &err)
}
serviceDetails, err := business.Svc.GetService(namespace, service, rateInterval, queryTime)
if includeValidations && err == nil {
wg.Wait()
serviceDetails.Validations = istioConfigValidations
}
if err != nil {
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
} else if statusError, isStatus := err.(*errors.StatusError); isStatus {
RespondWithError(w, http.StatusInternalServerError, statusError.ErrStatus.Message)
} else {
RespondWithError(w, http.StatusInternalServerError, err.Error())
}
return
}
RespondWithJSON(w, http.StatusOK, serviceDetails)
}
// ServiceDashboard is the API handler to fetch Istio dashboard, related to a single service
func ServiceDashboard(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
namespace := vars["namespace"]
service := vars["service"]
prom, _, namespaceInfo := initClientsForMetrics(w, defaultPromClientSupplier, defaultK8SClientSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
params := prometheus.IstioMetricsQuery{Namespace: namespace, Service: service}
err := extractIstioMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
svc := business.NewDashboardsService(nil, prom)
dashboard, err := svc.GetIstioDashboard(params)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, dashboard)
}
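`ServiceDetails` (and `IstioConfigList` above) fetch validations concurrently with the main lookup and join the results with a `sync.WaitGroup` before responding. A self-contained sketch of that fan-out/join pattern, with stand-in fetch functions instead of the business-layer calls:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchWithValidations sketches the concurrency pattern used by
// ServiceDetails: validations are fetched in a goroutine while the main
// flow fetches details, and both results are joined before responding.
// fetchDetails and fetchValidations stand in for the business-layer calls.
func fetchWithValidations(fetchDetails, fetchValidations func() (string, error)) (string, string, error) {
	var validations string
	var vErr error
	wg := sync.WaitGroup{}
	wg.Add(1)
	go func() {
		defer wg.Done()
		validations, vErr = fetchValidations()
	}()
	details, err := fetchDetails()
	wg.Wait() // safe to read validations/vErr only after the join
	if err == nil {
		err = vErr
	}
	return details, validations, err
}

func main() {
	d, v, err := fetchWithValidations(
		func() (string, error) { return "details", nil },
		func() (string, error) { return "validations", nil },
	)
	fmt.Println(d, v, err)
}
```

The handlers attach the validation result to the response only when the main fetch succeeded, which is why the error check comes after `wg.Wait()`.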

vendor/github.com/kiali/kiali/handlers/utils.go generated vendored Normal file

@@ -0,0 +1,70 @@
package handlers
import (
"net/http"
"net/url"
"k8s.io/api/core/v1"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/models"
"github.com/kiali/kiali/prometheus"
)
type promClientSupplier func() (*prometheus.Client, error)
type k8sClientSupplier func() (kubernetes.IstioClientInterface, error)
var defaultPromClientSupplier = prometheus.NewClient
var defaultK8SClientSupplier = func() (kubernetes.IstioClientInterface, error) {
return kubernetes.NewClient()
}
func getService(namespace string, service string) (*v1.ServiceSpec, error) {
client, err := kubernetes.NewClient()
if err != nil {
return nil, err
}
svc, err := client.GetService(namespace, service)
if err != nil {
return nil, err
}
return &svc.Spec, nil
}
func validateURL(serviceURL string) (*url.URL, error) {
return url.ParseRequestURI(serviceURL)
}
func checkNamespaceAccess(w http.ResponseWriter, k8s kubernetes.IstioClientInterface, prom prometheus.ClientInterface, namespace string) *models.Namespace {
layer := business.NewWithBackends(k8s, prom)
if nsInfo, err := layer.Namespace.GetNamespace(namespace); err != nil {
RespondWithError(w, http.StatusForbidden, "Cannot access namespace data: "+err.Error())
return nil
} else {
return nsInfo
}
}
func initClientsForMetrics(w http.ResponseWriter, promSupplier promClientSupplier, k8sSupplier k8sClientSupplier, namespace string) (*prometheus.Client, kubernetes.IstioClientInterface, *models.Namespace) {
k8s, err := k8sSupplier()
if err != nil {
log.Error(err)
RespondWithError(w, http.StatusServiceUnavailable, "Kubernetes client error: "+err.Error())
return nil, nil, nil
}
prom, err := promSupplier()
if err != nil {
log.Error(err)
RespondWithError(w, http.StatusServiceUnavailable, "Prometheus client error: "+err.Error())
return nil, nil, nil
}
nsInfo := checkNamespaceAccess(w, k8s, prom, namespace)
if nsInfo == nil {
return nil, nil, nil
}
return prom, k8s, nsInfo
}

vendor/github.com/kiali/kiali/handlers/workloads.go generated vendored Normal file

@@ -0,0 +1,116 @@
package handlers
import (
"net/http"
"github.com/gorilla/mux"
"k8s.io/apimachinery/pkg/api/errors"
"github.com/kiali/kiali/business"
"github.com/kiali/kiali/prometheus"
)
// WorkloadList is the API handler to fetch all the workloads to be displayed, related to a single namespace
func WorkloadList(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Workloads initialization error: "+err.Error())
return
}
namespace := params["namespace"]
// Fetch and build workloads
workloadList, err := business.Workload.GetWorkloadList(namespace)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, workloadList)
}
// WorkloadDetails is the API handler to fetch all details to be displayed, related to a single workload
func WorkloadDetails(w http.ResponseWriter, r *http.Request) {
params := mux.Vars(r)
// Get business layer
business, err := business.Get()
if err != nil {
RespondWithError(w, http.StatusInternalServerError, "Workloads initialization error: "+err.Error())
return
}
namespace := params["namespace"]
workload := params["workload"]
// Fetch and build workload
workloadDetails, err := business.Workload.GetWorkload(namespace, workload, true)
if err != nil {
if errors.IsNotFound(err) {
RespondWithError(w, http.StatusNotFound, err.Error())
} else {
RespondWithError(w, http.StatusInternalServerError, err.Error())
}
return
}
RespondWithJSON(w, http.StatusOK, workloadDetails)
}
// WorkloadMetrics is the API handler to fetch metrics to be displayed, related to a single workload
func WorkloadMetrics(w http.ResponseWriter, r *http.Request) {
getWorkloadMetrics(w, r, defaultPromClientSupplier, defaultK8SClientSupplier)
}
// getWorkloadMetrics (mock-friendly version)
func getWorkloadMetrics(w http.ResponseWriter, r *http.Request, promSupplier promClientSupplier, k8sSupplier k8sClientSupplier) {
vars := mux.Vars(r)
namespace := vars["namespace"]
workload := vars["workload"]
prom, _, namespaceInfo := initClientsForMetrics(w, promSupplier, k8sSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
params := prometheus.IstioMetricsQuery{Namespace: namespace, Workload: workload}
err := extractIstioMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
metrics := prom.GetMetrics(&params)
RespondWithJSON(w, http.StatusOK, metrics)
}
// WorkloadDashboard is the API handler to fetch Istio dashboard, related to a single workload
func WorkloadDashboard(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
namespace := vars["namespace"]
workload := vars["workload"]
prom, _, namespaceInfo := initClientsForMetrics(w, defaultPromClientSupplier, defaultK8SClientSupplier, namespace)
if prom == nil {
// any returned value nil means error & response already written
return
}
params := prometheus.IstioMetricsQuery{Namespace: namespace, Workload: workload}
err := extractIstioMetricsQueryParams(r, &params, namespaceInfo)
if err != nil {
RespondWithError(w, http.StatusBadRequest, err.Error())
return
}
svc := business.NewDashboardsService(nil, prom)
dashboard, err := svc.GetIstioDashboard(params)
if err != nil {
RespondWithError(w, http.StatusInternalServerError, err.Error())
return
}
RespondWithJSON(w, http.StatusOK, dashboard)
}
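The cache controllers in kubernetes/cache.go below look objects up with `GetByKey(namespace + "/" + name)`, the standard client-go informer key convention. A stdlib-only sketch of that keying scheme (the map stands in for the informer's indexer store):

```go
package main

import "fmt"

// metaKey builds the "namespace/name" key used with
// SharedIndexInformer.GetIndexer().GetByKey in the cache controllers.
func metaKey(namespace, name string) string {
	return namespace + "/" + name
}

func main() {
	// A plain map stands in for the informer's thread-safe store.
	store := map[string]string{
		metaKey("istio-system", "kiali"): "Deployment",
	}
	obj, exist := store[metaKey("istio-system", "kiali")]
	fmt.Println(obj, exist)
}
```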

vendor/github.com/kiali/kiali/kubernetes/cache.go generated vendored Normal file

@@ -0,0 +1,459 @@
package kubernetes
import (
"errors"
"fmt"
"sync"
"time"
"k8s.io/api/apps/v1beta1"
"k8s.io/api/apps/v1beta2"
batch_v1 "k8s.io/api/batch/v1"
batch_v1beta1 "k8s.io/api/batch/v1beta1"
"k8s.io/api/core/v1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/informers"
kube "k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/cache"
"github.com/kiali/kiali/log"
)
type (
// Inspired/reused from istio code:
// https://github.com/istio/istio/blob/master/mixer/adapter/kubernetesenv/cache.go
cacheController interface {
// Control Cache
Start()
HasSynced() bool
WaitForSync() bool
Stop()
// Business methods
GetCronJobs(namespace string) ([]batch_v1beta1.CronJob, error)
GetDeployment(namespace string, name string) (*v1beta1.Deployment, error)
GetDeployments(namespace string) ([]v1beta1.Deployment, error)
GetEndpoints(namespace, name string) (*v1.Endpoints, error)
GetJobs(namespace string) ([]batch_v1.Job, error)
GetPods(namespace string) ([]v1.Pod, error)
GetReplicationControllers(namespace string) ([]v1.ReplicationController, error)
GetReplicaSets(namespace string) ([]v1beta2.ReplicaSet, error)
GetService(namespace string, name string) (*v1.Service, error)
GetServices(namespace string) ([]v1.Service, error)
GetStatefulSet(namespace string, name string) (*v1beta2.StatefulSet, error)
GetStatefulSets(namespace string) ([]v1beta2.StatefulSet, error)
}
controllerImpl struct {
clientset kube.Interface
refreshDuration time.Duration
stopChan chan struct{}
syncCount int
maxSyncCount int
isErrorState bool
lastError error
lastErrorLock sync.Mutex
controllers map[string]cache.SharedIndexInformer
}
)
var (
lastCacheErrorLock sync.Mutex
errorCallbacks []func(error)
)
func init() {
setupErrorHandlers()
errorCallbacks = make([]func(error), 0)
}
func setupErrorHandlers() {
nErrFunc := len(utilruntime.ErrorHandlers)
customErrorHandler := make([]func(error), nErrFunc+1)
for i, errorFunc := range utilruntime.ErrorHandlers {
customErrorHandler[i] = errorFunc
}
customErrorHandler[nErrFunc] = func(err error) {
for _, callback := range errorCallbacks {
callback(err)
}
}
utilruntime.ErrorHandlers = customErrorHandler
}
func registerErrorCallback(callback func(error)) {
defer lastCacheErrorLock.Unlock()
lastCacheErrorLock.Lock()
errorCallbacks = append(errorCallbacks, callback)
}
func newCacheController(clientset kube.Interface, refreshDuration time.Duration) cacheController {
newControllerImpl := controllerImpl{
clientset: clientset,
refreshDuration: refreshDuration,
stopChan: nil,
controllers: initControllers(clientset, refreshDuration),
syncCount: 0,
maxSyncCount: 20, // TODO: move this to config, or is this constant good enough?
}
registerErrorCallback(newControllerImpl.ErrorCallback)
return &newControllerImpl
}
func initControllers(clientset kube.Interface, refreshDuration time.Duration) map[string]cache.SharedIndexInformer {
sharedInformers := informers.NewSharedInformerFactory(clientset, refreshDuration)
controllers := make(map[string]cache.SharedIndexInformer)
controllers["Pod"] = sharedInformers.Core().V1().Pods().Informer()
controllers["ReplicationController"] = sharedInformers.Core().V1().ReplicationControllers().Informer()
controllers["Deployment"] = sharedInformers.Apps().V1beta1().Deployments().Informer()
controllers["ReplicaSet"] = sharedInformers.Apps().V1beta2().ReplicaSets().Informer()
controllers["StatefulSet"] = sharedInformers.Apps().V1beta2().StatefulSets().Informer()
controllers["Job"] = sharedInformers.Batch().V1().Jobs().Informer()
controllers["CronJob"] = sharedInformers.Batch().V1beta1().CronJobs().Informer()
controllers["Service"] = sharedInformers.Core().V1().Services().Informer()
controllers["Endpoints"] = sharedInformers.Core().V1().Endpoints().Informer()
return controllers
}
func (c *controllerImpl) Start() {
if c.stopChan == nil {
c.stopChan = make(chan struct{})
go c.run(c.stopChan)
log.Infof("K8S cache started")
} else {
log.Warningf("K8S cache is already running")
}
}
func (c *controllerImpl) run(stop <-chan struct{}) {
for _, cn := range c.controllers {
go cn.Run(stop)
}
<-stop
log.Infof("K8S cache stopped")
}
func (c *controllerImpl) HasSynced() bool {
if c.syncCount > c.maxSyncCount {
log.Errorf("Max attempts reached syncing cache. Error connecting to k8s API: %d > %d", c.syncCount, c.maxSyncCount)
c.Stop()
return false
}
hasSynced := true
for _, cn := range c.controllers {
hasSynced = hasSynced && cn.HasSynced()
}
if hasSynced {
c.syncCount = 0
} else {
c.syncCount = c.syncCount + 1
}
return hasSynced
}
func (c *controllerImpl) WaitForSync() bool {
return cache.WaitForCacheSync(c.stopChan, c.HasSynced)
}
func (c *controllerImpl) Stop() {
if c.stopChan != nil {
close(c.stopChan)
c.stopChan = nil
}
}
func (c *controllerImpl) ErrorCallback(err error) {
if !c.isErrorState {
log.Warningf("Error callback received: %s", err)
c.lastErrorLock.Lock()
c.isErrorState = true
c.lastError = err
c.lastErrorLock.Unlock()
c.Stop()
}
}
func (c *controllerImpl) checkStateAndRetry() error {
if !c.isErrorState {
return nil
}
// Cache retry is held by a single goroutine
c.lastErrorLock.Lock()
if c.isErrorState {
// ping to check whether the backend is reachable again (uses the namespaces endpoint)
_, err := c.clientset.CoreV1().Namespaces().List(emptyListOptions)
if err != nil {
c.lastError = fmt.Errorf("Error retrying to connect to K8S API backend. %s", err)
} else {
c.lastError = nil
c.isErrorState = false
c.Start()
c.WaitForSync()
}
}
c.lastErrorLock.Unlock()
return c.lastError
}
func (c *controllerImpl) GetCronJobs(namespace string) ([]batch_v1beta1.CronJob, error) {
if err := c.checkStateAndRetry(); err != nil {
return []batch_v1beta1.CronJob{}, err
}
indexer := c.controllers["CronJob"].GetIndexer()
cronjobs, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []batch_v1beta1.CronJob{}, err
}
if len(cronjobs) > 0 {
_, ok := cronjobs[0].(*batch_v1beta1.CronJob)
if !ok {
return []batch_v1beta1.CronJob{}, errors.New("Bad CronJob type found in cache")
}
nsCronjobs := make([]batch_v1beta1.CronJob, len(cronjobs))
for i, cronjob := range cronjobs {
nsCronjobs[i] = *(cronjob.(*batch_v1beta1.CronJob))
}
return nsCronjobs, nil
}
return []batch_v1beta1.CronJob{}, nil
}
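Every list getter below repeats the same dance: type-assert the first cached item as a sanity check, then copy-convert the `[]interface{}` of pointers into a typed value slice. With modern Go generics that pattern can be factored into one helper; this is a sketch only (the vendored code predates generics, and `cronJob` stands in for `batch_v1beta1.CronJob`):

```go
package main

import (
	"errors"
	"fmt"
)

// typedList converts a cache indexer's []interface{} of *T pointers
// into a []T value slice, failing on any foreign type.
func typedList[T any](items []interface{}) ([]T, error) {
	out := make([]T, len(items))
	for i, item := range items {
		ptr, ok := item.(*T)
		if !ok {
			return nil, errors.New("bad type found in cache")
		}
		out[i] = *ptr
	}
	return out, nil
}

type cronJob struct{ Name string } // stand-in for batch_v1beta1.CronJob

func main() {
	cached := []interface{}{&cronJob{"a"}, &cronJob{"b"}}
	jobs, err := typedList[cronJob](cached)
	fmt.Println(len(jobs), err) // 2 <nil>
}
```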
func (c *controllerImpl) GetDeployment(namespace, name string) (*v1beta1.Deployment, error) {
if err := c.checkStateAndRetry(); err != nil {
return nil, err
}
indexer := c.controllers["Deployment"].GetIndexer()
deps, exist, err := indexer.GetByKey(namespace + "/" + name)
if err != nil {
return nil, err
}
if exist {
dep, ok := deps.(*v1beta1.Deployment)
if !ok {
return nil, errors.New("Bad Deployment type found in cache")
}
return dep, nil
}
return nil, NewNotFound(name, "apps/v1beta1", "Deployment")
}
func (c *controllerImpl) GetDeployments(namespace string) ([]v1beta1.Deployment, error) {
if err := c.checkStateAndRetry(); err != nil {
return []v1beta1.Deployment{}, err
}
indexer := c.controllers["Deployment"].GetIndexer()
deps, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []v1beta1.Deployment{}, err
}
if len(deps) > 0 {
_, ok := deps[0].(*v1beta1.Deployment)
if !ok {
return nil, errors.New("Bad Deployment type found in cache")
}
nsDeps := make([]v1beta1.Deployment, len(deps))
for i, dep := range deps {
nsDeps[i] = *(dep.(*v1beta1.Deployment))
}
return nsDeps, nil
}
return []v1beta1.Deployment{}, nil
}
func (c *controllerImpl) GetEndpoints(namespace, name string) (*v1.Endpoints, error) {
if err := c.checkStateAndRetry(); err != nil {
return nil, err
}
indexer := c.controllers["Endpoints"].GetIndexer()
endpoints, exist, err := indexer.GetByKey(namespace + "/" + name)
if err != nil {
return nil, err
}
if exist {
endpoint, ok := endpoints.(*v1.Endpoints)
if !ok {
return nil, errors.New("Bad Endpoints type found in cache")
}
return endpoint, nil
}
return nil, NewNotFound(name, "core/v1", "Endpoints")
}
func (c *controllerImpl) GetJobs(namespace string) ([]batch_v1.Job, error) {
if err := c.checkStateAndRetry(); err != nil {
return []batch_v1.Job{}, err
}
indexer := c.controllers["Job"].GetIndexer()
jobs, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []batch_v1.Job{}, err
}
if len(jobs) > 0 {
_, ok := jobs[0].(*batch_v1.Job)
if !ok {
return []batch_v1.Job{}, errors.New("Bad Job type found in cache")
}
nsJobs := make([]batch_v1.Job, len(jobs))
for i, job := range jobs {
nsJobs[i] = *(job.(*batch_v1.Job))
}
return nsJobs, nil
}
return []batch_v1.Job{}, nil
}
func (c *controllerImpl) GetPods(namespace string) ([]v1.Pod, error) {
if err := c.checkStateAndRetry(); err != nil {
return []v1.Pod{}, err
}
indexer := c.controllers["Pod"].GetIndexer()
pods, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []v1.Pod{}, err
}
if len(pods) > 0 {
_, ok := pods[0].(*v1.Pod)
if !ok {
return []v1.Pod{}, errors.New("Bad Pod type found in cache")
}
nsPods := make([]v1.Pod, len(pods))
for i, pod := range pods {
nsPods[i] = *(pod.(*v1.Pod))
}
return nsPods, nil
}
return []v1.Pod{}, nil
}
func (c *controllerImpl) GetReplicationControllers(namespace string) ([]v1.ReplicationController, error) {
if err := c.checkStateAndRetry(); err != nil {
return []v1.ReplicationController{}, err
}
indexer := c.controllers["ReplicationController"].GetIndexer()
repcons, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []v1.ReplicationController{}, err
}
if len(repcons) > 0 {
_, ok := repcons[0].(*v1.ReplicationController)
if !ok {
return []v1.ReplicationController{}, errors.New("Bad ReplicationController type found in cache")
}
nsRepcons := make([]v1.ReplicationController, len(repcons))
for i, repcon := range repcons {
nsRepcons[i] = *(repcon.(*v1.ReplicationController))
}
return nsRepcons, nil
}
return []v1.ReplicationController{}, nil
}
func (c *controllerImpl) GetReplicaSets(namespace string) ([]v1beta2.ReplicaSet, error) {
if err := c.checkStateAndRetry(); err != nil {
return []v1beta2.ReplicaSet{}, err
}
indexer := c.controllers["ReplicaSet"].GetIndexer()
repsets, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []v1beta2.ReplicaSet{}, err
}
if len(repsets) > 0 {
_, ok := repsets[0].(*v1beta2.ReplicaSet)
if !ok {
return []v1beta2.ReplicaSet{}, errors.New("Bad ReplicaSet type found in cache")
}
nsRepsets := make([]v1beta2.ReplicaSet, len(repsets))
for i, repset := range repsets {
nsRepsets[i] = *(repset.(*v1beta2.ReplicaSet))
}
return nsRepsets, nil
}
return []v1beta2.ReplicaSet{}, nil
}
func (c *controllerImpl) GetStatefulSet(namespace, name string) (*v1beta2.StatefulSet, error) {
if err := c.checkStateAndRetry(); err != nil {
return nil, err
}
indexer := c.controllers["StatefulSet"].GetIndexer()
fulsets, exist, err := indexer.GetByKey(namespace + "/" + name)
if err != nil {
return nil, err
}
if exist {
fulset, ok := fulsets.(*v1beta2.StatefulSet)
if !ok {
return nil, errors.New("Bad StatefulSet type found in cache")
}
return fulset, nil
}
return nil, NewNotFound(name, "apps/v1beta2", "StatefulSet")
}
func (c *controllerImpl) GetStatefulSets(namespace string) ([]v1beta2.StatefulSet, error) {
if err := c.checkStateAndRetry(); err != nil {
return []v1beta2.StatefulSet{}, err
}
indexer := c.controllers["StatefulSet"].GetIndexer()
fulsets, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []v1beta2.StatefulSet{}, err
}
if len(fulsets) > 0 {
_, ok := fulsets[0].(*v1beta2.StatefulSet)
if !ok {
return []v1beta2.StatefulSet{}, errors.New("Bad StatefulSet type found in cache")
}
nsFulsets := make([]v1beta2.StatefulSet, len(fulsets))
for i, fulset := range fulsets {
nsFulsets[i] = *(fulset.(*v1beta2.StatefulSet))
}
return nsFulsets, nil
}
return []v1beta2.StatefulSet{}, nil
}
func (c *controllerImpl) GetService(namespace, name string) (*v1.Service, error) {
if err := c.checkStateAndRetry(); err != nil {
return nil, err
}
indexer := c.controllers["Service"].GetIndexer()
services, exist, err := indexer.GetByKey(namespace + "/" + name)
if err != nil {
return nil, err
}
if exist {
service, ok := services.(*v1.Service)
if !ok {
return nil, errors.New("Bad Service type found in cache")
}
return service, nil
}
return nil, NewNotFound(name, "core/v1", "Service")
}
func (c *controllerImpl) GetServices(namespace string) ([]v1.Service, error) {
if err := c.checkStateAndRetry(); err != nil {
return []v1.Service{}, err
}
indexer := c.controllers["Service"].GetIndexer()
services, err := indexer.ByIndex("namespace", namespace)
if err != nil {
return []v1.Service{}, err
}
if len(services) > 0 {
_, ok := services[0].(*v1.Service)
if !ok {
return []v1.Service{}, errors.New("Bad Service type found in cache")
}
nsServices := make([]v1.Service, len(services))
for i, service := range services {
nsServices[i] = *(service.(*v1.Service))
}
return nsServices, nil
}
return []v1.Service{}, nil
}

vendor/github.com/kiali/kiali/kubernetes/client.go generated vendored Normal file

@@ -0,0 +1,293 @@
package kubernetes
import (
"errors"
"fmt"
"net"
"os"
"time"
"k8s.io/api/apps/v1beta1"
"k8s.io/api/apps/v1beta2"
auth_v1 "k8s.io/api/authorization/v1"
batch_v1 "k8s.io/api/batch/v1"
batch_v1beta1 "k8s.io/api/batch/v1beta1"
v1 "k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/runtime/serializer"
kube "k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
kialiConfig "github.com/kiali/kiali/config"
"github.com/kiali/kiali/log"
osappsv1 "github.com/openshift/api/apps/v1"
osv1 "github.com/openshift/api/project/v1"
)
var (
emptyListOptions = meta_v1.ListOptions{}
emptyGetOptions = meta_v1.GetOptions{}
)
// IstioClientInterface for mocks (only mocked functions are necessary here)
type IstioClientInterface interface {
CreateIstioObject(api, namespace, resourceType, json string) (IstioObject, error)
DeleteIstioObject(api, namespace, resourceType, name string) error
GetAdapter(namespace, adapterType, adapterName string) (IstioObject, error)
GetAdapters(namespace string) ([]IstioObject, error)
GetCronJobs(namespace string) ([]batch_v1beta1.CronJob, error)
GetDeployment(namespace string, deploymentName string) (*v1beta1.Deployment, error)
GetDeployments(namespace string) ([]v1beta1.Deployment, error)
GetDeploymentConfig(namespace string, deploymentconfigName string) (*osappsv1.DeploymentConfig, error)
GetDeploymentConfigs(namespace string) ([]osappsv1.DeploymentConfig, error)
GetDestinationRule(namespace string, destinationrule string) (IstioObject, error)
GetDestinationRules(namespace string, serviceName string) ([]IstioObject, error)
GetEndpoints(namespace string, serviceName string) (*v1.Endpoints, error)
GetGateway(namespace string, gateway string) (IstioObject, error)
GetGateways(namespace string) ([]IstioObject, error)
GetIstioDetails(namespace string, serviceName string) (*IstioDetails, error)
GetIstioRule(namespace string, istiorule string) (IstioObject, error)
GetIstioRules(namespace string) ([]IstioObject, error)
GetJobs(namespace string) ([]batch_v1.Job, error)
GetNamespace(namespace string) (*v1.Namespace, error)
GetNamespaces() ([]v1.Namespace, error)
GetPods(namespace, labelSelector string) ([]v1.Pod, error)
GetProject(project string) (*osv1.Project, error)
GetProjects() ([]osv1.Project, error)
GetQuotaSpec(namespace string, quotaSpecName string) (IstioObject, error)
GetQuotaSpecs(namespace string) ([]IstioObject, error)
GetQuotaSpecBinding(namespace string, quotaSpecBindingName string) (IstioObject, error)
GetQuotaSpecBindings(namespace string) ([]IstioObject, error)
GetReplicationControllers(namespace string) ([]v1.ReplicationController, error)
GetReplicaSets(namespace string) ([]v1beta2.ReplicaSet, error)
GetSelfSubjectAccessReview(namespace, api, resourceType string, verbs []string) ([]*auth_v1.SelfSubjectAccessReview, error)
GetService(namespace string, serviceName string) (*v1.Service, error)
GetServices(namespace string, selectorLabels map[string]string) ([]v1.Service, error)
GetServiceEntries(namespace string) ([]IstioObject, error)
GetServiceEntry(namespace string, serviceEntryName string) (IstioObject, error)
GetStatefulSet(namespace string, statefulsetName string) (*v1beta2.StatefulSet, error)
GetStatefulSets(namespace string) ([]v1beta2.StatefulSet, error)
GetTemplate(namespace, templateType, templateName string) (IstioObject, error)
GetTemplates(namespace string) ([]IstioObject, error)
GetPolicy(namespace string, policyName string) (IstioObject, error)
GetPolicies(namespace string) ([]IstioObject, error)
GetMeshPolicy(namespace string, policyName string) (IstioObject, error)
GetMeshPolicies(namespace string) ([]IstioObject, error)
GetClusterRbacConfig(namespace string, name string) (IstioObject, error)
GetClusterRbacConfigs(namespace string) ([]IstioObject, error)
GetServiceRole(namespace string, name string) (IstioObject, error)
GetServiceRoles(namespace string) ([]IstioObject, error)
GetServiceRoleBinding(namespace string, name string) (IstioObject, error)
GetServiceRoleBindings(namespace string) ([]IstioObject, error)
GetVirtualService(namespace string, virtualservice string) (IstioObject, error)
GetVirtualServices(namespace string, serviceName string) ([]IstioObject, error)
IsOpenShift() bool
Stop()
UpdateIstioObject(api, namespace, resourceType, name, jsonPatch string) (IstioObject, error)
}
// IstioClient is the client struct for Kubernetes and Istio APIs
// It hides the way it queries each API
type IstioClient struct {
IstioClientInterface
k8s *kube.Clientset
istioConfigApi *rest.RESTClient
istioNetworkingApi *rest.RESTClient
istioAuthenticationApi *rest.RESTClient
istioRbacApi *rest.RESTClient
// isOpenShift private variable records whether Kiali is deployed in an OpenShift cluster or not.
// It is represented as a pointer to include the initialization phase.
// See kubernetes_service.go#IsOpenShift() for more details.
isOpenShift *bool
// Cache controller is a global cache for all k8s objects fetched by kiali in multiple namespaces.
// It doesn't support reduced-permissions scenarios yet; don't forget to disable it in those use cases.
k8sCache cacheController
stopCache chan struct{}
}
// GetK8sApi returns the clientset referencing all K8s rest clients
func (client *IstioClient) GetK8sApi() *kube.Clientset {
return client.k8s
}
// GetIstioConfigApi returns the istio config rest client
func (client *IstioClient) GetIstioConfigApi() *rest.RESTClient {
return client.istioConfigApi
}
// GetIstioNetworkingApi returns the istio networking rest client
func (client *IstioClient) GetIstioNetworkingApi() *rest.RESTClient {
return client.istioNetworkingApi
}
// GetIstioRbacApi returns the istio rbac rest client
func (client *IstioClient) GetIstioRbacApi() *rest.RESTClient {
return client.istioRbacApi
}
// ConfigClient returns a client with the correct configuration.
// Returns the in-cluster configuration when InCluster is true,
// and the out-of-cluster configuration when InCluster is false.
// It returns an error on any problem.
func ConfigClient() (*rest.Config, error) {
if kialiConfig.Get().InCluster {
incluster, err := rest.InClusterConfig()
if err != nil {
return nil, err
}
incluster.QPS = kialiConfig.Get().KubernetesConfig.QPS
incluster.Burst = kialiConfig.Get().KubernetesConfig.Burst
return incluster, nil
}
host, port := os.Getenv("KUBERNETES_SERVICE_HOST"), os.Getenv("KUBERNETES_SERVICE_PORT")
if len(host) == 0 || len(port) == 0 {
return nil, fmt.Errorf("unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined")
}
return &rest.Config{
// TODO: switch to using cluster DNS.
Host: "http://" + net.JoinHostPort(host, port),
QPS: kialiConfig.Get().KubernetesConfig.QPS,
Burst: kialiConfig.Get().KubernetesConfig.Burst,
}, nil
}
// NewClient creates a new client to the Kubernetes and Istio APIs.
func NewClient() (*IstioClient, error) {
config, err := ConfigClient()
if err != nil {
return nil, err
}
return NewClientFromConfig(config)
}
// NewClientFromConfig creates a new client to the Kubernetes and Istio APIs.
// It assumes that Istio is deployed in the cluster.
// It hides the access to Kubernetes/Openshift credentials.
// It hides the low level use of the API of Kubernetes and Istio, it should be considered as an implementation detail.
// It returns an error on any problem.
func NewClientFromConfig(config *rest.Config) (*IstioClient, error) {
client := IstioClient{}
log.Debugf("Rest perf config QPS: %f Burst: %d", config.QPS, config.Burst)
k8s, err := kube.NewForConfig(config)
if err != nil {
return nil, err
}
client.k8s = k8s
// Init client cache
// Note that cache will work only in full permissions scenarios (similar permissions as mixer/istio-telemetry component)
kialiK8sCfg := kialiConfig.Get().KubernetesConfig
if client.k8sCache == nil && kialiK8sCfg.CacheEnabled {
log.Infof("Kiali K8S Cache enabled")
client.stopCache = make(chan struct{})
client.k8sCache = newCacheController(client.k8s, time.Duration(kialiConfig.Get().KubernetesConfig.CacheDuration))
client.k8sCache.Start()
if !client.k8sCache.WaitForSync() {
return nil, errors.New("Cache cannot connect with the k8s API on host: " + config.Host)
}
}
// Istio is a CRD extension of Kubernetes API, so any custom type should be registered here.
// KnownTypes registers the Istio objects we use, as soon as we get more info we will increase the number of types.
types := runtime.NewScheme()
schemeBuilder := runtime.NewSchemeBuilder(
func(scheme *runtime.Scheme) error {
// Register networking types
for _, nt := range networkingTypes {
scheme.AddKnownTypeWithName(NetworkingGroupVersion.WithKind(nt.objectKind), &GenericIstioObject{})
scheme.AddKnownTypeWithName(NetworkingGroupVersion.WithKind(nt.collectionKind), &GenericIstioObjectList{})
}
// Register config types
for _, cf := range configTypes {
scheme.AddKnownTypeWithName(ConfigGroupVersion.WithKind(cf.objectKind), &GenericIstioObject{})
scheme.AddKnownTypeWithName(ConfigGroupVersion.WithKind(cf.collectionKind), &GenericIstioObjectList{})
}
// Register adapter types
for _, ad := range adapterTypes {
scheme.AddKnownTypeWithName(ConfigGroupVersion.WithKind(ad.objectKind), &GenericIstioObject{})
scheme.AddKnownTypeWithName(ConfigGroupVersion.WithKind(ad.collectionKind), &GenericIstioObjectList{})
}
// Register template types
for _, tp := range templateTypes {
scheme.AddKnownTypeWithName(ConfigGroupVersion.WithKind(tp.objectKind), &GenericIstioObject{})
scheme.AddKnownTypeWithName(ConfigGroupVersion.WithKind(tp.collectionKind), &GenericIstioObjectList{})
}
// Register authentication types
for _, at := range authenticationTypes {
scheme.AddKnownTypeWithName(AuthenticationGroupVersion.WithKind(at.objectKind), &GenericIstioObject{})
scheme.AddKnownTypeWithName(AuthenticationGroupVersion.WithKind(at.collectionKind), &GenericIstioObjectList{})
}
// Register rbac types
for _, rt := range rbacTypes {
scheme.AddKnownTypeWithName(RbacGroupVersion.WithKind(rt.objectKind), &GenericIstioObject{})
scheme.AddKnownTypeWithName(RbacGroupVersion.WithKind(rt.collectionKind), &GenericIstioObjectList{})
}
meta_v1.AddToGroupVersion(scheme, ConfigGroupVersion)
meta_v1.AddToGroupVersion(scheme, NetworkingGroupVersion)
meta_v1.AddToGroupVersion(scheme, AuthenticationGroupVersion)
meta_v1.AddToGroupVersion(scheme, RbacGroupVersion)
return nil
})
err = schemeBuilder.AddToScheme(types)
if err != nil {
return nil, err
}
// Istio needs another type as it queries a different K8S API.
istioConfigAPI, err := newClientForAPI(config, ConfigGroupVersion, types)
if err != nil {
return nil, err
}
istioNetworkingAPI, err := newClientForAPI(config, NetworkingGroupVersion, types)
if err != nil {
return nil, err
}
istioAuthenticationAPI, err := newClientForAPI(config, AuthenticationGroupVersion, types)
if err != nil {
return nil, err
}
istioRbacApi, err := newClientForAPI(config, RbacGroupVersion, types)
if err != nil {
return nil, err
}
client.istioConfigApi = istioConfigAPI
client.istioNetworkingApi = istioNetworkingAPI
client.istioAuthenticationApi = istioAuthenticationAPI
client.istioRbacApi = istioRbacApi
return &client, nil
}
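The `schemeBuilder` loop above registers every Istio kind (and its list kind) against one generic object type, so the decoder can instantiate the right Go type by kind name. A toy registry showing the same idea with only the stdlib; `genericObject` stands in for `GenericIstioObject`:

```go
package main

import "fmt"

type genericObject struct{ Kind string }

// registry maps kind names to factories, the way AddKnownTypeWithName
// maps GroupVersionKinds to prototype objects.
type registry map[string]func() *genericObject

func (r registry) register(kinds ...string) {
	for _, k := range kinds {
		kind := k // capture per iteration
		r[kind] = func() *genericObject { return &genericObject{Kind: kind} }
	}
}

func main() {
	r := registry{}
	r.register("VirtualService", "VirtualServiceList", "Gateway", "GatewayList")
	obj := r["Gateway"]()
	fmt.Println(obj.Kind, len(r)) // Gateway 4
}
```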
func newClientForAPI(fromCfg *rest.Config, groupVersion schema.GroupVersion, scheme *runtime.Scheme) (*rest.RESTClient, error) {
cfg := rest.Config{
Host: fromCfg.Host,
APIPath: "/apis",
ContentConfig: rest.ContentConfig{
GroupVersion: &groupVersion,
NegotiatedSerializer: serializer.DirectCodecFactory{CodecFactory: serializer.NewCodecFactory(scheme)},
ContentType: runtime.ContentTypeJSON,
},
BearerToken: fromCfg.BearerToken,
TLSClientConfig: fromCfg.TLSClientConfig,
QPS: fromCfg.QPS,
Burst: fromCfg.Burst,
}
return rest.RESTClientFor(&cfg)
}
func (in *IstioClient) Stop() {
if in.k8sCache != nil {
in.k8sCache.Stop()
}
}

vendor/github.com/kiali/kiali/kubernetes/filters.go generated vendored Normal file

@@ -0,0 +1,70 @@
package kubernetes
import (
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
)
// FilterPodsForService returns the subset of the pod list matching the service's selector
func FilterPodsForService(s *v1.Service, allPods []v1.Pod) []v1.Pod {
if s == nil || allPods == nil {
return nil
}
serviceSelector := labels.Set(s.Spec.Selector).AsSelector()
pods := FilterPodsForSelector(serviceSelector, allPods)
return pods
}
func FilterPodsForSelector(selector labels.Selector, allPods []v1.Pod) []v1.Pod {
var pods []v1.Pod
for _, pod := range allPods {
if selector.Matches(labels.Set(pod.ObjectMeta.Labels)) {
pods = append(pods, pod)
}
}
return pods
}
// FilterPodsForEndpoints performs a second pass because the selector may return too many pods.
// This happens when a nil selector (such as the one of the default/kubernetes service) is used.
func FilterPodsForEndpoints(endpoints *v1.Endpoints, unfiltered []v1.Pod) []v1.Pod {
endpointPods := make(map[string]bool)
for _, subset := range endpoints.Subsets {
for _, address := range subset.Addresses {
if address.TargetRef != nil && address.TargetRef.Kind == "Pod" {
endpointPods[address.TargetRef.Name] = true
}
}
}
var pods []v1.Pod
for _, pod := range unfiltered {
if _, ok := endpointPods[pod.Name]; ok {
pods = append(pods, pod)
}
}
return pods
}
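The second pass above builds a set of pod names referenced by the endpoints, then keeps only matching pods. The same two-pass set-membership idea, stripped of the k8s types:

```go
package main

import "fmt"

// filterByNames keeps only the pods whose names appear in wanted,
// using a map as a set for O(1) membership checks.
func filterByNames(wanted []string, pods []string) []string {
	set := make(map[string]bool, len(wanted))
	for _, n := range wanted {
		set[n] = true
	}
	var out []string
	for _, p := range pods {
		if set[p] {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(filterByNames([]string{"a", "c"}, []string{"a", "b", "c", "d"})) // [a c]
}
```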
func FilterPodsForController(controllerName string, controllerType string, allPods []v1.Pod) []v1.Pod {
var pods []v1.Pod
for _, pod := range allPods {
for _, ref := range pod.OwnerReferences {
if ref.Controller != nil && *ref.Controller && ref.Name == controllerName && ref.Kind == controllerType {
pods = append(pods, pod)
break
}
}
}
return pods
}
func FilterServicesForSelector(selector labels.Selector, allServices []v1.Service) []v1.Service {
var services []v1.Service
for _, svc := range allServices {
if selector.Matches(labels.Set(svc.Spec.Selector)) {
services = append(services, svc)
}
}
return services
}


@@ -0,0 +1,728 @@
package kubernetes
import (
"fmt"
"regexp"
"strings"
"sync"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/log"
)
var portNameMatcher = regexp.MustCompile("^[\\-].*")
// GetIstioDetails returns Istio details for a given namespace,
// on this version it collects the VirtualServices and DestinationRules defined for a namespace.
// If serviceName param is provided, it filters all the Istio objects pointing to a particular service.
// It returns an error on any problem.
func (in *IstioClient) GetIstioDetails(namespace string, serviceName string) (*IstioDetails, error) {
wg := sync.WaitGroup{}
errChan := make(chan error, 4)
istioDetails := IstioDetails{}
vss := make([]IstioObject, 0)
drs := make([]IstioObject, 0)
gws := make([]IstioObject, 0)
ses := make([]IstioObject, 0)
wg.Add(4)
go fetchNoEntry(&ses, namespace, in.GetServiceEntries, &wg, errChan)
go fetchNoEntry(&gws, namespace, in.GetGateways, &wg, errChan)
go fetch(&vss, namespace, serviceName, in.GetVirtualServices, &wg, errChan)
go fetch(&drs, namespace, serviceName, in.GetDestinationRules, &wg, errChan)
wg.Wait()
if len(errChan) != 0 {
// We return first error only, likely to be the same issue for all
err := <-errChan
return nil, err
}
istioDetails.VirtualServices = vss
istioDetails.DestinationRules = drs
istioDetails.Gateways = gws
istioDetails.ServiceEntries = ses
return &istioDetails, nil
}
// CreateIstioObject creates an Istio object
func (in *IstioClient) CreateIstioObject(api, namespace, resourceType, json string) (IstioObject, error) {
var result runtime.Object
var err error
byteJson := []byte(json)
if api == ConfigGroupVersion.Group {
result, err = in.istioConfigApi.Post().Namespace(namespace).Resource(resourceType).Body(byteJson).Do().Get()
} else if api == NetworkingGroupVersion.Group {
result, err = in.istioNetworkingApi.Post().Namespace(namespace).Resource(resourceType).Body(byteJson).Do().Get()
} else if api == AuthenticationGroupVersion.Group {
result, err = in.istioAuthenticationApi.Post().Namespace(namespace).Resource(resourceType).Body(byteJson).Do().Get()
} else {
result, err = in.istioRbacApi.Post().Namespace(namespace).Resource(resourceType).Body(byteJson).Do().Get()
}
if err != nil {
return nil, err
}
istioObject, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return an IstioObject object", namespace, resourceType)
}
return istioObject, err
}
// DeleteIstioObject deletes an Istio object from either config api or networking api
func (in *IstioClient) DeleteIstioObject(api, namespace, resourceType, name string) error {
log.Debugf("DeleteIstioObject input: %s / %s / %s / %s", api, namespace, resourceType, name)
var err error
if api == ConfigGroupVersion.Group {
_, err = in.istioConfigApi.Delete().Namespace(namespace).Resource(resourceType).Name(name).Do().Get()
} else if api == NetworkingGroupVersion.Group {
_, err = in.istioNetworkingApi.Delete().Namespace(namespace).Resource(resourceType).Name(name).Do().Get()
} else if api == AuthenticationGroupVersion.Group {
_, err = in.istioAuthenticationApi.Delete().Namespace(namespace).Resource(resourceType).Name(name).Do().Get()
} else {
_, err = in.istioRbacApi.Delete().Namespace(namespace).Resource(resourceType).Name(name).Do().Get()
}
return err
}
// UpdateIstioObject updates an Istio object from either config api or networking api
func (in *IstioClient) UpdateIstioObject(api, namespace, resourceType, name, jsonPatch string) (IstioObject, error) {
log.Debugf("UpdateIstioObject input: %s / %s / %s / %s", api, namespace, resourceType, name)
var result runtime.Object
var err error
bytePatch := []byte(jsonPatch)
if api == ConfigGroupVersion.Group {
result, err = in.istioConfigApi.Patch(types.MergePatchType).Namespace(namespace).Resource(resourceType).SubResource(name).Body(bytePatch).Do().Get()
} else if api == NetworkingGroupVersion.Group {
result, err = in.istioNetworkingApi.Patch(types.MergePatchType).Namespace(namespace).Resource(resourceType).SubResource(name).Body(bytePatch).Do().Get()
} else if api == AuthenticationGroupVersion.Group {
result, err = in.istioAuthenticationApi.Patch(types.MergePatchType).Namespace(namespace).Resource(resourceType).SubResource(name).Body(bytePatch).Do().Get()
} else {
result, err = in.istioRbacApi.Patch(types.MergePatchType).Namespace(namespace).Resource(resourceType).SubResource(name).Body(bytePatch).Do().Get()
}
if err != nil {
return nil, err
}
istioObject, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return an IstioObject object", namespace, name)
}
return istioObject, err
}
// GetVirtualServices return all VirtualServices for a given namespace.
// If serviceName param is provided it will filter all VirtualServices having a host defined on a particular service.
// It returns an error on any problem.
func (in *IstioClient) GetVirtualServices(namespace string, serviceName string) ([]IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(virtualServices).Do().Get()
if err != nil {
return nil, err
}
virtualServiceList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a VirtualService list", namespace, serviceName)
}
virtualServices := make([]IstioObject, 0)
for _, virtualService := range virtualServiceList.GetItems() {
appendVirtualService := serviceName == ""
routeProtocols := []string{"http", "tcp"}
if !appendVirtualService && FilterByRoute(virtualService.GetSpec(), routeProtocols, serviceName, namespace, nil) {
appendVirtualService = true
}
if appendVirtualService {
virtualServices = append(virtualServices, virtualService.DeepCopyIstioObject())
}
}
return virtualServices, nil
}
func (in *IstioClient) GetVirtualService(namespace string, virtualservice string) (IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(virtualServices).SubResource(virtualservice).Do().Get()
if err != nil {
return nil, err
}
virtualService, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a VirtualService object", namespace, virtualservice)
}
return virtualService.DeepCopyIstioObject(), nil
}
// GetGateways return all Gateways for a given namespace.
// It returns an error on any problem.
func (in *IstioClient) GetGateways(namespace string) ([]IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(gateways).Do().Get()
if err != nil {
return nil, err
}
gatewayList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a Gateway list", namespace)
}
gateways := make([]IstioObject, 0)
for _, gateway := range gatewayList.GetItems() {
gateways = append(gateways, gateway.DeepCopyIstioObject())
}
return gateways, nil
}
func (in *IstioClient) GetGateway(namespace string, gateway string) (IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(gateways).SubResource(gateway).Do().Get()
if err != nil {
return nil, err
}
gatewayObject, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a Gateway object", namespace, gateway)
}
return gatewayObject.DeepCopyIstioObject(), nil
}
// GetServiceEntries return all ServiceEntry objects for a given namespace.
// It returns an error on any problem.
func (in *IstioClient) GetServiceEntries(namespace string) ([]IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(serviceentries).Do().Get()
if err != nil {
return nil, err
}
serviceEntriesList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a ServiceEntry list", namespace)
}
serviceEntries := make([]IstioObject, 0)
for _, serviceEntry := range serviceEntriesList.GetItems() {
serviceEntries = append(serviceEntries, serviceEntry.DeepCopyIstioObject())
}
return serviceEntries, nil
}
func (in *IstioClient) GetServiceEntry(namespace string, serviceEntryName string) (IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(serviceentries).SubResource(serviceEntryName).Do().Get()
if err != nil {
return nil, err
}
serviceEntry, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%v doesn't return a ServiceEntry object", namespace, serviceEntry)
}
return serviceEntry.DeepCopyIstioObject(), nil
}
// GetDestinationRules returns all DestinationRules for a given namespace.
// If serviceName param is provided it will filter all DestinationRules having a host defined on a particular service.
// It returns an error on any problem.
func (in *IstioClient) GetDestinationRules(namespace string, serviceName string) ([]IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(destinationRules).Do().Get()
if err != nil {
return nil, err
}
destinationRuleList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a DestinationRule list", namespace, serviceName)
}
destinationRules := make([]IstioObject, 0)
for _, destinationRule := range destinationRuleList.Items {
appendDestinationRule := serviceName == ""
if host, ok := destinationRule.Spec["host"]; ok {
if dHost, ok := host.(string); ok && FilterByHost(dHost, serviceName, namespace) {
appendDestinationRule = true
}
}
if appendDestinationRule {
destinationRules = append(destinationRules, destinationRule.DeepCopyIstioObject())
}
}
return destinationRules, nil
}
func (in *IstioClient) GetDestinationRule(namespace string, destinationrule string) (IstioObject, error) {
result, err := in.istioNetworkingApi.Get().Namespace(namespace).Resource(destinationRules).SubResource(destinationrule).Do().Get()
if err != nil {
return nil, err
}
destinationRule, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a DestinationRule object", namespace, destinationrule)
}
return destinationRule.DeepCopyIstioObject(), nil
}
// GetQuotaSpecs returns all QuotaSpecs objects for a given namespace.
// It returns an error on any problem.
func (in *IstioClient) GetQuotaSpecs(namespace string) ([]IstioObject, error) {
result, err := in.istioConfigApi.Get().Namespace(namespace).Resource(quotaspecs).Do().Get()
if err != nil {
return nil, err
}
quotaSpecList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a QuotaSpecList list", namespace)
}
quotaSpecs := make([]IstioObject, 0)
for _, qs := range quotaSpecList.GetItems() {
quotaSpecs = append(quotaSpecs, qs.DeepCopyIstioObject())
}
return quotaSpecs, nil
}
func (in *IstioClient) GetQuotaSpec(namespace string, quotaSpecName string) (IstioObject, error) {
result, err := in.istioConfigApi.Get().Namespace(namespace).Resource(quotaspecs).SubResource(quotaSpecName).Do().Get()
if err != nil {
return nil, err
}
quotaSpec, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a QuotaSpec object", namespace, quotaSpecName)
}
return quotaSpec.DeepCopyIstioObject(), nil
}
// GetQuotaSpecBindings returns all QuotaSpecBindings objects for a given namespace.
// It returns an error on any problem.
func (in *IstioClient) GetQuotaSpecBindings(namespace string) ([]IstioObject, error) {
result, err := in.istioConfigApi.Get().Namespace(namespace).Resource(quotaspecbindings).Do().Get()
if err != nil {
return nil, err
}
quotaSpecBindingList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a QuotaSpecBindingList list", namespace)
}
quotaSpecBindings := make([]IstioObject, 0)
for _, qs := range quotaSpecBindingList.GetItems() {
quotaSpecBindings = append(quotaSpecBindings, qs.DeepCopyIstioObject())
}
return quotaSpecBindings, nil
}
func (in *IstioClient) GetQuotaSpecBinding(namespace string, quotaSpecBindingName string) (IstioObject, error) {
result, err := in.istioConfigApi.Get().Namespace(namespace).Resource(quotaspecbindings).SubResource(quotaSpecBindingName).Do().Get()
if err != nil {
return nil, err
}
quotaSpecBinding, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a QuotaSpecBinding object", namespace, quotaSpecBindingName)
}
return quotaSpecBinding.DeepCopyIstioObject(), nil
}
func (in *IstioClient) GetPolicies(namespace string) ([]IstioObject, error) {
result, err := in.istioAuthenticationApi.Get().Namespace(namespace).Resource(policies).Do().Get()
if err != nil {
return nil, err
}
policyList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a PolicyList list", namespace)
}
policies := make([]IstioObject, 0)
for _, ps := range policyList.GetItems() {
policies = append(policies, ps.DeepCopyIstioObject())
}
return policies, nil
}
func (in *IstioClient) GetPolicy(namespace string, policyName string) (IstioObject, error) {
result, err := in.istioAuthenticationApi.Get().Namespace(namespace).Resource(policies).SubResource(policyName).Do().Get()
if err != nil {
return nil, err
}
policy, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a Policy object", namespace, policyName)
}
return policy.DeepCopyIstioObject(), nil
}
func (in *IstioClient) GetMeshPolicies(namespace string) ([]IstioObject, error) {
// MeshPolicies are not namespaced. However, the API returns all instances even when asking for a specific namespace.
// Due to soft-multitenancy, the call is namespaced anyway, to avoid triggering an error on cluster-wide access.
result, err := in.istioAuthenticationApi.Get().Namespace(namespace).Resource(meshPolicies).Do().Get()
if err != nil {
return nil, err
}
policyList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a MeshPolicyList list", namespace)
}
policies := make([]IstioObject, 0)
for _, ps := range policyList.GetItems() {
policies = append(policies, ps.DeepCopyIstioObject())
}
return policies, nil
}
func (in *IstioClient) GetMeshPolicy(namespace string, policyName string) (IstioObject, error) {
result, err := in.istioAuthenticationApi.Get().Namespace(namespace).Resource(meshPolicies).SubResource(policyName).Do().Get()
if err != nil {
return nil, err
}
mp, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a MeshPolicy object", namespace, policyName)
}
return mp.DeepCopyIstioObject(), nil
}
func (in *IstioClient) GetClusterRbacConfigs(namespace string) ([]IstioObject, error) {
result, err := in.istioRbacApi.Get().Namespace(namespace).Resource(clusterrbacconfigs).Do().Get()
if err != nil {
return nil, err
}
clusterRbacConfigList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a RbacConfigList list", namespace)
}
clusterRbacConfigs := make([]IstioObject, 0)
for _, crc := range clusterRbacConfigList.GetItems() {
clusterRbacConfigs = append(clusterRbacConfigs, crc.DeepCopyIstioObject())
}
return clusterRbacConfigs, nil
}
func (in *IstioClient) GetClusterRbacConfig(namespace string, name string) (IstioObject, error) {
result, err := in.istioRbacApi.Get().Namespace(namespace).Resource(clusterrbacconfigs).SubResource(name).Do().Get()
if err != nil {
return nil, err
}
clusterRbacConfig, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a ClusterRbacConfig object", namespace, name)
}
return clusterRbacConfig.DeepCopyIstioObject(), nil
}
func (in *IstioClient) GetServiceRoles(namespace string) ([]IstioObject, error) {
result, err := in.istioRbacApi.Get().Namespace(namespace).Resource(serviceroles).Do().Get()
if err != nil {
return nil, err
}
serviceRoleList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a ServiceRoleList list", namespace)
}
serviceRoles := make([]IstioObject, 0)
for _, sr := range serviceRoleList.GetItems() {
serviceRoles = append(serviceRoles, sr.DeepCopyIstioObject())
}
return serviceRoles, nil
}
func (in *IstioClient) GetServiceRole(namespace string, name string) (IstioObject, error) {
result, err := in.istioRbacApi.Get().Namespace(namespace).Resource(serviceroles).SubResource(name).Do().Get()
if err != nil {
return nil, err
}
serviceRole, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a ServiceRole object", namespace, name)
}
return serviceRole.DeepCopyIstioObject(), nil
}
func (in *IstioClient) GetServiceRoleBindings(namespace string) ([]IstioObject, error) {
result, err := in.istioRbacApi.Get().Namespace(namespace).Resource(servicerolebindings).Do().Get()
if err != nil {
return nil, err
}
serviceRoleBindingList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a ServiceRoleBindingList list", namespace)
}
serviceRoleBindings := make([]IstioObject, 0)
for _, sr := range serviceRoleBindingList.GetItems() {
serviceRoleBindings = append(serviceRoleBindings, sr.DeepCopyIstioObject())
}
return serviceRoleBindings, nil
}
func (in *IstioClient) GetServiceRoleBinding(namespace string, name string) (IstioObject, error) {
result, err := in.istioRbacApi.Get().Namespace(namespace).Resource(servicerolebindings).SubResource(name).Do().Get()
if err != nil {
return nil, err
}
serviceRoleBinding, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a ServiceRoleBinding object", namespace, name)
}
return serviceRoleBinding.DeepCopyIstioObject(), nil
}
func FilterByHost(host, serviceName, namespace string) bool {
// Check single name
if host == serviceName {
return true
}
// Check service.namespace
if host == fmt.Sprintf("%s.%s", serviceName, namespace) {
return true
}
// Check the FQDN. <service>.<namespace>.svc
if host == fmt.Sprintf("%s.%s.%s", serviceName, namespace, "svc") {
return true
}
// Check the FQDN. <service>.<namespace>.svc.<zone>
if host == fmt.Sprintf("%s.%s.%s", serviceName, namespace, config.Get().ExternalServices.Istio.IstioIdentityDomain) {
return true
}
// Note, FQDN names are defined from Kubernetes registry specification [1]
// [1] https://github.com/kubernetes/dns/blob/master/docs/specification.md
return false
}
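A standalone sketch of the four accepted host forms (the identity domain, normally read from `config`, is passed explicitly here; all names are illustrative):

```go
package main

import "fmt"

// filterByHost mirrors FilterByHost above, with the identity domain
// (e.g. "svc.cluster.local") passed in instead of read from config.
func filterByHost(host, serviceName, namespace, domain string) bool {
	return host == serviceName ||
		host == fmt.Sprintf("%s.%s", serviceName, namespace) ||
		host == fmt.Sprintf("%s.%s.svc", serviceName, namespace) ||
		host == fmt.Sprintf("%s.%s.%s", serviceName, namespace, domain)
}

func main() {
	domain := "svc.cluster.local"
	fmt.Println(filterByHost("reviews", "reviews", "bookinfo", domain))                            // true: short name
	fmt.Println(filterByHost("reviews.bookinfo.svc.cluster.local", "reviews", "bookinfo", domain)) // true: FQDN
	fmt.Println(filterByHost("ratings.bookinfo", "reviews", "bookinfo", domain))                   // false: other service
}
```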
func FilterByRoute(spec map[string]interface{}, protocols []string, service string, namespace string, serviceEntries map[string]struct{}) bool {
if len(protocols) == 0 {
return false
}
for _, protocol := range protocols {
if prot, ok := spec[protocol]; ok {
if aHttp, ok := prot.([]interface{}); ok {
for _, httpRoute := range aHttp {
if mHttpRoute, ok := httpRoute.(map[string]interface{}); ok {
if route, ok := mHttpRoute["route"]; ok {
if aDestinationWeight, ok := route.([]interface{}); ok {
for _, destination := range aDestinationWeight {
if mDestination, ok := destination.(map[string]interface{}); ok {
if destinationW, ok := mDestination["destination"]; ok {
if mDestinationW, ok := destinationW.(map[string]interface{}); ok {
if host, ok := mDestinationW["host"]; ok {
if sHost, ok := host.(string); ok {
if FilterByHost(sHost, service, namespace) {
return true
}
if serviceEntries != nil {
// We have ServiceEntry to check
if _, found := serviceEntries[strings.ToLower(protocol)+sHost]; found {
return true
}
}
}
}
}
}
}
}
}
}
}
}
}
}
}
return false
}
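The nested assertions above walk a VirtualService-style spec of roughly this shape; a flattened sketch that collects the destination hosts for one protocol (the helper name `routeHosts` is illustrative, not part of this package):

```go
package main

import "fmt"

// routeHosts collects destination hosts for one protocol key
// ("http"/"tcp"/"tls") from a VirtualService-style spec map,
// mirroring the traversal in FilterByRoute. Failed type
// assertions simply yield nil and skip that branch.
func routeHosts(spec map[string]interface{}, protocol string) []string {
	var hosts []string
	routes, _ := spec[protocol].([]interface{})
	for _, r := range routes {
		m, _ := r.(map[string]interface{})
		dests, _ := m["route"].([]interface{})
		for _, d := range dests {
			dm, _ := d.(map[string]interface{})
			dest, _ := dm["destination"].(map[string]interface{})
			if h, ok := dest["host"].(string); ok {
				hosts = append(hosts, h)
			}
		}
	}
	return hosts
}

func main() {
	spec := map[string]interface{}{
		"http": []interface{}{
			map[string]interface{}{
				"route": []interface{}{
					map[string]interface{}{
						"destination": map[string]interface{}{"host": "reviews"},
					},
				},
			},
		},
	}
	fmt.Println(routeHosts(spec, "http")) // [reviews]
}
```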
// ServiceEntryHostnames returns the hostnames defined in the ServiceEntry specs. The key of the resulting map is the hostname;
// the value is the list of VirtualService protocols mapped from its port definitions. Exported for tests.
func ServiceEntryHostnames(serviceEntries []IstioObject) map[string][]string {
hostnames := make(map[string][]string)
for _, v := range serviceEntries {
if hostsSpec, found := v.GetSpec()["hosts"]; found {
if hosts, ok := hostsSpec.([]interface{}); ok {
// Seek the protocol
for _, h := range hosts {
if hostname, ok := h.(string); ok {
hostnames[hostname] = make([]string, 0, 1)
}
}
}
}
if portsSpec, found := v.GetSpec()["ports"]; found {
if portsArray, ok := portsSpec.([]interface{}); ok {
for _, portDef := range portsArray {
if ports, ok := portDef.(map[string]interface{}); ok {
if proto, found := ports["protocol"]; found {
if protocol, ok := proto.(string); ok {
protocol = mapPortToVirtualServiceProtocol(protocol)
for host := range hostnames {
hostnames[host] = append(hostnames[host], protocol)
}
}
}
}
}
}
}
}
return hostnames
}
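As an illustration of the map built above, a standalone sketch over the raw spec structure (protocols are kept as declared here; the real function additionally maps them through `mapPortToVirtualServiceProtocol`; the helper name is illustrative):

```go
package main

import "fmt"

// hostnamesFromSpec collects hosts from a ServiceEntry-style spec map
// and tags each host with every declared port protocol.
func hostnamesFromSpec(spec map[string]interface{}) map[string][]string {
	out := make(map[string][]string)
	if hosts, ok := spec["hosts"].([]interface{}); ok {
		for _, h := range hosts {
			if s, ok := h.(string); ok {
				out[s] = make([]string, 0, 1)
			}
		}
	}
	if ports, ok := spec["ports"].([]interface{}); ok {
		for _, p := range ports {
			if port, ok := p.(map[string]interface{}); ok {
				if proto, ok := port["protocol"].(string); ok {
					for h := range out {
						out[h] = append(out[h], proto)
					}
				}
			}
		}
	}
	return out
}

func main() {
	spec := map[string]interface{}{
		"hosts": []interface{}{"api.example.com"},
		"ports": []interface{}{
			map[string]interface{}{"protocol": "HTTP"},
			map[string]interface{}{"protocol": "TLS"},
		},
	}
	fmt.Println(hostnamesFromSpec(spec)) // map[api.example.com:[HTTP TLS]]
}
```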
// mapPortToVirtualServiceProtocol transforms Istio's Port-definitions' protocol names to VirtualService's protocol names
func mapPortToVirtualServiceProtocol(proto string) string {
// http: HTTP/HTTP2/GRPC/ TLS-terminated-HTTPS and service entry ports using HTTP/HTTP2/GRPC protocol
// tls: HTTPS/TLS protocols (i.e. with “passthrough” TLS mode) and service entry ports using HTTPS/TLS protocols.
// tcp: everything else
switch proto {
case "HTTP":
fallthrough
case "HTTP2":
fallthrough
case "GRPC":
return "http"
case "HTTPS":
fallthrough
case "TLS":
return "tls"
default:
return "tcp"
}
}
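For instance, the switch above collapses Istio port protocols into the three VirtualService route types:

```go
package main

import "fmt"

// mapProto reproduces the protocol mapping above.
func mapProto(proto string) string {
	switch proto {
	case "HTTP", "HTTP2", "GRPC":
		return "http" // HTTP-family and TLS-terminated HTTPS routes
	case "HTTPS", "TLS":
		return "tls" // passthrough TLS routes
	default:
		return "tcp" // everything else
	}
}

func main() {
	for _, p := range []string{"GRPC", "TLS", "Mongo"} {
		fmt.Printf("%s -> %s\n", p, mapProto(p))
	}
}
```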
// ValidatePort parses the Istio Port definition and validates the port name against the Istio naming scheme
func ValidatePort(portDef interface{}) bool {
return matchPortNameRule(parsePort(portDef))
}
func parsePort(portDef interface{}) (string, string) {
var name, proto string
if port, ok := portDef.(map[string]interface{}); ok {
if portNameDef, found := port["name"]; found {
if portName, ok := portNameDef.(string); ok {
name = portName
}
}
if protocolDef, found := port["protocol"]; found {
if protocol, ok := protocolDef.(string); ok {
proto = protocol
}
}
}
return name, proto
}
func matchPortNameRule(portName, protocol string) bool {
protocol = strings.ToLower(protocol)
// Check that portName begins with the protocol
if protocol == "tcp" || protocol == "udp" {
// TCP and UDP protocols do not care about the name
return true
}
if !strings.HasPrefix(portName, protocol) {
return false
}
// If longer than the protocol name, the port name must adhere to <protocol>[-suffix]:
// anything following the protocol must be a "-" plus a non-empty suffix.
if len(portName) > len(protocol) {
restPortName := portName[len(protocol):]
return portNameMatcher.MatchString(restPortName)
}
// Case portName == protocolName
return true
}
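A standalone sketch of the naming rule (the real `portNameMatcher` is defined elsewhere in this package; the regular expression used here is an assumption for illustration):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// portNameMatcher is an assumed stand-in: a "-" followed by a non-empty suffix.
var portNameMatcher = regexp.MustCompile(`^-[a-z0-9-]+$`)

// matchPortNameRule mirrors the check above: TCP/UDP names are free-form,
// other protocols require <protocol>[-suffix].
func matchPortNameRule(portName, protocol string) bool {
	protocol = strings.ToLower(protocol)
	if protocol == "tcp" || protocol == "udp" {
		return true
	}
	if !strings.HasPrefix(portName, protocol) {
		return false
	}
	if len(portName) > len(protocol) {
		return portNameMatcher.MatchString(portName[len(protocol):])
	}
	return true
}

func main() {
	fmt.Println(matchPortNameRule("http-web", "HTTP"))  // true
	fmt.Println(matchPortNameRule("http2foo", "HTTP2")) // false: suffix needs "-"
	fmt.Println(matchPortNameRule("anything", "TCP"))   // true
}
```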
// GatewayNames extracts the gateway names for easier matching
func GatewayNames(gateways [][]IstioObject) map[string]struct{} {
var empty struct{}
names := make(map[string]struct{})
for _, ns := range gateways {
for _, gw := range ns {
gw := gw
clusterName := gw.GetObjectMeta().ClusterName
if clusterName == "" {
clusterName = config.Get().ExternalServices.Istio.IstioIdentityDomain
}
names[ParseHost(gw.GetObjectMeta().Name, gw.GetObjectMeta().Namespace, clusterName).String()] = empty
}
}
return names
}
// ValidateVirtualServiceGateways checks every gateway referenced by a VirtualService (except the reserved word "mesh") against the
// given set of gatewayNames. It also returns the index of the first missing gateway, to show a clearer error path in the editor.
func ValidateVirtualServiceGateways(spec map[string]interface{}, gatewayNames map[string]struct{}, namespace, clusterName string) (bool, int) {
if gatewaysSpec, found := spec["gateways"]; found {
if gateways, ok := gatewaysSpec.([]interface{}); ok {
for index, g := range gateways {
if gate, ok := g.(string); ok {
if gate == "mesh" {
return true, -1
}
hostname := ParseHost(gate, namespace, clusterName).String()
for gw := range gatewayNames {
if found := FilterByHost(hostname, gw, namespace); found {
return true, -1
}
}
return false, index
}
}
}
}
// No gateways defined or all found. Return -1 indicates no missing gateway
return true, -1
}
func fetch(rValue *[]IstioObject, namespace string, service string, fetcher func(string, string) ([]IstioObject, error), wg *sync.WaitGroup, errChan chan error) {
defer wg.Done()
fetched, err := fetcher(namespace, service)
*rValue = append(*rValue, fetched...)
// Only report the first error; a second send on a full buffered channel would block this goroutine
if err != nil && len(errChan) == 0 {
errChan <- err
}
}
// Identical to fetch above, but the k8s layer exposes both (namespace, serviceentry) and (namespace)-only queries, so we need two different functions
func fetchNoEntry(rValue *[]IstioObject, namespace string, fetcher func(string) ([]IstioObject, error), wg *sync.WaitGroup, errChan chan error) {
defer wg.Done()
fetched, err := fetcher(namespace)
*rValue = append(*rValue, fetched...)
if err != nil && len(errChan) == 0 {
errChan <- err
}
}


@@ -0,0 +1,137 @@
package kubernetes
import (
"fmt"
"github.com/kiali/kiali/log"
)
// GetIstioRules returns a list of mixer rules for a given namespace.
func (in *IstioClient) GetIstioRules(namespace string) ([]IstioObject, error) {
result, err := in.istioConfigApi.Get().Namespace(namespace).Resource(rules).Do().Get()
if err != nil {
return nil, err
}
ruleList, ok := result.(*GenericIstioObjectList)
if !ok {
return nil, fmt.Errorf("%s doesn't return a rules list", namespace)
}
istioRules := make([]IstioObject, 0)
for _, rule := range ruleList.Items {
istioRules = append(istioRules, rule.DeepCopyIstioObject())
}
return istioRules, nil
}
func (in *IstioClient) GetAdapters(namespace string) ([]IstioObject, error) {
return in.getAdaptersTemplates(namespace, "adapter", adapterPlurals)
}
func (in *IstioClient) GetTemplates(namespace string) ([]IstioObject, error) {
return in.getAdaptersTemplates(namespace, "template", templatePlurals)
}
func (in *IstioClient) GetIstioRule(namespace string, istiorule string) (IstioObject, error) {
result, err := in.istioConfigApi.Get().Namespace(namespace).Resource(rules).SubResource(istiorule).Do().Get()
if err != nil {
return nil, err
}
mRule, ok := result.(*GenericIstioObject)
if !ok {
return nil, fmt.Errorf("%s/%s doesn't return a Rule", namespace, istiorule)
}
return mRule.DeepCopyIstioObject(), nil
}
func (in *IstioClient) GetAdapter(namespace, adapterType, adapterName string) (IstioObject, error) {
return in.getAdapterTemplate(namespace, "adapter", adapterType, adapterName, adapterPlurals)
}
func (in *IstioClient) GetTemplate(namespace, templateType, templateName string) (IstioObject, error) {
return in.getAdapterTemplate(namespace, "template", templateType, templateName, templatePlurals)
}
func (in *IstioClient) getAdaptersTemplates(namespace string, itemType string, pluralsMap map[string]string) ([]IstioObject, error) {
resultsChan := make(chan istioResponse)
for name, plural := range pluralsMap {
go func(name, plural string) {
results, err := in.istioConfigApi.Get().Namespace(namespace).Resource(plural).Do().Get()
istioObjects := istioResponse{}
if err == nil {
if resultList, ok := results.(*GenericIstioObjectList); ok {
istioObjects.results = make([]IstioObject, 0)
for _, result := range resultList.Items {
adapter := result.DeepCopyIstioObject()
// We need to specifically add the adapter/template name in the labels
if adapter.GetObjectMeta().Labels == nil {
objectMeta := adapter.GetObjectMeta()
objectMeta.Labels = make(map[string]string)
adapter.SetObjectMeta(objectMeta)
}
// Singular (adapter/template) plus plural (adapters/templates) names
adapter.GetObjectMeta().Labels[itemType] = name
adapter.GetObjectMeta().Labels[itemType+"s"] = plural
istioObjects.results = append(istioObjects.results, adapter)
}
} else {
err = fmt.Errorf("%s doesn't return a %s list", namespace, plural)
}
}
if err != nil {
istioObjects.results = nil
istioObjects.err = err
}
resultsChan <- istioObjects
}(name, plural)
}
results := make([]IstioObject, 0)
for i := 0; i < len(pluralsMap); i++ {
adapterTemplate := <-resultsChan
if adapterTemplate.err == nil {
results = append(results, adapterTemplate.results...)
} else {
log.Warningf("Querying %s got an error: %s", itemType, adapterTemplate.err)
}
}
return results, nil
}
func (in *IstioClient) getAdapterTemplate(namespace string, itemType string, itemSubtype, itemName string, pluralsMap map[string]string) (IstioObject, error) {
ok := false
subtype := ""
for key, plural := range pluralsMap {
if itemSubtype == plural {
subtype = key
ok = true
break
}
}
if !ok {
return nil, fmt.Errorf("%s is not supported", itemSubtype)
}
result, err := in.istioConfigApi.Get().Namespace(namespace).Resource(itemSubtype).SubResource(itemName).Do().Get()
istioObject, ok := result.(IstioObject)
if !ok {
istioObject = nil
if err == nil {
err = fmt.Errorf("%s/%s doesn't return a valid IstioObject", itemSubtype, itemName)
}
}
if err != nil {
return nil, err
}
if istioObject.GetObjectMeta().Labels == nil {
objectMeta := istioObject.GetObjectMeta()
objectMeta.Labels = make(map[string]string)
istioObject.SetObjectMeta(objectMeta)
}
// Adding the singular name of the adapter/template to propagate it into the Kiali model
istioObject.GetObjectMeta().Labels[itemType] = subtype
istioObject.GetObjectMeta().Labels[itemType+"s"] = itemSubtype
return istioObject, nil
}


@@ -0,0 +1,55 @@
package kubernetes
import (
"encoding/json"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/rest"
)
// KialiMonitoringInterface for mocks (only mocked function are necessary here)
type KialiMonitoringInterface interface {
GetDashboard(namespace string, name string) (*MonitoringDashboard, error)
}
// KialiMonitoringClient is the client struct for Kiali Monitoring API over Kubernetes
// API to get MonitoringDashboards
type KialiMonitoringClient struct {
KialiMonitoringInterface
client *rest.RESTClient
}
// NewKialiMonitoringClient creates a new client able to fetch Kiali Monitoring API.
func NewKialiMonitoringClient() (*KialiMonitoringClient, error) {
config, err := ConfigClient()
if err != nil {
return nil, err
}
types := runtime.NewScheme()
schemeBuilder := runtime.NewSchemeBuilder(
func(scheme *runtime.Scheme) error {
return nil
})
err = schemeBuilder.AddToScheme(types)
if err != nil {
return nil, err
}
client, err := newClientForAPI(config, kialiMonitoringGroupVersion, types)
if err != nil {
return nil, err
}
return &KialiMonitoringClient{
client: client,
}, nil
}
// GetDashboard returns a MonitoringDashboard for the given name
func (in *KialiMonitoringClient) GetDashboard(namespace string, name string) (*MonitoringDashboard, error) {
result, err := in.client.Get().Namespace(namespace).Resource("monitoringdashboards").SubResource(name).Do().Raw()
if err != nil {
return nil, err
}
var dashboard MonitoringDashboard
err = json.Unmarshal(result, &dashboard)
return &dashboard, err
}


@@ -0,0 +1,44 @@
package kubernetes
import (
"k8s.io/apimachinery/pkg/runtime/schema"
)
const (
// Raw constant for DataType
Raw = "raw"
// Rate constant for DataType
Rate = "rate"
// Histogram constant for DataType
Histogram = "histogram"
)
var kialiMonitoringGroupVersion = schema.GroupVersion{
Group: "monitoring.kiali.io",
Version: "v1alpha1",
}
type MonitoringDashboard struct {
Metadata map[string]interface{}
Spec MonitoringDashboardSpec
}
type MonitoringDashboardSpec struct {
Title string
Charts []MonitoringDashboardChart
}
type MonitoringDashboardChart struct {
Name string
Unit string
Spans int
MetricName string
DataType string // MetricType is either "raw", "rate" or "histogram"
Aggregator string // Aggregator can be set for raw data. Ex: "sum", "avg". See https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators
Aggregations []MonitoringDashboardAggregation
}
type MonitoringDashboardAggregation struct {
Label string
DisplayName string
}


@@ -0,0 +1,305 @@
package kubernetes
import (
"k8s.io/api/apps/v1beta1"
"k8s.io/api/apps/v1beta2"
auth_v1 "k8s.io/api/authorization/v1"
batch_v1 "k8s.io/api/batch/v1"
batch_v1beta1 "k8s.io/api/batch/v1beta1"
"k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
osappsv1 "github.com/openshift/api/apps/v1"
osv1 "github.com/openshift/api/project/v1"
)
// GetNamespace fetches and returns the specified namespace definition
// from the cluster
func (in *IstioClient) GetNamespace(namespace string) (*v1.Namespace, error) {
ns, err := in.k8s.CoreV1().Namespaces().Get(namespace, emptyGetOptions)
if err != nil {
return &v1.Namespace{}, err
}
return ns, nil
}
// GetNamespaces returns a list of all namespaces of the cluster.
// It returns an error on any problem.
func (in *IstioClient) GetNamespaces() ([]v1.Namespace, error) {
namespaces, err := in.k8s.CoreV1().Namespaces().List(emptyListOptions)
if err != nil {
return nil, err
}
return namespaces.Items, nil
}
// GetProject fetches and returns the definition of the project with
// the specified name by querying the cluster API. GetProject will fail
// if the underlying cluster is not Openshift.
func (in *IstioClient) GetProject(name string) (*osv1.Project, error) {
result := &osv1.Project{}
err := in.k8s.RESTClient().Get().Prefix("apis", "project.openshift.io", "v1", "projects", name).Do().Into(result)
if err != nil {
return nil, err
}
return result, nil
}
func (in *IstioClient) GetProjects() ([]osv1.Project, error) {
result := &osv1.ProjectList{}
err := in.k8s.RESTClient().Get().Prefix("apis", "project.openshift.io", "v1", "projects").Do().Into(result)
if err != nil {
return nil, err
}
return result.Items, nil
}
func (in *IstioClient) IsOpenShift() bool {
if in.isOpenShift == nil {
isOpenShift := false
_, err := in.k8s.RESTClient().Get().AbsPath("/apis/project.openshift.io").Do().Raw()
if err == nil {
isOpenShift = true
}
in.isOpenShift = &isOpenShift
}
return *in.isOpenShift
}
// GetServices returns a list of services for a given namespace.
// If selectorLabels is defined the list of services is filtered for those that matches Services selector labels.
// It returns an error on any problem.
func (in *IstioClient) GetServices(namespace string, selectorLabels map[string]string) ([]v1.Service, error) {
var allServices []v1.Service
var err error
if in.k8sCache != nil {
allServices, err = in.k8sCache.GetServices(namespace)
} else {
// Assign to the outer err; a shadowed err here would silently swallow list failures
var allServicesList *v1.ServiceList
allServicesList, err = in.k8s.CoreV1().Services(namespace).List(emptyListOptions)
if err == nil {
allServices = allServicesList.Items
}
}
if err != nil {
return []v1.Service{}, err
}
if selectorLabels == nil {
return allServices, nil
}
var services []v1.Service
for _, svc := range allServices {
svcSelector := labels.Set(svc.Spec.Selector).AsSelector()
if svcSelector.Matches(labels.Set(selectorLabels)) {
services = append(services, svc)
}
}
return services, nil
}
// GetDeployment returns the definition of a specific deployment.
// It returns an error on any problem.
func (in *IstioClient) GetDeployment(namespace, deploymentName string) (*v1beta1.Deployment, error) {
if in.k8sCache != nil {
return in.k8sCache.GetDeployment(namespace, deploymentName)
}
return in.k8s.AppsV1beta1().Deployments(namespace).Get(deploymentName, emptyGetOptions)
}
// GetDeployments returns an array of deployments for a given namespace and a set of labels.
// It returns an error on any problem.
func (in *IstioClient) GetDeployments(namespace string) ([]v1beta1.Deployment, error) {
if in.k8sCache != nil {
return in.k8sCache.GetDeployments(namespace)
}
if depList, err := in.k8s.AppsV1beta1().Deployments(namespace).List(emptyListOptions); err == nil {
return depList.Items, nil
} else {
return []v1beta1.Deployment{}, err
}
}
// GetDeploymentConfig returns the definition of a specific deployment config.
// It returns an error on any problem.
func (in *IstioClient) GetDeploymentConfig(namespace, deploymentconfigName string) (*osappsv1.DeploymentConfig, error) {
result := &osappsv1.DeploymentConfig{}
err := in.k8s.RESTClient().Get().Prefix("apis", "apps.openshift.io", "v1").Namespace(namespace).Resource("deploymentconfigs").SubResource(deploymentconfigName).Do().Into(result)
if err != nil {
return nil, err
}
return result, nil
}
// GetDeploymentConfigs returns an array of deployment configs for a given namespace.
// It returns an error on any problem.
func (in *IstioClient) GetDeploymentConfigs(namespace string) ([]osappsv1.DeploymentConfig, error) {
result := &osappsv1.DeploymentConfigList{}
err := in.k8s.RESTClient().Get().Prefix("apis", "apps.openshift.io", "v1").Namespace(namespace).Resource("deploymentconfigs").Do().Into(result)
if err != nil {
return nil, err
}
return result.Items, nil
}
func (in *IstioClient) GetReplicaSets(namespace string) ([]v1beta2.ReplicaSet, error) {
if in.k8sCache != nil {
return in.k8sCache.GetReplicaSets(namespace)
}
if rsList, err := in.k8s.AppsV1beta2().ReplicaSets(namespace).List(emptyListOptions); err == nil {
return rsList.Items, nil
} else {
return []v1beta2.ReplicaSet{}, err
}
}
func (in *IstioClient) GetStatefulSet(namespace string, statefulsetName string) (*v1beta2.StatefulSet, error) {
if in.k8sCache != nil {
return in.k8sCache.GetStatefulSet(namespace, statefulsetName)
}
return in.k8s.AppsV1beta2().StatefulSets(namespace).Get(statefulsetName, emptyGetOptions)
}
func (in *IstioClient) GetStatefulSets(namespace string) ([]v1beta2.StatefulSet, error) {
if in.k8sCache != nil {
return in.k8sCache.GetStatefulSets(namespace)
}
if ssList, err := in.k8s.AppsV1beta2().StatefulSets(namespace).List(emptyListOptions); err == nil {
return ssList.Items, nil
} else {
return []v1beta2.StatefulSet{}, err
}
}
func (in *IstioClient) GetReplicationControllers(namespace string) ([]v1.ReplicationController, error) {
if in.k8sCache != nil {
return in.k8sCache.GetReplicationControllers(namespace)
}
if rcList, err := in.k8s.CoreV1().ReplicationControllers(namespace).List(emptyListOptions); err == nil {
return rcList.Items, nil
} else {
return []v1.ReplicationController{}, err
}
}
// GetService returns the definition of a specific service.
// It returns an error on any problem.
func (in *IstioClient) GetService(namespace, serviceName string) (*v1.Service, error) {
if in.k8sCache != nil {
return in.k8sCache.GetService(namespace, serviceName)
}
return in.k8s.CoreV1().Services(namespace).Get(serviceName, emptyGetOptions)
}
// GetEndpoints return the list of endpoint of a specific service.
// It returns an error on any problem.
func (in *IstioClient) GetEndpoints(namespace, serviceName string) (*v1.Endpoints, error) {
if in.k8sCache != nil {
return in.k8sCache.GetEndpoints(namespace, serviceName)
}
return in.k8s.CoreV1().Endpoints(namespace).Get(serviceName, emptyGetOptions)
}
// GetPods returns the pod definitions for a given set of labels.
// An empty labelSelector will fetch all pods in a namespace.
// It returns an error on any problem.
func (in *IstioClient) GetPods(namespace, labelSelector string) ([]v1.Pod, error) {
if in.k8sCache != nil {
pods, err := in.k8sCache.GetPods(namespace)
if err != nil {
return []v1.Pod{}, err
}
if labelSelector != "" {
selector, err := labels.Parse(labelSelector)
if err != nil {
return []v1.Pod{}, err
}
pods = FilterPodsForSelector(selector, pods)
}
return pods, nil
}
// An empty selector is ambiguous in the go client, could mean either "select all" or "select none"
// Here we assume empty == select all
// (see also https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors)
if pods, err := in.k8s.CoreV1().Pods(namespace).List(meta_v1.ListOptions{LabelSelector: labelSelector}); err == nil {
return pods.Items, nil
} else {
return []v1.Pod{}, err
}
}
func (in *IstioClient) GetCronJobs(namespace string) ([]batch_v1beta1.CronJob, error) {
if in.k8sCache != nil {
return in.k8sCache.GetCronJobs(namespace)
}
if cjList, err := in.k8s.BatchV1beta1().CronJobs(namespace).List(emptyListOptions); err == nil {
return cjList.Items, nil
} else {
return []batch_v1beta1.CronJob{}, err
}
}
func (in *IstioClient) GetJobs(namespace string) ([]batch_v1.Job, error) {
if in.k8sCache != nil {
return in.k8sCache.GetJobs(namespace)
}
if jList, err := in.k8s.BatchV1().Jobs(namespace).List(emptyListOptions); err == nil {
return jList.Items, nil
} else {
return []batch_v1.Job{}, err
}
}
// NewNotFound is a helper method to create a NotFound error similar to the one used by the kubernetes client.
// This method helps upper layers send an explicit NotFound error without querying the backend.
func NewNotFound(name, group, resource string) error {
return errors.NewNotFound(schema.GroupResource{Group: group, Resource: resource}, name)
}
// GetSelfSubjectAccessReview provides information on Kiali permissions
func (in *IstioClient) GetSelfSubjectAccessReview(namespace, api, resourceType string, verbs []string) ([]*auth_v1.SelfSubjectAccessReview, error) {
calls := len(verbs)
ch := make(chan *auth_v1.SelfSubjectAccessReview, calls)
errChan := make(chan error)
for _, v := range verbs {
go func(verb string) {
res, err := in.k8s.AuthorizationV1().SelfSubjectAccessReviews().Create(&auth_v1.SelfSubjectAccessReview{
Spec: auth_v1.SelfSubjectAccessReviewSpec{
ResourceAttributes: &auth_v1.ResourceAttributes{
Namespace: namespace,
Verb: verb,
Group: api,
Resource: resourceType,
},
},
})
if err != nil {
errChan <- err
} else {
ch <- res
}
}(v)
}
var err error
result := []*auth_v1.SelfSubjectAccessReview{}
for count := 0; count < calls; count++ {
select {
case res := <-ch:
result = append(result, res)
case err = <-errChan:
// No op
}
}
return result, err
}
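The per-verb fan-out/fan-in above can be sketched independently of the Kubernetes client (the `check` callback and the `checkVerbs` helper are illustrative stand-ins for the SelfSubjectAccessReview call):

```go
package main

import "fmt"

// checkVerbs launches one goroutine per verb and collects either a result
// or an error per call, mirroring the channel pattern above.
func checkVerbs(verbs []string, check func(verb string) (bool, error)) (map[string]bool, error) {
	type res struct {
		verb    string
		allowed bool
	}
	ch := make(chan res, len(verbs))
	errChan := make(chan error, len(verbs)) // buffered: senders never block
	for _, v := range verbs {
		go func(verb string) {
			allowed, err := check(verb)
			if err != nil {
				errChan <- err
				return
			}
			ch <- res{verb, allowed}
		}(v)
	}
	out := make(map[string]bool)
	var err error
	// Exactly one message arrives per verb, on one channel or the other.
	for i := 0; i < len(verbs); i++ {
		select {
		case r := <-ch:
			out[r.verb] = r.allowed
		case e := <-errChan:
			err = e
		}
	}
	return out, err
}

func main() {
	res, err := checkVerbs([]string{"get", "list"}, func(v string) (bool, error) { return v == "get", nil })
	fmt.Println(res["get"], res["list"], err) // true false <nil>
}
```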


@@ -0,0 +1,442 @@
package kubetest
import (
"fmt"
osappsv1 "github.com/openshift/api/apps/v1"
osv1 "github.com/openshift/api/project/v1"
"github.com/stretchr/testify/mock"
"k8s.io/api/apps/v1beta1"
"k8s.io/api/apps/v1beta2"
auth_v1 "k8s.io/api/authorization/v1"
batch_v1 "k8s.io/api/batch/v1"
batch_v1beta1 "k8s.io/api/batch/v1beta1"
v1 "k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type K8SClientMock struct {
mock.Mock
}
// Constructor
func NewK8SClientMock() *K8SClientMock {
k8s := new(K8SClientMock)
k8s.On("IsOpenShift").Return(true)
return k8s
}
// Business methods
// MockEmptyWorkloads sets up the current mock to return empty workloads for every workload type (deployment, dc, rs, jobs, etc.)
func (o *K8SClientMock) MockEmptyWorkloads(namespace interface{}) {
o.On("GetDeployments", namespace).Return([]v1beta1.Deployment{}, nil)
o.On("GetReplicaSets", namespace).Return([]v1beta2.ReplicaSet{}, nil)
o.On("GetReplicationControllers", namespace).Return([]v1.ReplicationController{}, nil)
o.On("GetDeploymentConfigs", namespace).Return([]osappsv1.DeploymentConfig{}, nil)
o.On("GetStatefulSets", namespace).Return([]v1beta2.StatefulSet{}, nil)
o.On("GetJobs", namespace).Return([]batch_v1.Job{}, nil)
o.On("GetCronJobs", namespace).Return([]batch_v1beta1.CronJob{}, nil)
}
// MockEmptyWorkload sets up the current mock to return a not-found result for the given workload across every workload type (deployment, dc, rs, jobs, etc.)
func (o *K8SClientMock) MockEmptyWorkload(namespace interface{}, workload interface{}) {
notfound := fmt.Errorf("not found")
o.On("GetDeployment", namespace, workload).Return(&v1beta1.Deployment{}, notfound)
o.On("GetStatefulSet", namespace, workload).Return(&v1beta2.StatefulSet{}, notfound)
o.On("GetDeploymentConfig", namespace, workload).Return(&osappsv1.DeploymentConfig{}, notfound)
o.On("GetReplicaSets", namespace).Return([]v1beta2.ReplicaSet{}, nil)
o.On("GetReplicationControllers", namespace).Return([]v1.ReplicationController{}, nil)
o.On("GetJobs", namespace).Return([]batch_v1.Job{}, nil)
o.On("GetCronJobs", namespace).Return([]batch_v1beta1.CronJob{}, nil)
}
func (o *K8SClientMock) CreateIstioObject(api, namespace, resourceType, json string) (kubernetes.IstioObject, error) {
args := o.Called(api, namespace, resourceType, json)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) DeleteIstioObject(api, namespace, objectType, objectName string) error {
args := o.Called(api, namespace, objectType, objectName)
return args.Error(0)
}
func (o *K8SClientMock) GetAdapter(namespace, adapterType, adapterName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, adapterType, adapterName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetAdapters(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetCronJobs(namespace string) ([]batch_v1beta1.CronJob, error) {
args := o.Called(namespace)
return args.Get(0).([]batch_v1beta1.CronJob), args.Error(1)
}
func (o *K8SClientMock) GetDeployment(namespace string, deploymentName string) (*v1beta1.Deployment, error) {
args := o.Called(namespace, deploymentName)
return args.Get(0).(*v1beta1.Deployment), args.Error(1)
}
func (o *K8SClientMock) GetDeployments(namespace string) ([]v1beta1.Deployment, error) {
args := o.Called(namespace)
return args.Get(0).([]v1beta1.Deployment), args.Error(1)
}
func (o *K8SClientMock) GetDeploymentConfig(namespace string, deploymentName string) (*osappsv1.DeploymentConfig, error) {
args := o.Called(namespace, deploymentName)
return args.Get(0).(*osappsv1.DeploymentConfig), args.Error(1)
}
func (o *K8SClientMock) GetDeploymentConfigs(namespace string) ([]osappsv1.DeploymentConfig, error) {
args := o.Called(namespace)
return args.Get(0).([]osappsv1.DeploymentConfig), args.Error(1)
}
func (o *K8SClientMock) GetDestinationRules(namespace string, serviceName string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace, serviceName)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetDestinationRule(namespace string, destinationrule string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, destinationrule)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetEndpoints(namespace string, serviceName string) (*v1.Endpoints, error) {
args := o.Called(namespace, serviceName)
return args.Get(0).(*v1.Endpoints), args.Error(1)
}
func (o *K8SClientMock) GetGateways(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetGateway(namespace string, gateway string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, gateway)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetIstioDetails(namespace string, serviceName string) (*kubernetes.IstioDetails, error) {
args := o.Called(namespace, serviceName)
return args.Get(0).(*kubernetes.IstioDetails), args.Error(1)
}
func (o *K8SClientMock) GetIstioRule(namespace string, istiorule string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, istiorule)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetIstioRules(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetJobs(namespace string) ([]batch_v1.Job, error) {
args := o.Called(namespace)
return args.Get(0).([]batch_v1.Job), args.Error(1)
}
func (o *K8SClientMock) GetNamespace(namespace string) (*v1.Namespace, error) {
args := o.Called(namespace)
return args.Get(0).(*v1.Namespace), args.Error(1)
}
func (o *K8SClientMock) GetNamespaces() ([]v1.Namespace, error) {
args := o.Called()
return args.Get(0).([]v1.Namespace), args.Error(1)
}
func (o *K8SClientMock) GetPods(namespace, labelSelector string) ([]v1.Pod, error) {
args := o.Called(namespace, labelSelector)
return args.Get(0).([]v1.Pod), args.Error(1)
}
func (o *K8SClientMock) GetProject(project string) (*osv1.Project, error) {
args := o.Called(project)
return args.Get(0).(*osv1.Project), args.Error(1)
}
func (o *K8SClientMock) GetProjects() ([]osv1.Project, error) {
args := o.Called()
return args.Get(0).([]osv1.Project), args.Error(1)
}
func (o *K8SClientMock) GetQuotaSpec(namespace string, quotaSpecName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, quotaSpecName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetQuotaSpecs(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetQuotaSpecBinding(namespace string, quotaSpecBindingName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, quotaSpecBindingName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetQuotaSpecBindings(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetReplicationControllers(namespace string) ([]v1.ReplicationController, error) {
args := o.Called(namespace)
return args.Get(0).([]v1.ReplicationController), args.Error(1)
}
func (o *K8SClientMock) GetReplicaSets(namespace string) ([]v1beta2.ReplicaSet, error) {
args := o.Called(namespace)
return args.Get(0).([]v1beta2.ReplicaSet), args.Error(1)
}
func (o *K8SClientMock) GetSelfSubjectAccessReview(namespace, api, resourceType string, verbs []string) ([]*auth_v1.SelfSubjectAccessReview, error) {
args := o.Called(namespace, api, resourceType, verbs)
return args.Get(0).([]*auth_v1.SelfSubjectAccessReview), args.Error(1)
}
func (o *K8SClientMock) GetService(namespace string, serviceName string) (*v1.Service, error) {
args := o.Called(namespace, serviceName)
return args.Get(0).(*v1.Service), args.Error(1)
}
func (o *K8SClientMock) GetServices(namespace string, selectorLabels map[string]string) ([]v1.Service, error) {
args := o.Called(namespace, selectorLabels)
return args.Get(0).([]v1.Service), args.Error(1)
}
func (o *K8SClientMock) GetServiceEntries(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetServiceEntry(namespace string, serviceEntryName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, serviceEntryName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetStatefulSet(namespace string, statefulsetName string) (*v1beta2.StatefulSet, error) {
args := o.Called(namespace, statefulsetName)
return args.Get(0).(*v1beta2.StatefulSet), args.Error(1)
}
func (o *K8SClientMock) GetStatefulSets(namespace string) ([]v1beta2.StatefulSet, error) {
args := o.Called(namespace)
return args.Get(0).([]v1beta2.StatefulSet), args.Error(1)
}
func (o *K8SClientMock) GetTemplate(namespace, templateType, templateName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, templateType, templateName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetTemplates(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetVirtualServices(namespace string, serviceName string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace, serviceName)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetVirtualService(namespace string, virtualservice string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, virtualservice)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetPolicies(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetPolicy(namespace string, policyName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, policyName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetMeshPolicies(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetMeshPolicy(namespace string, policyName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, policyName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetClusterRbacConfigs(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetClusterRbacConfig(namespace string, policyName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, policyName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetServiceRoles(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetServiceRole(namespace string, policyName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, policyName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetServiceRoleBindings(namespace string) ([]kubernetes.IstioObject, error) {
args := o.Called(namespace)
return args.Get(0).([]kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) GetServiceRoleBinding(namespace string, policyName string) (kubernetes.IstioObject, error) {
args := o.Called(namespace, policyName)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
func (o *K8SClientMock) IsOpenShift() bool {
args := o.Called()
return args.Get(0).(bool)
}
func (o *K8SClientMock) Stop() {
}
func (o *K8SClientMock) UpdateIstioObject(api, namespace, resourceType, name, jsonPatch string) (kubernetes.IstioObject, error) {
args := o.Called(api, namespace, resourceType, name, jsonPatch)
return args.Get(0).(kubernetes.IstioObject), args.Error(1)
}
// Fake methods don't need an entry point
func FakeService() *v1.Service {
return &v1.Service{
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin",
Namespace: "tutorial",
Labels: map[string]string{
"app": "httpbin",
"version": "v1"}},
Spec: v1.ServiceSpec{
ClusterIP: "fromservice",
Type: "ClusterIP",
Selector: map[string]string{"app": "httpbin"},
Ports: []v1.ServicePort{
{
Name: "http",
Protocol: "TCP",
Port: 3001},
{
Name: "http",
Protocol: "TCP",
Port: 3000}}}}
}
func FakeServiceList() []v1.Service {
return []v1.Service{
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "reviews",
Namespace: "tutorial",
Labels: map[string]string{
"app": "reviews",
"version": "v1"}},
Spec: v1.ServiceSpec{
ClusterIP: "fromservice",
Type: "ClusterIP",
Selector: map[string]string{"app": "reviews"},
Ports: []v1.ServicePort{
{
Name: "http",
Protocol: "TCP",
Port: 3001},
{
Name: "http",
Protocol: "TCP",
Port: 3000}}}},
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin",
Namespace: "tutorial",
Labels: map[string]string{
"app": "httpbin",
"version": "v1"}},
Spec: v1.ServiceSpec{
ClusterIP: "fromservice",
Type: "ClusterIP",
Selector: map[string]string{"app": "httpbin"},
Ports: []v1.ServicePort{
{
Name: "http",
Protocol: "TCP",
Port: 3001},
{
Name: "http",
Protocol: "TCP",
Port: 3000}}}},
}
}
func FakePodListWithoutSidecar() []v1.Pod {
return []v1.Pod{
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "reviews-v1",
Labels: map[string]string{"app": "reviews", "version": "v1"}}},
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "reviews-v2",
Labels: map[string]string{"app": "reviews", "version": "v2"}}},
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v1",
Labels: map[string]string{"app": "httpbin", "version": "v1"}}},
}
}
func FakePodList() []v1.Pod {
return []v1.Pod{
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "reviews-v1",
Labels: map[string]string{"app": "reviews", "version": "v1"},
Annotations: FakeIstioAnnotations(),
},
},
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "reviews-v2",
Labels: map[string]string{"app": "reviews", "version": "v2"},
Annotations: FakeIstioAnnotations(),
},
},
{
ObjectMeta: meta_v1.ObjectMeta{
Name: "httpbin-v1",
Labels: map[string]string{"app": "httpbin", "version": "v1"},
Annotations: FakeIstioAnnotations(),
},
},
}
}
func FakeIstioAnnotations() map[string]string {
return map[string]string{"sidecar.istio.io/status": "{\"version\":\"\",\"initContainers\":[\"istio-init\",\"enable-core-dump\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"]}"}
}
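The annotation value above is the JSON status blob Istio's sidecar injector writes onto pods; a consumer can decode it to detect the injected proxy container. A small sketch of such decoding (the struct fields mirror the keys in the fake annotation; `parseSidecarStatus` is an illustrative helper, not Kiali's actual detection code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sidecarStatus mirrors the JSON keys of the sidecar.istio.io/status
// annotation returned by FakeIstioAnnotations above.
type sidecarStatus struct {
	Version        string   `json:"version"`
	InitContainers []string `json:"initContainers"`
	Containers     []string `json:"containers"`
	Volumes        []string `json:"volumes"`
}

// parseSidecarStatus decodes the annotation value (hypothetical helper).
func parseSidecarStatus(raw string) (sidecarStatus, error) {
	var s sidecarStatus
	err := json.Unmarshal([]byte(raw), &s)
	return s, err
}

func main() {
	raw := `{"version":"","initContainers":["istio-init","enable-core-dump"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"]}`
	s, err := parseSidecarStatus(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Containers) // [istio-proxy]
}
```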
func FakeNamespace(name string) *v1.Namespace {
return &v1.Namespace{
ObjectMeta: meta_v1.ObjectMeta{
Name: name,
},
}
}

vendor/github.com/kiali/kiali/kubernetes/types.go generated vendored Normal file

@@ -0,0 +1,684 @@
package kubernetes
import (
"fmt"
"strings"
"github.com/kiali/kiali/config"
"k8s.io/api/apps/v1beta1"
autoscalingV1 "k8s.io/api/autoscaling/v1"
v1 "k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
)
const (
// Networking
destinationRules = "destinationrules"
destinationRuleType = "DestinationRule"
destinationRuleTypeList = "DestinationRuleList"
gateways = "gateways"
gatewayType = "Gateway"
gatewayTypeList = "GatewayList"
serviceentries = "serviceentries"
serviceentryType = "ServiceEntry"
serviceentryTypeList = "ServiceEntryList"
virtualServices = "virtualservices"
virtualServiceType = "VirtualService"
virtualServiceTypeList = "VirtualServiceList"
// Quotas
quotaspecs = "quotaspecs"
quotaspecType = "QuotaSpec"
quotaspecTypeList = "QuotaSpecList"
quotaspecbindings = "quotaspecbindings"
quotaspecbindingType = "QuotaSpecBinding"
quotaspecbindingTypeList = "QuotaSpecBindingList"
// Policies
policies = "policies"
policyType = "Policy"
policyTypeList = "PolicyList"
//MeshPolicies
meshPolicies = "meshpolicies"
meshPolicyType = "MeshPolicy"
meshPolicyTypeList = "MeshPolicyList"
// Rbac
clusterrbacconfigs = "clusterrbacconfigs"
clusterrbacconfigType = "ClusterRbacConfig"
clusterrbacconfigTypeList = "ClusterRbacConfigList"
serviceroles = "serviceroles"
serviceroleType = "ServiceRole"
serviceroleTypeList = "ServiceRoleList"
servicerolebindings = "servicerolebindings"
servicerolebindingType = "ServiceRoleBinding"
servicerolebindingTypeList = "ServiceRoleBindingList"
// Config - Rules
rules = "rules"
ruleType = "rule"
ruleTypeList = "ruleList"
// Config - Adapters
circonuses = "circonuses"
circonusType = "circonus"
circonusTypeList = "circonusList"
deniers = "deniers"
denierType = "denier"
denierTypeList = "denierList"
fluentds = "fluentds"
fluentdType = "fluentd"
fluentdTypeList = "fluentdList"
fluentdLabel = "fluentd"
handlers = "handlers"
handlerType = "handler"
handlerTypeList = "handlerList"
kubernetesenvs = "kubernetesenvs"
kubernetesenvType = "kubernetesenv"
kubernetesenvTypeList = "kubernetesenvList"
listcheckers = "listcheckers"
listcheckerType = "listchecker"
listcheckerTypeList = "listcheckerList"
memquotas = "memquotas"
memquotaType = "memquota"
memquotaTypeList = "memquotaList"
opas = "opas"
opaType = "opa"
opaTypeList = "opaList"
prometheuses = "prometheuses"
prometheusType = "prometheus"
prometheusTypeList = "prometheusList"
rbacs = "rbacs"
rbacType = "rbac"
rbacTypeList = "rbacList"
servicecontrols = "servicecontrols"
servicecontrolType = "servicecontrol"
servicecontrolTypeList = "servicecontrolList"
solarwindses = "solarwindses"
solarwindsType = "solarwinds"
solarwindsTypeList = "solarwindsList"
stackdrivers = "stackdrivers"
stackdriverType = "stackdriver"
stackdriverTypeList = "stackdriverList"
statsds = "statsds"
statsdType = "statsd"
statsdTypeList = "statsdList"
stdios = "stdios"
stdioType = "stdio"
stdioTypeList = "stdioList"
// Config - Templates
apikeys = "apikeys"
apikeyType = "apikey"
apikeyTypeList = "apikeyList"
authorizations = "authorizations"
authorizationType = "authorization"
authorizationTypeList = "authorizationList"
checknothings = "checknothings"
checknothingType = "checknothing"
checknothingTypeList = "checknothingList"
kuberneteses = "kuberneteses"
kubernetesType = "kubernetes"
kubernetesTypeList = "kubernetesList"
listEntries = "listentries"
listEntryType = "listentry"
listEntryTypeList = "listentryList"
logentries = "logentries"
logentryType = "logentry"
logentryTypeList = "logentryList"
metrics = "metrics"
metricType = "metric"
metricTypeList = "metricList"
quotas = "quotas"
quotaType = "quota"
quotaTypeList = "quotaList"
reportnothings = "reportnothings"
reportnothingType = "reportnothing"
reportnothingTypeList = "reportnothingList"
servicecontrolreports = "servicecontrolreports"
servicecontrolreportType = "servicecontrolreport"
servicecontrolreportTypeList = "servicecontrolreportList"
)
var (
ConfigGroupVersion = schema.GroupVersion{
Group: "config.istio.io",
Version: "v1alpha2",
}
ApiConfigVersion = ConfigGroupVersion.Group + "/" + ConfigGroupVersion.Version
NetworkingGroupVersion = schema.GroupVersion{
Group: "networking.istio.io",
Version: "v1alpha3",
}
ApiNetworkingVersion = NetworkingGroupVersion.Group + "/" + NetworkingGroupVersion.Version
AuthenticationGroupVersion = schema.GroupVersion{
Group: "authentication.istio.io",
Version: "v1alpha1",
}
ApiAuthenticationVersion = AuthenticationGroupVersion.Group + "/" + AuthenticationGroupVersion.Version
RbacGroupVersion = schema.GroupVersion{
Group: "rbac.istio.io",
Version: "v1alpha1",
}
ApiRbacVersion = RbacGroupVersion.Group + "/" + RbacGroupVersion.Version
networkingTypes = []struct {
objectKind string
collectionKind string
}{
{
objectKind: gatewayType,
collectionKind: gatewayTypeList,
},
{
objectKind: virtualServiceType,
collectionKind: virtualServiceTypeList,
},
{
objectKind: destinationRuleType,
collectionKind: destinationRuleTypeList,
},
{
objectKind: serviceentryType,
collectionKind: serviceentryTypeList,
},
}
configTypes = []struct {
objectKind string
collectionKind string
}{
{
objectKind: ruleType,
collectionKind: ruleTypeList,
},
// Quota specs depend on the Quota template but are not a "template" object themselves
{
objectKind: quotaspecType,
collectionKind: quotaspecTypeList,
},
{
objectKind: quotaspecbindingType,
collectionKind: quotaspecbindingTypeList,
},
}
authenticationTypes = []struct {
objectKind string
collectionKind string
}{
{
objectKind: policyType,
collectionKind: policyTypeList,
},
{
objectKind: meshPolicyType,
collectionKind: meshPolicyTypeList,
},
}
// TODO Adapters and Templates can be loaded from external config for easy maintenance
adapterTypes = []struct {
objectKind string
collectionKind string
}{
{
objectKind: circonusType,
collectionKind: circonusTypeList,
},
{
objectKind: denierType,
collectionKind: denierTypeList,
},
{
objectKind: fluentdType,
collectionKind: fluentdTypeList,
},
{
objectKind: handlerType,
collectionKind: handlerTypeList,
},
{
objectKind: kubernetesenvType,
collectionKind: kubernetesenvTypeList,
},
{
objectKind: listcheckerType,
collectionKind: listcheckerTypeList,
},
{
objectKind: memquotaType,
collectionKind: memquotaTypeList,
},
{
objectKind: opaType,
collectionKind: opaTypeList,
},
{
objectKind: prometheusType,
collectionKind: prometheusTypeList,
},
{
objectKind: rbacType,
collectionKind: rbacTypeList,
},
{
objectKind: servicecontrolType,
collectionKind: servicecontrolTypeList,
},
{
objectKind: solarwindsType,
collectionKind: solarwindsTypeList,
},
{
objectKind: stackdriverType,
collectionKind: stackdriverTypeList,
},
{
objectKind: statsdType,
collectionKind: statsdTypeList,
},
{
objectKind: stdioType,
collectionKind: stdioTypeList,
},
}
templateTypes = []struct {
objectKind string
collectionKind string
}{
{
objectKind: apikeyType,
collectionKind: apikeyTypeList,
},
{
objectKind: authorizationType,
collectionKind: authorizationTypeList,
},
{
objectKind: checknothingType,
collectionKind: checknothingTypeList,
},
{
objectKind: kubernetesType,
collectionKind: kubernetesTypeList,
},
{
objectKind: listEntryType,
collectionKind: listEntryTypeList,
},
{
objectKind: logentryType,
collectionKind: logentryTypeList,
},
{
objectKind: metricType,
collectionKind: metricTypeList,
},
{
objectKind: quotaType,
collectionKind: quotaTypeList,
},
{
objectKind: reportnothingType,
collectionKind: reportnothingTypeList,
},
{
objectKind: servicecontrolreportType,
collectionKind: servicecontrolreportTypeList,
},
}
rbacTypes = []struct {
objectKind string
collectionKind string
}{
{
objectKind: clusterrbacconfigType,
collectionKind: clusterrbacconfigTypeList,
},
{
objectKind: serviceroleType,
collectionKind: serviceroleTypeList,
},
{
objectKind: servicerolebindingType,
collectionKind: servicerolebindingTypeList,
},
}
// A map to get the plural for an Istio type using the singular type
// Used to fetch Istio action details, so it only applies to handler (adapter) and instance (template) types
// It should have one entry per adapter/template
adapterPlurals = map[string]string{
circonusType: circonuses,
denierType: deniers,
fluentdType: fluentds,
handlerType: handlers,
kubernetesenvType: kubernetesenvs,
listcheckerType: listcheckers,
memquotaType: memquotas,
opaType: opas,
prometheusType: prometheuses,
rbacType: rbacs,
servicecontrolType: servicecontrols,
solarwindsType: solarwindses,
stackdriverType: stackdrivers,
statsdType: statsds,
stdioType: stdios,
}
templatePlurals = map[string]string{
apikeyType: apikeys,
authorizationType: authorizations,
checknothingType: checknothings,
kubernetesType: kuberneteses,
listEntryType: listEntries,
logentryType: logentries,
metricType: metrics,
quotaType: quotas,
reportnothingType: reportnothings,
servicecontrolreportType: servicecontrolreports,
}
PluralType = map[string]string{
// Networking
gateways: gatewayType,
virtualServices: virtualServiceType,
destinationRules: destinationRuleType,
serviceentries: serviceentryType,
// Main Config files
rules: ruleType,
quotaspecs: quotaspecType,
quotaspecbindings: quotaspecbindingType,
// Adapters
circonuses: circonusType,
deniers: denierType,
fluentds: fluentdType,
handlers: handlerType,
kubernetesenvs: kubernetesenvType,
listcheckers: listcheckerType,
memquotas: memquotaType,
opas: opaType,
prometheuses: prometheusType,
rbacs: rbacType,
servicecontrols: servicecontrolType,
solarwindses: solarwindsType,
stackdrivers: stackdriverType,
statsds: statsdType,
stdios: stdioType,
// Templates
apikeys: apikeyType,
authorizations: authorizationType,
checknothings: checknothingType,
kuberneteses: kubernetesType,
listEntries: listEntryType,
logentries: logentryType,
metrics: metricType,
quotas: quotaType,
reportnothings: reportnothingType,
servicecontrolreports: servicecontrolreportType,
// Policies
policies: policyType,
meshPolicies: meshPolicyType,
// Rbac
clusterrbacconfigs: clusterrbacconfigType,
serviceroles: serviceroleType,
servicerolebindings: servicerolebindingType,
}
)
// IstioObject is a k8s wrapper interface for config objects.
// Taken from istio.io
type IstioObject interface {
runtime.Object
GetSpec() map[string]interface{}
SetSpec(map[string]interface{})
GetObjectMeta() meta_v1.ObjectMeta
SetObjectMeta(meta_v1.ObjectMeta)
DeepCopyIstioObject() IstioObject
}
// IstioObjectList is a k8s wrapper interface for list config objects.
// Taken from istio.io
type IstioObjectList interface {
runtime.Object
GetItems() []IstioObject
}
// ServiceList holds a list of services, pods and deployments
type ServiceList struct {
Services *v1.ServiceList
Pods *v1.PodList
Deployments *v1beta1.DeploymentList
}
// ServiceDetails is a wrapper to group full Service description, Endpoints and Pods.
// Used to fetch all details in a single operation instead of invoking individual APIs for each group.
type ServiceDetails struct {
Service *v1.Service `json:"service"`
Endpoints *v1.Endpoints `json:"endpoints"`
Deployments *v1beta1.DeploymentList `json:"deployments"`
Autoscalers *autoscalingV1.HorizontalPodAutoscalerList `json:"autoscalers"`
Pods []v1.Pod `json:"pods"`
}
// IstioDetails is a wrapper to group all Istio objects related to a Service.
// Used to fetch all Istio information in a single operation instead of invoking individual APIs for each group.
type IstioDetails struct {
VirtualServices []IstioObject `json:"virtualservices"`
DestinationRules []IstioObject `json:"destinationrules"`
ServiceEntries []IstioObject `json:"serviceentries"`
Gateways []IstioObject `json:"gateways"`
}
// MTLSDetails is a wrapper to group all Istio objects related to non-local mTLS configurations
type MTLSDetails struct {
DestinationRules []IstioObject `json:"destinationrules"`
MeshPolicies []IstioObject `json:"meshpolicies"`
}
type istioResponse struct {
result IstioObject
results []IstioObject
err error
}
// GenericIstioObject is a type to test Istio types defined by Istio as a Kubernetes extension.
type GenericIstioObject struct {
meta_v1.TypeMeta `json:",inline"`
meta_v1.ObjectMeta `json:"metadata"`
Spec map[string]interface{} `json:"spec"`
}
// GenericIstioObjectList is the generic Kubernetes API list wrapper
type GenericIstioObjectList struct {
meta_v1.TypeMeta `json:",inline"`
meta_v1.ListMeta `json:"metadata"`
Items []GenericIstioObject `json:"items"`
}
// GetSpec from a wrapper
func (in *GenericIstioObject) GetSpec() map[string]interface{} {
return in.Spec
}
// SetSpec for a wrapper
func (in *GenericIstioObject) SetSpec(spec map[string]interface{}) {
in.Spec = spec
}
// GetObjectMeta from a wrapper
func (in *GenericIstioObject) GetObjectMeta() meta_v1.ObjectMeta {
return in.ObjectMeta
}
// SetObjectMeta for a wrapper
func (in *GenericIstioObject) SetObjectMeta(metadata meta_v1.ObjectMeta) {
in.ObjectMeta = metadata
}
// GetItems from a wrapper
func (in *GenericIstioObjectList) GetItems() []IstioObject {
out := make([]IstioObject, len(in.Items))
for i := range in.Items {
out[i] = &in.Items[i]
}
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GenericIstioObject) DeepCopyInto(out *GenericIstioObject) {
*out = *in
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
out.Spec = in.Spec
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GenericIstioObject.
func (in *GenericIstioObject) DeepCopy() *GenericIstioObject {
if in == nil {
return nil
}
out := new(GenericIstioObject)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *GenericIstioObject) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyIstioObject is an autogenerated deepcopy function, copying the receiver, creating a new IstioObject.
func (in *GenericIstioObject) DeepCopyIstioObject() IstioObject {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GenericIstioObjectList) DeepCopyInto(out *GenericIstioObjectList) {
*out = *in
out.TypeMeta = in.TypeMeta
out.ListMeta = in.ListMeta
if in.Items != nil {
in, out := &in.Items, &out.Items
*out = make([]GenericIstioObject, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GenericIstioObjectList.
func (in *GenericIstioObjectList) DeepCopy() *GenericIstioObjectList {
if in == nil {
return nil
}
out := new(GenericIstioObjectList)
in.DeepCopyInto(out)
return out
}
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *GenericIstioObjectList) DeepCopyObject() runtime.Object {
if c := in.DeepCopy(); c != nil {
return c
}
return nil
}
// Host represents the FQDN format for Istio hostnames
type Host struct {
Service string
Namespace string
Cluster string
}
// ParseHost takes a hostname (simple or full FQDN), a namespace and a cluster name as input and returns a parsed Host struct
func ParseHost(hostName, namespace, cluster string) Host {
domainParts := strings.Split(hostName, ".")
host := Host{
Service: domainParts[0],
}
if len(domainParts) > 1 {
host.Namespace = domainParts[1]
if len(domainParts) > 2 {
host.Cluster = strings.Join(domainParts[2:], ".")
}
}
// Fill in missing details; the full hostname takes precedence over the DestinationRule details
if host.Cluster == "" {
if cluster != "" {
host.Cluster = cluster
} else {
host.Cluster = config.Get().ExternalServices.Istio.IstioIdentityDomain
}
}
if host.Namespace == "" {
host.Namespace = namespace
}
return host
}
// String outputs a full FQDN version of the Host
func (h Host) String() string {
return fmt.Sprintf("%s.%s.%s", h.Service, h.Namespace, h.Cluster)
}
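The ParseHost/String pair above can be exercised standalone: split the hostname on dots, then fall back to the supplied defaults for any missing part. A self-contained sketch of the same logic (here `"svc.cluster.local"` stands in for the configured identity domain, `config.Get().ExternalServices.Istio.IstioIdentityDomain`, which is an assumption of this sketch):

```go
package main

import (
	"fmt"
	"strings"
)

// host mirrors the Host struct above.
type host struct{ Service, Namespace, Cluster string }

// parseHost re-implements the ParseHost logic without the config dependency.
func parseHost(hostName, namespace, cluster string) host {
	parts := strings.Split(hostName, ".")
	h := host{Service: parts[0]}
	if len(parts) > 1 {
		h.Namespace = parts[1]
		if len(parts) > 2 {
			h.Cluster = strings.Join(parts[2:], ".")
		}
	}
	// The full hostname wins; otherwise fall back to caller-supplied defaults.
	if h.Cluster == "" {
		if cluster != "" {
			h.Cluster = cluster
		} else {
			h.Cluster = "svc.cluster.local" // assumed identity domain
		}
	}
	if h.Namespace == "" {
		h.Namespace = namespace
	}
	return h
}

func main() {
	// A short name and a full FQDN resolve to the same Host.
	fmt.Println(parseHost("reviews", "bookinfo", ""))
	fmt.Println(parseHost("reviews.bookinfo.svc.cluster.local", "other", ""))
}
```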

vendor/github.com/kiali/kiali/log/log.go generated vendored Normal file

@@ -0,0 +1,72 @@
package log
import (
"fmt"
"github.com/golang/glog"
)
const (
debug glog.Level = glog.Level(4)
trace glog.Level = glog.Level(5)
)
func Info(args ...interface{}) {
glog.InfoDepth(1, args...)
}
func Infof(format string, args ...interface{}) {
glog.InfoDepth(1, fmt.Sprintf(format, args...))
}
func Warning(args ...interface{}) {
glog.WarningDepth(1, args...)
}
func Warningf(format string, args ...interface{}) {
glog.WarningDepth(1, fmt.Sprintf(format, args...))
}
func Error(args ...interface{}) {
glog.ErrorDepth(1, args...)
}
func Errorf(format string, args ...interface{}) {
glog.ErrorDepth(1, fmt.Sprintf(format, args...))
}
// Debug will log a message at verbose level 4 and will ensure the caller's stack frame is used
func Debug(args ...interface{}) {
if glog.V(debug) {
glog.InfoDepth(1, "DEBUG: "+fmt.Sprint(args...)) // 1 == depth in the stack of the caller
}
}
// Debugf will log a message at verbose level 4 and will ensure the caller's stack frame is used
func Debugf(format string, args ...interface{}) {
if glog.V(debug) {
glog.InfoDepth(1, fmt.Sprintf("DEBUG: "+format, args...)) // 1 == depth in the stack of the caller
}
}
func IsDebug() bool {
return bool(glog.V(debug))
}
// Trace will log a message at verbose level 5 and will ensure the caller's stack frame is used
func Trace(args ...interface{}) {
if glog.V(trace) {
glog.InfoDepth(1, "TRACE: "+fmt.Sprint(args...)) // 1 == depth in the stack of the caller
}
}
// Tracef will log a message at verbose level 5 and will ensure the caller's stack frame is used
func Tracef(format string, args ...interface{}) {
if glog.V(trace) {
glog.InfoDepth(1, fmt.Sprintf("TRACE: "+format, args...)) // 1 == depth in the stack of the caller
}
}
func IsTrace() bool {
return bool(glog.V(trace))
}
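The package above guards every Debug/Trace call behind glog's `V(level)` check so the `fmt.Sprint` work is skipped when the level is disabled. A sketch of that verbosity-guard pattern using only the standard library (`verbosity`, `enabled`, `debugf`, and `tracef` are illustrative names, not glog's API; level numbers follow the file: 4 = debug, 5 = trace):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// verbosity plays the role of glog's -v flag for this sketch.
var verbosity = 4

var logger = log.New(os.Stdout, "", 0)

// enabled approximates glog.V(level): true when the level is active.
func enabled(level int) bool { return verbosity >= level }

func debugf(format string, args ...interface{}) {
	if enabled(4) { // guard first: skip the Sprintf cost when disabled
		logger.Print("DEBUG: " + fmt.Sprintf(format, args...))
	}
}

func tracef(format string, args ...interface{}) {
	if enabled(5) {
		logger.Print("TRACE: " + fmt.Sprintf(format, args...))
	}
}

func main() {
	debugf("fetched %d pods", 3)    // printed: verbosity is 4
	tracef("raw response: %v", nil) // suppressed: needs level 5
}
```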

vendor/github.com/kiali/kiali/models/address.go generated vendored Normal file

@@ -0,0 +1,27 @@
package models
import "k8s.io/api/core/v1"
type Addresses []Address
type Address struct {
Kind string `json:"kind"`
Name string `json:"name"`
IP string `json:"ip"`
}
func (addresses *Addresses) Parse(as []v1.EndpointAddress) {
for _, address := range as {
castedAddress := Address{}
castedAddress.Parse(address)
*addresses = append(*addresses, castedAddress)
}
}
func (address *Address) Parse(a v1.EndpointAddress) {
address.IP = a.IP
if a.TargetRef != nil {
address.Kind = a.TargetRef.Kind
address.Name = a.TargetRef.Name
}
}

vendor/github.com/kiali/kiali/models/app.go generated vendored Normal file

@@ -0,0 +1,60 @@
package models
type AppList struct {
// Namespace where the apps live in
// required: true
// example: bookinfo
Namespace Namespace `json:"namespace"`
// Applications for a given namespace
// required: true
Apps []AppListItem `json:"applications"`
}
// AppListItem has the necessary information to display the console app list
type AppListItem struct {
// Name of the application
// required: true
// example: reviews
Name string `json:"name"`
// Defines whether all Pods related to the Workloads of this app have an IstioSidecar deployed
// required: true
// example: true
IstioSidecar bool `json:"istioSidecar"`
}
type WorkloadItem struct {
// Name of a workload member of an application
// required: true
// example: reviews-v1
WorkloadName string `json:"workloadName"`
// Define if all Pods related to the Workload have an IstioSidecar deployed
// required: true
// example: true
IstioSidecar bool `json:"istioSidecar"`
}
type App struct {
// Namespace where the app lives in
// required: true
// example: bookinfo
Namespace Namespace `json:"namespace"`
// Name of the application
// required: true
// example: reviews
Name string `json:"name"`
// Workloads for a given application
// required: true
Workloads []WorkloadItem `json:"workloads"`
// List of service names linked with an application
// required: true
ServiceNames []string `json:"serviceNames"`
// Runtimes and associated dashboards
Runtimes []Runtime `json:"runtimes"`
}

vendor/github.com/kiali/kiali/models/autoscalers.go generated vendored Normal file

@@ -0,0 +1,54 @@
package models
import (
"k8s.io/api/autoscaling/v1"
)
type Autoscaler struct {
Name string `json:"name"`
Labels map[string]string `json:"labels"`
CreatedAt string `json:"createdAt"`
// Spec
MinReplicas int32 `json:"minReplicas"`
MaxReplicas int32 `json:"maxReplicas"`
TargetCPUUtilizationPercentage int32 `json:"targetCPUUtilizationPercentage"`
// Status
ObservedGeneration int64 `json:"observedGeneration,omitempty"`
LastScaleTime string `json:"lastScaleTime,omitempty"`
CurrentReplicas int32 `json:"currentReplicas"`
DesiredReplicas int32 `json:"desiredReplicas"`
CurrentCPUUtilizationPercentage int32 `json:"currentCPUUtilizationPercentage,omitempty"`
}
func (autoscaler *Autoscaler) Parse(d *v1.HorizontalPodAutoscaler) {
autoscaler.Name = d.Name
autoscaler.Labels = d.Labels
autoscaler.CreatedAt = formatTime(d.CreationTimestamp.Time)
// Spec
autoscaler.MaxReplicas = d.Spec.MaxReplicas
if d.Spec.MinReplicas != nil {
autoscaler.MinReplicas = *d.Spec.MinReplicas
}
if d.Spec.TargetCPUUtilizationPercentage != nil {
autoscaler.TargetCPUUtilizationPercentage = *d.Spec.TargetCPUUtilizationPercentage
}
// Status
autoscaler.CurrentReplicas = d.Status.CurrentReplicas
autoscaler.DesiredReplicas = d.Status.DesiredReplicas
if d.Status.ObservedGeneration != nil {
autoscaler.ObservedGeneration = *d.Status.ObservedGeneration
}
if d.Status.LastScaleTime != nil {
autoscaler.LastScaleTime = formatTime((*d.Status.LastScaleTime).Time)
}
if d.Status.CurrentCPUUtilizationPercentage != nil {
autoscaler.CurrentCPUUtilizationPercentage = *d.Status.CurrentCPUUtilizationPercentage
}
}


@@ -0,0 +1,32 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type ClusterRbacConfigs []ClusterRbacConfig
type ClusterRbacConfig struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Mode interface{} `json:"mode"`
Inclusion interface{} `json:"inclusion"`
Exclusion interface{} `json:"exclusion"`
} `json:"spec"`
}
func (rcs *ClusterRbacConfigs) Parse(clusterRbacConfigs []kubernetes.IstioObject) {
for _, rc := range clusterRbacConfigs {
clusterRbacConfig := ClusterRbacConfig{}
clusterRbacConfig.Parse(rc)
*rcs = append(*rcs, clusterRbacConfig)
}
}
func (rc *ClusterRbacConfig) Parse(clusterRbacConfig kubernetes.IstioObject) {
rc.Metadata = clusterRbacConfig.GetObjectMeta()
rc.Spec.Mode = clusterRbacConfig.GetSpec()["mode"]
rc.Spec.Inclusion = clusterRbacConfig.GetSpec()["inclusion"]
rc.Spec.Exclusion = clusterRbacConfig.GetSpec()["exclusion"]
}

vendor/github.com/kiali/kiali/models/dashboards.go generated vendored Normal file

@@ -0,0 +1,100 @@
package models
import (
"fmt"
"sort"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/prometheus"
)
// MonitoringDashboard is the model representing a custom monitoring dashboard, transformed from the MonitoringDashboard k8s resource
type MonitoringDashboard struct {
Title string `json:"title"`
Charts []Chart `json:"charts"`
Aggregations []Aggregation `json:"aggregations"`
}
// Chart is the model representing a custom chart, transformed from charts in MonitoringDashboard k8s resource
type Chart struct {
Name string `json:"name"`
Unit string `json:"unit"`
Spans int `json:"spans"`
Metric *prometheus.Metric `json:"metric"`
Histogram prometheus.Histogram `json:"histogram"`
}
// ConvertChart converts a k8s chart (from the MonitoringDashboard k8s resource) into this model's Chart
func ConvertChart(from kubernetes.MonitoringDashboardChart) Chart {
return Chart{
Name: from.Name,
Unit: from.Unit,
Spans: from.Spans,
}
}
// Aggregation is the model representing a label's allowed aggregation, transformed from aggregation in the MonitoringDashboard k8s resource
type Aggregation struct {
Label string `json:"label"`
DisplayName string `json:"displayName"`
}
// ConvertAggregations converts k8s aggregations (from the MonitoringDashboard k8s resource) into this model's aggregations
// Results are sorted by DisplayName
func ConvertAggregations(from kubernetes.MonitoringDashboardSpec) []Aggregation {
uniqueAggs := make(map[string]Aggregation)
for _, chart := range from.Charts {
for _, agg := range chart.Aggregations {
uniqueAggs[agg.DisplayName] = Aggregation{Label: agg.Label, DisplayName: agg.DisplayName}
}
}
aggs := []Aggregation{}
for _, agg := range uniqueAggs {
aggs = append(aggs, agg)
}
sort.Slice(aggs, func(i, j int) bool {
return aggs[i].DisplayName < aggs[j].DisplayName
})
return aggs
}
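ConvertAggregations deduplicates by DisplayName with a map, then sorts the survivors so output order is stable across map iterations. A self-contained sketch of that dedupe-then-sort idiom (the local `aggregation` type stands in for `models.Aggregation`; nothing here is imported from kiali):

```go
package main

import (
	"fmt"
	"sort"
)

// aggregation mirrors the shape of models.Aggregation for this sketch.
type aggregation struct {
	Label, DisplayName string
}

// uniqueSorted reproduces ConvertAggregations' core idea: collapse
// duplicates keyed by DisplayName, then sort for deterministic output.
func uniqueSorted(in []aggregation) []aggregation {
	seen := make(map[string]aggregation)
	for _, a := range in {
		seen[a.DisplayName] = a // last writer wins, as in the map above
	}
	out := make([]aggregation, 0, len(seen))
	for _, a := range seen {
		out = append(out, a)
	}
	sort.Slice(out, func(i, j int) bool { return out[i].DisplayName < out[j].DisplayName })
	return out
}

func main() {
	aggs := uniqueSorted([]aggregation{
		{"destination_version", "Remote version"},
		{"source_app", "Local app"},
		{"destination_version", "Remote version"}, // duplicate collapses
	})
	fmt.Println(len(aggs), aggs[0].DisplayName)
}
```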
func buildIstioAggregations(local, remote string) []Aggregation {
return []Aggregation{
{
Label: fmt.Sprintf("%s_version", local),
DisplayName: "Local version",
},
{
Label: fmt.Sprintf("%s_app", remote),
DisplayName: "Remote app",
},
{
Label: fmt.Sprintf("%s_version", remote),
DisplayName: "Remote version",
},
{
Label: "response_code",
DisplayName: "Response code",
},
}
}
// PrepareIstioDashboard prepares the Istio dashboard title and aggregations dynamically for input values
func PrepareIstioDashboard(direction, local, remote string) MonitoringDashboard {
return MonitoringDashboard{
Title: fmt.Sprintf("%s Metrics", direction),
Aggregations: buildIstioAggregations(local, remote),
}
}
// Runtime holds the runtime title and associated dashboard template(s)
type Runtime struct {
Name string `json:"name"`
DashboardRefs []DashboardRef `json:"dashboardRefs"`
}
// DashboardRef holds template name and title for a custom dashboard
type DashboardRef struct {
Template string `json:"template"`
Title string `json:"title"`
}


@@ -0,0 +1,95 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/kubernetes"
)
// DestinationRules destinationRules
//
// This is used for returning an array of DestinationRules
//
// swagger:model destinationRules
// An array of destinationRule
// swagger:allOf
type DestinationRules struct {
Permissions ResourcePermissions `json:"permissions"`
Items []DestinationRule `json:"items"`
}
// DestinationRule destinationRule
//
// This is used for returning a DestinationRule
//
// swagger:model destinationRule
type DestinationRule struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Host interface{} `json:"host"`
TrafficPolicy interface{} `json:"trafficPolicy"`
Subsets interface{} `json:"subsets"`
} `json:"spec"`
}
func (dRules *DestinationRules) Parse(destinationRules []kubernetes.IstioObject) {
dRules.Items = []DestinationRule{}
for _, dr := range destinationRules {
destinationRule := DestinationRule{}
destinationRule.Parse(dr)
dRules.Items = append(dRules.Items, destinationRule)
}
}
func (dRule *DestinationRule) Parse(destinationRule kubernetes.IstioObject) {
dRule.Metadata = destinationRule.GetObjectMeta()
dRule.Spec.Host = destinationRule.GetSpec()["host"]
dRule.Spec.TrafficPolicy = destinationRule.GetSpec()["trafficPolicy"]
dRule.Spec.Subsets = destinationRule.GetSpec()["subsets"]
}
func (dRule *DestinationRule) HasCircuitBreaker(namespace string, serviceName string, version string) bool {
if host, ok := dRule.Spec.Host.(string); ok && kubernetes.FilterByHost(host, serviceName, namespace) {
// CB is set at DR level, so it's true for the service and all versions
if isCircuitBreakerTrafficPolicy(dRule.Spec.TrafficPolicy) {
return true
}
if subsets, ok := dRule.Spec.Subsets.([]interface{}); ok {
cfg := config.Get()
for _, subsetInterface := range subsets {
if subset, ok := subsetInterface.(map[string]interface{}); ok {
if trafficPolicy, ok := subset["trafficPolicy"]; ok && isCircuitBreakerTrafficPolicy(trafficPolicy) {
// set the service true if it has a subset with a CB
if version == "" {
return true
}
if labels, ok := subset["labels"]; ok {
if dLabels, ok := labels.(map[string]interface{}); ok {
if versionValue, ok := dLabels[cfg.IstioLabels.VersionLabelName]; ok && versionValue == version {
return true
}
}
}
}
}
}
}
}
return false
}
func isCircuitBreakerTrafficPolicy(trafficPolicy interface{}) bool {
if trafficPolicy == nil {
return false
}
if dTrafficPolicy, ok := trafficPolicy.(map[string]interface{}); ok {
if _, ok := dTrafficPolicy["connectionPool"]; ok {
return true
}
if _, ok := dTrafficPolicy["outlierDetection"]; ok {
return true
}
}
return false
}
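Because the DestinationRule spec is held as untyped `interface{}` values, the circuit-breaker check is a series of type assertions on nested maps. A minimal sketch of that detection logic, self-contained rather than using kiali's types:

```go
package main

import "fmt"

// hasCircuitBreaker applies the same rule as isCircuitBreakerTrafficPolicy
// above: a traffic policy counts as a circuit breaker when it defines
// connectionPool or outlierDetection.
func hasCircuitBreaker(trafficPolicy interface{}) bool {
	tp, ok := trafficPolicy.(map[string]interface{})
	if !ok {
		return false // nil or an unexpected shape
	}
	if _, ok := tp["connectionPool"]; ok {
		return true
	}
	_, ok = tp["outlierDetection"]
	return ok
}

func main() {
	// A trimmed spec fragment of the kind GetSpec() would return.
	spec := map[string]interface{}{
		"trafficPolicy": map[string]interface{}{
			"outlierDetection": map[string]interface{}{"consecutiveErrors": 5},
		},
	}
	fmt.Println(hasCircuitBreaker(spec["trafficPolicy"])) // true
	fmt.Println(hasCircuitBreaker(nil))                   // false
}
```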

vendor/github.com/kiali/kiali/models/endpoint.go generated vendored Normal file

@@ -0,0 +1,26 @@
package models
import "k8s.io/api/core/v1"
type Endpoints []Endpoint
type Endpoint struct {
Addresses Addresses `json:"addresses"`
Ports Ports `json:"ports"`
}
func (endpoints *Endpoints) Parse(es *v1.Endpoints) {
if es == nil {
return
}
for _, subset := range es.Subsets {
endpoint := Endpoint{}
endpoint.Parse(subset)
*endpoints = append(*endpoints, endpoint)
}
}
func (endpoint *Endpoint) Parse(s v1.EndpointSubset) {
(&endpoint.Ports).ParseEndpointPorts(s.Ports)
(&endpoint.Addresses).Parse(s.Addresses)
}

vendor/github.com/kiali/kiali/models/gateway.go generated vendored Normal file

@@ -0,0 +1,30 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type Gateways []Gateway
type Gateway struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Servers interface{} `json:"servers"`
Selector interface{} `json:"selector"`
} `json:"spec"`
}
func (gws *Gateways) Parse(gateways []kubernetes.IstioObject) {
for _, gw := range gateways {
gateway := Gateway{}
gateway.Parse(gw)
*gws = append(*gws, gateway)
}
}
func (gw *Gateway) Parse(gateway kubernetes.IstioObject) {
gw.Metadata = gateway.GetObjectMeta()
gw.Spec.Servers = gateway.GetSpec()["servers"]
gw.Spec.Selector = gateway.GetSpec()["selector"]
}

vendor/github.com/kiali/kiali/models/grafana_info.go generated vendored Normal file

@@ -0,0 +1,11 @@
package models
// GrafanaInfo provides information to access Grafana dashboards
type GrafanaInfo struct {
URL string `json:"url"`
ServiceDashboardPath string `json:"serviceDashboardPath"`
WorkloadDashboardPath string `json:"workloadDashboardPath"`
VarNamespace string `json:"varNamespace"`
VarService string `json:"varService"`
VarWorkload string `json:"varWorkload"`
}

vendor/github.com/kiali/kiali/models/health.go generated vendored Normal file

@@ -0,0 +1,105 @@
package models
import (
"github.com/prometheus/common/model"
)
// NamespaceAppHealth is an alias of map of app name x health
type NamespaceAppHealth map[string]*AppHealth
// NamespaceServiceHealth is an alias of map of service name x health
type NamespaceServiceHealth map[string]*ServiceHealth
// NamespaceWorkloadHealth is an alias of map of workload name x health
type NamespaceWorkloadHealth map[string]*WorkloadHealth
// ServiceHealth contains aggregated health from various sources, for a given service
type ServiceHealth struct {
Requests RequestHealth `json:"requests"`
}
// AppHealth contains aggregated health from various sources, for a given app
type AppHealth struct {
WorkloadStatuses []WorkloadStatus `json:"workloadStatuses"`
Requests RequestHealth `json:"requests"`
}
func NewEmptyRequestHealth() RequestHealth {
return RequestHealth{ErrorRatio: -1, InboundErrorRatio: -1, OutboundErrorRatio: -1}
}
// EmptyAppHealth creates an empty AppHealth
func EmptyAppHealth() AppHealth {
return AppHealth{
WorkloadStatuses: []WorkloadStatus{},
Requests: NewEmptyRequestHealth(),
}
}
// EmptyServiceHealth creates an empty ServiceHealth
func EmptyServiceHealth() ServiceHealth {
return ServiceHealth{
Requests: NewEmptyRequestHealth(),
}
}
// WorkloadHealth contains aggregated health from various sources, for a given workload
type WorkloadHealth struct {
WorkloadStatus WorkloadStatus `json:"workloadStatus"`
Requests RequestHealth `json:"requests"`
}
// WorkloadStatus gives the available / total replicas for a deployment
type WorkloadStatus struct {
Name string `json:"name"`
Replicas int32 `json:"replicas"`
AvailableReplicas int32 `json:"available"`
}
// RequestHealth holds several stats about recent request errors
type RequestHealth struct {
inboundErrorRate float64
outboundErrorRate float64
inboundRequestRate float64
outboundRequestRate float64
ErrorRatio float64 `json:"errorRatio"`
InboundErrorRatio float64 `json:"inboundErrorRatio"`
OutboundErrorRatio float64 `json:"outboundErrorRatio"`
}
// AggregateInbound adds the provided metric sample to internal inbound counters and updates error ratios
func (in *RequestHealth) AggregateInbound(sample *model.Sample) {
aggregate(sample, &in.inboundRequestRate, &in.inboundErrorRate, &in.InboundErrorRatio)
in.updateGlobalErrorRatio()
}
// AggregateOutbound adds the provided metric sample to internal outbound counters and updates error ratios
func (in *RequestHealth) AggregateOutbound(sample *model.Sample) {
aggregate(sample, &in.outboundRequestRate, &in.outboundErrorRate, &in.OutboundErrorRatio)
in.updateGlobalErrorRatio()
}
func (in *RequestHealth) updateGlobalErrorRatio() {
globalRequestRate := in.inboundRequestRate + in.outboundRequestRate
globalErrorRate := in.inboundErrorRate + in.outboundErrorRate
if globalRequestRate == 0 {
in.ErrorRatio = -1
} else {
in.ErrorRatio = globalErrorRate / globalRequestRate
}
}
func aggregate(sample *model.Sample, requestRate, errorRate, errorRatio *float64) {
*requestRate += float64(sample.Value)
responseCode := sample.Metric["response_code"][0]
if responseCode == '5' || responseCode == '4' {
*errorRate += float64(sample.Value)
}
if *requestRate == 0 {
*errorRatio = -1
} else {
*errorRatio = *errorRate / *requestRate
}
}
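The aggregate helper treats any 4xx/5xx sample as an error and uses -1 as a sentinel for "no traffic seen yet". A self-contained sketch of the same arithmetic (the `aggregateSample` name and plain-string response code are assumptions of this example; the real code reads a Prometheus `model.Sample`):

```go
package main

import "fmt"

// aggregateSample accumulates one request-rate sample, counts 4xx/5xx
// responses as errors, and returns the running error ratio, keeping
// the sentinel -1 (meaning "no data") while no requests were seen.
func aggregateSample(value float64, responseCode string, requestRate, errorRate *float64) float64 {
	*requestRate += value
	if len(responseCode) > 0 && (responseCode[0] == '4' || responseCode[0] == '5') {
		*errorRate += value
	}
	if *requestRate == 0 {
		return -1
	}
	return *errorRate / *requestRate
}

func main() {
	var req, errs float64
	aggregateSample(8, "200", &req, &errs)
	ratio := aggregateSample(2, "503", &req, &errs)
	fmt.Println(ratio) // 2 errors out of 10 requests
}
```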

vendor/github.com/kiali/kiali/models/istio_config.go generated vendored Normal file

@@ -0,0 +1,57 @@
package models
// IstioConfigList istioConfigList
//
// This type is used for returning a response of IstioConfigList
//
// swagger:model IstioConfigList
type IstioConfigList struct {
// The namespace of istioConfiglist
//
// required: true
Namespace Namespace `json:"namespace"`
Gateways Gateways `json:"gateways"`
VirtualServices VirtualServices `json:"virtualServices"`
DestinationRules DestinationRules `json:"destinationRules"`
ServiceEntries ServiceEntries `json:"serviceEntries"`
Rules IstioRules `json:"rules"`
Adapters IstioAdapters `json:"adapters"`
Templates IstioTemplates `json:"templates"`
QuotaSpecs QuotaSpecs `json:"quotaSpecs"`
QuotaSpecBindings QuotaSpecBindings `json:"quotaSpecBindings"`
Policies Policies `json:"policies"`
MeshPolicies MeshPolicies `json:"meshPolicies"`
ClusterRbacConfigs ClusterRbacConfigs `json:"clusterRbacConfigs"`
ServiceRoles ServiceRoles `json:"serviceRoles"`
ServiceRoleBindings ServiceRoleBindings `json:"serviceRoleBindings"`
IstioValidations IstioValidations `json:"validations"`
}
type IstioConfigDetails struct {
Namespace Namespace `json:"namespace"`
ObjectType string `json:"objectType"`
Gateway *Gateway `json:"gateway"`
VirtualService *VirtualService `json:"virtualService"`
DestinationRule *DestinationRule `json:"destinationRule"`
ServiceEntry *ServiceEntry `json:"serviceEntry"`
Rule *IstioRule `json:"rule"`
Adapter *IstioAdapter `json:"adapter"`
Template *IstioTemplate `json:"template"`
QuotaSpec *QuotaSpec `json:"quotaSpec"`
QuotaSpecBinding *QuotaSpecBinding `json:"quotaSpecBinding"`
Policy *Policy `json:"policy"`
MeshPolicy *MeshPolicy `json:"meshPolicy"`
ClusterRbacConfig *ClusterRbacConfig `json:"clusterRbacConfig"`
ServiceRole *ServiceRole `json:"serviceRole"`
ServiceRoleBinding *ServiceRoleBinding `json:"serviceRoleBinding"`
Permissions ResourcePermissions `json:"permissions"`
IstioValidation *IstioValidation `json:"validation"`
}
// ResourcePermissions holds permission flags for an object type
// True means allowed.
type ResourcePermissions struct {
Create bool `json:"create"`
Update bool `json:"update"`
Delete bool `json:"delete"`
}

vendor/github.com/kiali/kiali/models/istio_rule.go generated vendored Normal file

@@ -0,0 +1,128 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type IstioRuleList struct {
Namespace Namespace `json:"namespace"`
Rules []IstioRule `json:"rules"`
}
// IstioRules istioRules
//
// This type is used for returning an array of IstioRules
//
// swagger:model istioRules
// An array of istioRule
// swagger:allOf
type IstioRules []IstioRule
// IstioRule istioRule
//
// This type is used for returning an IstioRule
//
// swagger:model istioRule
type IstioRule struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Match interface{} `json:"match"`
Actions interface{} `json:"actions"`
} `json:"spec"`
}
// IstioAdapters istioAdapters
//
// This type is used for returning an array of IstioAdapters
//
// swagger:model istioAdapters
// An array of istioAdapter
// swagger:allOf
type IstioAdapters []IstioAdapter
// IstioAdapter istioAdapter
//
// This type is used for returning an IstioAdapter
//
// swagger:model istioAdapter
type IstioAdapter struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec interface{} `json:"spec"`
Adapter string `json:"adapter"`
// The plural form is kept so the UI can build API paths from it
Adapters string `json:"adapters"`
}
// IstioTemplates istioTemplates
//
// This type is used for returning an array of IstioTemplates
//
// swagger:model istioTemplates
// An array of istioTemplates
// swagger:allOf
type IstioTemplates []IstioTemplate
// IstioTemplate istioTemplate
//
// This type is used for returning an IstioTemplate
//
// swagger:model istioTemplate
type IstioTemplate struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec interface{} `json:"spec"`
Template string `json:"template"`
// The plural form is kept so the UI can build API paths from it
Templates string `json:"templates"`
}
func CastIstioRulesCollection(rules []kubernetes.IstioObject) IstioRules {
istioRules := make([]IstioRule, len(rules))
for i, rule := range rules {
istioRules[i] = CastIstioRule(rule)
}
return istioRules
}
func CastIstioRule(rule kubernetes.IstioObject) IstioRule {
istioRule := IstioRule{}
istioRule.Metadata = rule.GetObjectMeta()
istioRule.Spec.Match = rule.GetSpec()["match"]
istioRule.Spec.Actions = rule.GetSpec()["actions"]
return istioRule
}
func CastIstioAdaptersCollection(adapters []kubernetes.IstioObject) IstioAdapters {
istioAdapters := make([]IstioAdapter, len(adapters))
for i, adapter := range adapters {
istioAdapters[i] = CastIstioAdapter(adapter)
}
return istioAdapters
}
func CastIstioAdapter(adapter kubernetes.IstioObject) IstioAdapter {
istioAdapter := IstioAdapter{}
istioAdapter.Metadata = adapter.GetObjectMeta()
istioAdapter.Spec = adapter.GetSpec()
istioAdapter.Adapter = adapter.GetObjectMeta().Labels["adapter"]
istioAdapter.Adapters = adapter.GetObjectMeta().Labels["adapters"]
return istioAdapter
}
func CastIstioTemplatesCollection(templates []kubernetes.IstioObject) IstioTemplates {
istioTemplates := make([]IstioTemplate, len(templates))
for i, template := range templates {
istioTemplates[i] = CastIstioTemplate(template)
}
return istioTemplates
}
func CastIstioTemplate(template kubernetes.IstioObject) IstioTemplate {
istioTemplate := IstioTemplate{}
istioTemplate.Metadata = template.GetObjectMeta()
istioTemplate.Spec = template.GetSpec()
istioTemplate.Template = template.GetObjectMeta().Labels["template"]
istioTemplate.Templates = template.GetObjectMeta().Labels["templates"]
return istioTemplate
}


@@ -0,0 +1,208 @@
package models
import (
"encoding/json"
)
// NamespaceValidations represents a set of IstioValidations grouped by namespace
type NamespaceValidations map[string]IstioValidations
// IstioValidationKey is the key value composed of an Istio ObjectType and Name.
type IstioValidationKey struct {
ObjectType string
Name string
}
// IstioValidations represents a set of IstioValidation grouped by IstioValidationKey.
type IstioValidations map[IstioValidationKey]*IstioValidation
// IstioValidation represents a list of checks associated to an Istio object.
// swagger:model
type IstioValidation struct {
// Name of the object itself
// required: true
// example: reviews
Name string `json:"name"`
// Type of the object
// required: true
// example: virtualservice
ObjectType string `json:"objectType"`
// Represents validity of the object: in case of warning, validity remains as true
// required: true
// example: false
Valid bool `json:"valid"`
// Array of checks. It might be empty.
Checks []*IstioCheck `json:"checks"`
}
// IstioCheck represents an individual check.
// swagger:model
type IstioCheck struct {
// Description of the check
// required: true
// example: Weight sum should be 100
Message string `json:"message"`
// Indicates the level of importance: error or warning
// required: true
// example: error
Severity SeverityLevel `json:"severity"`
// String that describes where in the YAML file the check is located
// example: spec/http[0]/route
Path string `json:"path"`
}
type SeverityLevel string
const (
ErrorSeverity SeverityLevel = "error"
WarningSeverity SeverityLevel = "warning"
)
var ObjectTypeSingular = map[string]string{
"gateways": "gateway",
"virtualservices": "virtualservice",
"destinationrules": "destinationrule",
"serviceentries": "serviceentry",
"rules": "rule",
"quotaspecs": "quotaspec",
"quotaspecbindings": "quotaspecbinding",
}
var checkDescriptors = map[string]IstioCheck{
"destinationrules.multimatch": {
Message: "More than one DestinationRules for the same host subset combination",
Severity: WarningSeverity,
},
"destinationrules.nodest.matchingworkload": {
Message: "This host has no matching workloads",
Severity: ErrorSeverity,
},
"destinationrules.nodest.subsetlabels": {
Message: "This subset's labels are not found in any matching host",
Severity: ErrorSeverity,
},
"destinationrules.trafficpolicy.notlssettings": {
Message: "mTLS settings of a non-local Destination Rule are overridden",
Severity: WarningSeverity,
},
"gateways.multimatch": {
Message: "More than one Gateway for the same host port combination",
Severity: WarningSeverity,
},
"port.name.mismatch": {
Message: "Port name must follow <protocol>[-suffix] form",
Severity: ErrorSeverity,
},
"virtualservices.nogateway": {
Message: "VirtualService is pointing to a non-existent gateway",
Severity: ErrorSeverity,
},
"virtualservices.nohost.hostnotfound": {
Message: "DestinationWeight on route doesn't have a valid service (host not found)",
Severity: ErrorSeverity,
},
"virtualservices.nohost.invalidprotocol": {
Message: "VirtualService doesn't define any valid route protocol",
Severity: ErrorSeverity,
},
"virtualservices.route.numericweight": {
Message: "Weight must be a number",
Severity: ErrorSeverity,
},
"virtualservices.route.weightrange": {
Message: "Weight should be between 0 and 100",
Severity: ErrorSeverity,
},
"virtualservices.route.weightsum": {
Message: "Weight sum should be 100",
Severity: ErrorSeverity,
},
"virtualservices.route.allweightspresent": {
Message: "All routes should have weight",
Severity: WarningSeverity,
},
"virtualservices.singlehost": {
Message: "More than one Virtual Service for same host",
Severity: WarningSeverity,
},
"virtualservices.subsetpresent.destinationmandatory": {
Message: "Destination field is mandatory",
Severity: ErrorSeverity,
},
"virtualservices.subsetpresent.subsetnotfound": {
Message: "Subset not found",
Severity: WarningSeverity,
},
}
func Build(checkId string, path string) IstioCheck {
check := checkDescriptors[checkId]
check.Path = path
return check
}
func BuildKey(objectType, name string) IstioValidationKey {
return IstioValidationKey{ObjectType: objectType, Name: name}
}
func CheckMessage(checkId string) string {
return checkDescriptors[checkId].Message
}
func (iv IstioValidations) FilterByKey(objectType, name string) IstioValidations {
fiv := IstioValidations{}
for k, v := range iv {
if k.Name == name && k.ObjectType == objectType {
fiv[k] = v
}
}
return fiv
}
// FilterByTypes takes an input as ObjectTypes, transforms to singular types and filters the validations
func (iv IstioValidations) FilterByTypes(objectTypes []string) IstioValidations {
types := make(map[string]bool, len(objectTypes))
for _, objectType := range objectTypes {
types[ObjectTypeSingular[objectType]] = true
}
fiv := IstioValidations{}
for k, v := range iv {
if _, found := types[k.ObjectType]; found {
fiv[k] = v
}
}
return fiv
}
func (iv IstioValidations) MergeValidations(validations IstioValidations) IstioValidations {
for key, validation := range validations {
v, ok := iv[key]
if !ok {
iv[key] = validation
} else {
v.Checks = append(v.Checks, validation.Checks...)
v.Valid = v.Valid && validation.Valid
}
}
return iv
}
// MarshalJSON implements the json.Marshaler interface.
func (iv IstioValidations) MarshalJSON() ([]byte, error) {
out := make(map[string]map[string]*IstioValidation)
for k, v := range iv {
_, ok := out[k.ObjectType]
if !ok {
out[k.ObjectType] = make(map[string]*IstioValidation)
}
out[k.ObjectType][k.Name] = v
}
return json.Marshal(out)
}

vendor/github.com/kiali/kiali/models/jaeger_info.go generated vendored Normal file

@@ -0,0 +1,6 @@
package models
// JaegerInfo provides information to access Jaeger UI
type JaegerInfo struct {
URL string `json:"url"`
}

vendor/github.com/kiali/kiali/models/mesh_policy.go generated vendored Normal file

@@ -0,0 +1,38 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type MeshPolicies []MeshPolicy
type MeshPolicy struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Targets interface{} `json:"targets"`
Peers interface{} `json:"peers"`
PeerIsOptional interface{} `json:"peerIsOptional"`
Origins interface{} `json:"origins"`
OriginIsOptional interface{} `json:"originIsOptional"`
PrincipalBinding interface{} `json:"principalBinding"`
} `json:"spec"`
}
func (mps *MeshPolicies) Parse(meshPolicies []kubernetes.IstioObject) {
for _, qs := range meshPolicies {
meshPolicy := MeshPolicy{}
meshPolicy.Parse(qs)
*mps = append(*mps, meshPolicy)
}
}
func (mp *MeshPolicy) Parse(meshPolicy kubernetes.IstioObject) {
mp.Metadata = meshPolicy.GetObjectMeta()
mp.Spec.Targets = meshPolicy.GetSpec()["targets"]
mp.Spec.Peers = meshPolicy.GetSpec()["peers"]
mp.Spec.PeerIsOptional = meshPolicy.GetSpec()["peerIsOptional"]
mp.Spec.Origins = meshPolicy.GetSpec()["origins"]
mp.Spec.OriginIsOptional = meshPolicy.GetSpec()["originIsOptional"]
mp.Spec.PrincipalBinding = meshPolicy.GetSpec()["principalBinding"]
}

vendor/github.com/kiali/kiali/models/namespace.go generated vendored Normal file

@@ -0,0 +1,61 @@
package models
import (
"k8s.io/api/core/v1"
"time"
osv1 "github.com/openshift/api/project/v1"
)
// A Namespace provides a scope for names.
// This type is used to describe a set of objects.
//
// swagger:model namespace
type Namespace struct {
// The id of the namespace.
//
// example: istio-system
// required: true
Name string `json:"name"`
// Creation date of the namespace.
// There is no need to expose this through the API, so it is
// set to be ignored by the JSON package.
//
// required: true
CreationTimestamp time.Time `json:"-"`
}
func CastNamespaceCollection(ns []v1.Namespace) []Namespace {
namespaces := make([]Namespace, len(ns))
for i, item := range ns {
namespaces[i] = CastNamespace(item)
}
return namespaces
}
func CastNamespace(ns v1.Namespace) Namespace {
namespace := Namespace{}
namespace.Name = ns.Name
namespace.CreationTimestamp = ns.CreationTimestamp.Time
return namespace
}
func CastProjectCollection(ps []osv1.Project) []Namespace {
namespaces := make([]Namespace, len(ps))
for i, project := range ps {
namespaces[i] = CastProject(project)
}
return namespaces
}
func CastProject(p osv1.Project) Namespace {
namespace := Namespace{}
namespace.Name = p.Name
namespace.CreationTimestamp = p.CreationTimestamp.Time
return namespace
}

vendor/github.com/kiali/kiali/models/pod.go generated vendored Normal file

@@ -0,0 +1,123 @@
package models
import (
"encoding/json"
"strings"
"k8s.io/api/core/v1"
"github.com/kiali/kiali/config"
)
// Pods is an alias for a list of Pod structs
type Pods []*Pod
// Pod holds a subset of v1.Pod data that is meaningful in Kiali
type Pod struct {
Name string `json:"name"`
Labels map[string]string `json:"labels"`
CreatedAt string `json:"createdAt"`
CreatedBy []Reference `json:"createdBy"`
IstioContainers []*ContainerInfo `json:"istioContainers"`
IstioInitContainers []*ContainerInfo `json:"istioInitContainers"`
Status string `json:"status"`
AppLabel bool `json:"appLabel"`
VersionLabel bool `json:"versionLabel"`
RuntimesAnnotation []string `json:"runtimesAnnotation"`
}
// Reference holds some information on the pod creator
type Reference struct {
Name string `json:"name"`
Kind string `json:"kind"`
}
// ContainerInfo holds container name and image
type ContainerInfo struct {
Name string `json:"name"`
Image string `json:"image"`
}
// Parse extracts desired information from a k8s []Pod list
func (pods *Pods) Parse(list []v1.Pod) {
if list == nil {
return
}
for _, pod := range list {
casted := Pod{}
casted.Parse(&pod)
*pods = append(*pods, &casted)
}
}
// Below types are used for unmarshalling json
type createdBy struct {
Reference Reference `json:"reference"`
}
type sideCarStatus struct {
Containers []string `json:"containers"`
InitContainers []string `json:"initContainers"`
}
// Parse extracts desired information from k8s Pod info
func (pod *Pod) Parse(p *v1.Pod) {
pod.Name = p.Name
pod.Labels = p.Labels
pod.CreatedAt = formatTime(p.CreationTimestamp.Time)
for _, ref := range p.OwnerReferences {
pod.CreatedBy = append(pod.CreatedBy, Reference{
Name: ref.Name,
Kind: ref.Kind,
})
}
conf := config.Get()
// Parse some annotations
if jSon, ok := p.Annotations[conf.ExternalServices.Istio.IstioSidecarAnnotation]; ok {
var scs sideCarStatus
err := json.Unmarshal([]byte(jSon), &scs)
if err == nil {
for _, name := range scs.InitContainers {
container := ContainerInfo{
Name: name,
Image: lookupImage(name, p.Spec.InitContainers)}
pod.IstioInitContainers = append(pod.IstioInitContainers, &container)
}
for _, name := range scs.Containers {
container := ContainerInfo{
Name: name,
Image: lookupImage(name, p.Spec.Containers)}
pod.IstioContainers = append(pod.IstioContainers, &container)
}
}
}
// Check for custom dashboards annotation
if rawRuntimes, ok := p.Annotations["kiali.io/runtimes"]; ok {
pod.RuntimesAnnotation = strings.Split(strings.TrimSpace(rawRuntimes), ",")
}
pod.Status = string(p.Status.Phase)
_, pod.AppLabel = p.Labels[conf.IstioLabels.AppLabelName]
_, pod.VersionLabel = p.Labels[conf.IstioLabels.VersionLabelName]
}
func lookupImage(containerName string, containers []v1.Container) string {
for _, c := range containers {
if c.Name == containerName {
return c.Image
}
}
return ""
}
func (pods Pods) HasIstioSideCar() bool {
for _, pod := range pods {
if pod.HasIstioSideCar() {
return true
}
}
return false
}
func (pod Pod) HasIstioSideCar() bool {
return len(pod.IstioContainers) > 0
}

vendor/github.com/kiali/kiali/models/policy.go generated vendored Normal file

@@ -0,0 +1,38 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type Policies []Policy
type Policy struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Targets interface{} `json:"targets"`
Peers interface{} `json:"peers"`
PeerIsOptional interface{} `json:"peerIsOptional"`
Origins interface{} `json:"origins"`
OriginIsOptional interface{} `json:"originIsOptional"`
PrincipalBinding interface{} `json:"principalBinding"`
} `json:"spec"`
}
func (ps *Policies) Parse(policies []kubernetes.IstioObject) {
for _, qs := range policies {
policy := Policy{}
policy.Parse(qs)
*ps = append(*ps, policy)
}
}
func (p *Policy) Parse(policy kubernetes.IstioObject) {
p.Metadata = policy.GetObjectMeta()
p.Spec.Targets = policy.GetSpec()["targets"]
p.Spec.Peers = policy.GetSpec()["peers"]
p.Spec.PeerIsOptional = policy.GetSpec()["peerIsOptional"]
p.Spec.Origins = policy.GetSpec()["origins"]
p.Spec.OriginIsOptional = policy.GetSpec()["originIsOptional"]
p.Spec.PrincipalBinding = policy.GetSpec()["principalBinding"]
}

vendor/github.com/kiali/kiali/models/port.go generated vendored Normal file

@@ -0,0 +1,38 @@
package models
import "k8s.io/api/core/v1"
type Ports []Port
type Port struct {
Name string `json:"name"`
Protocol string `json:"protocol"`
Port int32 `json:"port"`
}
func (ports *Ports) Parse(ps []v1.ServicePort) {
for _, servicePort := range ps {
port := Port{}
port.Parse(servicePort)
*ports = append(*ports, port)
}
}
func (port *Port) Parse(p v1.ServicePort) {
port.Name = p.Name
port.Protocol = string(p.Protocol)
port.Port = p.Port
}
func (ports *Ports) ParseEndpointPorts(ps []v1.EndpointPort) {
for _, endpointPort := range ps {
port := Port{}
port.ParseEndpointPort(endpointPort)
*ports = append(*ports, port)
}
}
func (port *Port) ParseEndpointPort(p v1.EndpointPort) {
port.Name = p.Name
port.Protocol = string(p.Protocol)
port.Port = p.Port
}

vendor/github.com/kiali/kiali/models/quota_spec.go generated vendored Normal file

@@ -0,0 +1,28 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type QuotaSpecs []QuotaSpec
type QuotaSpec struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Rules interface{} `json:"rules"`
} `json:"spec"`
}
func (qss *QuotaSpecs) Parse(quotaSpecs []kubernetes.IstioObject) {
for _, qs := range quotaSpecs {
quotaSpec := QuotaSpec{}
quotaSpec.Parse(qs)
*qss = append(*qss, quotaSpec)
}
}
func (qs *QuotaSpec) Parse(quotaSpec kubernetes.IstioObject) {
qs.Metadata = quotaSpec.GetObjectMeta()
qs.Spec.Rules = quotaSpec.GetSpec()["rules"]
}


@@ -0,0 +1,30 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type QuotaSpecBindings []QuotaSpecBinding
type QuotaSpecBinding struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
QuotaSpecs interface{} `json:"quotaSpecs"`
Services interface{} `json:"services"`
} `json:"spec"`
}
func (qsbs *QuotaSpecBindings) Parse(quotaSpecBindings []kubernetes.IstioObject) {
for _, qsb := range quotaSpecBindings {
quotaSpecBinding := QuotaSpecBinding{}
quotaSpecBinding.Parse(qsb)
*qsbs = append(*qsbs, quotaSpecBinding)
}
}
func (qsb *QuotaSpecBinding) Parse(quotaSpecBinding kubernetes.IstioObject) {
qsb.Metadata = quotaSpecBinding.GetObjectMeta()
qsb.Spec.QuotaSpecs = quotaSpecBinding.GetSpec()["quotaSpecs"]
qsb.Spec.Services = quotaSpecBinding.GetSpec()["services"]
}

vendor/github.com/kiali/kiali/models/service.go generated vendored Normal file

@@ -0,0 +1,125 @@
package models
import (
v1 "k8s.io/api/core/v1"
"github.com/kiali/kiali/kubernetes"
"github.com/kiali/kiali/prometheus"
)
type ServiceOverview struct {
// Name of the Service
// required: true
// example: reviews-v1
Name string `json:"name"`
// Defines whether Pods related to this Service have an Istio sidecar deployed
// required: true
// example: true
IstioSidecar bool `json:"istioSidecar"`
// Has label app
// required: true
// example: true
AppLabel bool `json:"appLabel"`
}
type ServiceList struct {
Namespace Namespace `json:"namespace"`
Services []ServiceOverview `json:"services"`
}
type ServiceDetails struct {
Service Service `json:"service"`
IstioSidecar bool `json:"istioSidecar"`
Endpoints Endpoints `json:"endpoints"`
VirtualServices VirtualServices `json:"virtualServices"`
DestinationRules DestinationRules `json:"destinationRules"`
Dependencies map[string][]SourceWorkload `json:"dependencies"`
Workloads WorkloadOverviews `json:"workloads"`
Health ServiceHealth `json:"health"`
Validations IstioValidations `json:"validations"`
ErrorTraces int `json:"errorTraces"`
}
type Services []*Service
type Service struct {
Name string `json:"name"`
CreatedAt string `json:"createdAt"`
ResourceVersion string `json:"resourceVersion"`
Namespace Namespace `json:"namespace"`
Labels map[string]string `json:"labels"`
Type string `json:"type"`
Ip string `json:"ip"`
Ports Ports `json:"ports"`
}
// SourceWorkload holds workload identifiers used for service dependencies
type SourceWorkload struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
}
func (ss *Services) Parse(services []v1.Service) {
if ss == nil {
return
}
for _, item := range services {
service := &Service{}
service.Parse(&item)
*ss = append(*ss, service)
}
}
func (s *Service) Parse(service *v1.Service) {
if service != nil {
s.Name = service.Name
s.Namespace = Namespace{Name: service.Namespace}
s.Labels = service.Labels
s.Type = string(service.Spec.Type)
s.Ip = service.Spec.ClusterIP
s.CreatedAt = formatTime(service.CreationTimestamp.Time)
s.ResourceVersion = service.ResourceVersion
(&s.Ports).Parse(service.Spec.Ports)
}
}
func (s *ServiceDetails) SetService(svc *v1.Service) {
s.Service.Parse(svc)
}
func (s *ServiceDetails) SetEndpoints(eps *v1.Endpoints) {
(&s.Endpoints).Parse(eps)
}
func (s *ServiceDetails) SetPods(pods []v1.Pod) {
mPods := Pods{}
mPods.Parse(pods)
s.IstioSidecar = mPods.HasIstioSideCar()
}
func (s *ServiceDetails) SetVirtualServices(vs []kubernetes.IstioObject, canCreate, canUpdate, canDelete bool) {
s.VirtualServices.Permissions = ResourcePermissions{Create: canCreate, Update: canUpdate, Delete: canDelete}
(&s.VirtualServices).Parse(vs)
}
func (s *ServiceDetails) SetDestinationRules(dr []kubernetes.IstioObject, canCreate, canUpdate, canDelete bool) {
s.DestinationRules.Permissions = ResourcePermissions{Create: canCreate, Update: canUpdate, Delete: canDelete}
(&s.DestinationRules).Parse(dr)
}
func (s *ServiceDetails) SetErrorTraces(errorTraces int) {
s.ErrorTraces = errorTraces
}
func (s *ServiceDetails) SetSourceWorkloads(sw map[string][]prometheus.Workload) {
// Transform dependencies for UI
s.Dependencies = make(map[string][]SourceWorkload)
for version, workloads := range sw {
for _, workload := range workloads {
s.Dependencies[version] = append(s.Dependencies[version], SourceWorkload{
Name: workload.Workload,
Namespace: workload.Namespace,
})
}
}
}

vendor/github.com/kiali/kiali/models/service_entry.go generated vendored Normal file

@@ -0,0 +1,38 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type ServiceEntries []ServiceEntry
type ServiceEntry struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Hosts interface{} `json:"hosts"`
Addresses interface{} `json:"addresses"`
Ports interface{} `json:"ports"`
Location interface{} `json:"location"`
Resolution interface{} `json:"resolution"`
Endpoints interface{} `json:"endpoints"`
} `json:"spec"`
}
func (ses *ServiceEntries) Parse(serviceEntries []kubernetes.IstioObject) {
for _, se := range serviceEntries {
serviceEntry := ServiceEntry{}
serviceEntry.Parse(se)
*ses = append(*ses, serviceEntry)
}
}
func (se *ServiceEntry) Parse(serviceEntry kubernetes.IstioObject) {
se.Metadata = serviceEntry.GetObjectMeta()
se.Spec.Hosts = serviceEntry.GetSpec()["hosts"]
se.Spec.Addresses = serviceEntry.GetSpec()["addresses"]
se.Spec.Ports = serviceEntry.GetSpec()["ports"]
se.Spec.Location = serviceEntry.GetSpec()["location"]
se.Spec.Resolution = serviceEntry.GetSpec()["resolution"]
se.Spec.Endpoints = serviceEntry.GetSpec()["endpoints"]
}

vendor/github.com/kiali/kiali/models/service_role.go generated vendored Normal file

@@ -0,0 +1,28 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type ServiceRoles []ServiceRole
type ServiceRole struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Rules interface{} `json:"rules"`
} `json:"spec"`
}
func (srs *ServiceRoles) Parse(serviceRoles []kubernetes.IstioObject) {
for _, sr := range serviceRoles {
serviceRole := ServiceRole{}
serviceRole.Parse(sr)
*srs = append(*srs, serviceRole)
}
}
func (sr *ServiceRole) Parse(serviceRole kubernetes.IstioObject) {
sr.Metadata = serviceRole.GetObjectMeta()
sr.Spec.Rules = serviceRole.GetSpec()["rules"]
}


@@ -0,0 +1,30 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
type ServiceRoleBindings []ServiceRoleBinding
type ServiceRoleBinding struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Subjects interface{} `json:"subjects"`
RoleRef interface{} `json:"roleRef"`
} `json:"spec"`
}
func (srbs *ServiceRoleBindings) Parse(serviceRoleBindings []kubernetes.IstioObject) {
for _, srb := range serviceRoleBindings {
serviceRoleBinding := ServiceRoleBinding{}
serviceRoleBinding.Parse(srb)
*srbs = append(*srbs, serviceRoleBinding)
}
}
func (srb *ServiceRoleBinding) Parse(serviceRoleBinding kubernetes.IstioObject) {
srb.Metadata = serviceRoleBinding.GetObjectMeta()
srb.Spec.Subjects = serviceRoleBinding.GetSpec()["subjects"]
srb.Spec.RoleRef = serviceRoleBinding.GetSpec()["roleRef"]
}

vendor/github.com/kiali/kiali/models/util.go generated vendored Normal file

@@ -0,0 +1,7 @@
package models
import "time"
func formatTime(t time.Time) string {
return t.Format(time.RFC3339)
}


@@ -0,0 +1,68 @@
package models
import (
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/kiali/kiali/kubernetes"
)
// VirtualServices virtualServices
//
// This type is used for returning an array of VirtualServices with some permission flags
//
// swagger:model virtualServices
// An array of virtualService
// swagger:allOf
type VirtualServices struct {
Permissions ResourcePermissions `json:"permissions"`
Items []VirtualService `json:"items"`
}
// VirtualService virtualService
//
// This type is used for returning a VirtualService
//
// swagger:model virtualService
type VirtualService struct {
Metadata meta_v1.ObjectMeta `json:"metadata"`
Spec struct {
Hosts interface{} `json:"hosts"`
Gateways interface{} `json:"gateways"`
Http interface{} `json:"http"`
Tcp interface{} `json:"tcp"`
Tls interface{} `json:"tls"`
} `json:"spec"`
}
func (vServices *VirtualServices) Parse(virtualServices []kubernetes.IstioObject) {
vServices.Items = []VirtualService{}
for _, vs := range virtualServices {
virtualService := VirtualService{}
virtualService.Parse(vs)
vServices.Items = append(vServices.Items, virtualService)
}
}
func (vService *VirtualService) Parse(virtualService kubernetes.IstioObject) {
vService.Metadata = virtualService.GetObjectMeta()
vService.Spec.Hosts = virtualService.GetSpec()["hosts"]
vService.Spec.Gateways = virtualService.GetSpec()["gateways"]
vService.Spec.Http = virtualService.GetSpec()["http"]
vService.Spec.Tcp = virtualService.GetSpec()["tcp"]
vService.Spec.Tls = virtualService.GetSpec()["tls"]
}
// IsValidHost returns true if VirtualService hosts applies to the service
func (vService *VirtualService) IsValidHost(namespace string, serviceName string) bool {
if serviceName == "" {
return false
}
if hosts, ok := vService.Spec.Hosts.([]interface{}); ok {
for _, host := range hosts {
if kubernetes.FilterByHost(host.(string), serviceName, namespace) {
return true
}
}
}
return false
}
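`IsValidHost` delegates the per-host check to `kubernetes.FilterByHost`. A simplified sketch of the matching semantics that check plausibly implements (the exact rules live in the kubernetes package, so this is an assumption): a host entry matches either the short service name or the fully qualified `<service>.<namespace>.svc...` form.

```go
package main

import "strings"

// matchesHost is a hypothetical, simplified stand-in for
// kubernetes.FilterByHost: match the bare service name, or a
// FQDN rooted at "<service>.<namespace>.svc".
func matchesHost(host, serviceName, namespace string) bool {
	if host == serviceName {
		return true
	}
	return strings.HasPrefix(host, serviceName+"."+namespace+".svc")
}
```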

vendor/github.com/kiali/kiali/models/workload.go generated vendored Normal file

@@ -0,0 +1,373 @@
package models
import (
osappsv1 "github.com/openshift/api/apps/v1"
"k8s.io/api/apps/v1beta1"
"k8s.io/api/apps/v1beta2"
batch_v1 "k8s.io/api/batch/v1"
batch_v1beta1 "k8s.io/api/batch/v1beta1"
"k8s.io/api/core/v1"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/prometheus"
)
type WorkloadList struct {
// Namespace where the workloads live
// required: true
// example: bookinfo
Namespace Namespace `json:"namespace"`
// Workloads for a given namespace
// required: true
Workloads []WorkloadListItem `json:"workloads"`
}
// WorkloadListItem has the necessary information to display the console workload list
type WorkloadListItem struct {
// Name of the workload
// required: true
// example: reviews-v1
Name string `json:"name"`
// Type of the workload
// required: true
// example: deployment
Type string `json:"type"`
// Creation timestamp (in RFC3339 format)
// required: true
// example: 2018-07-31T12:24:17Z
CreatedAt string `json:"createdAt"`
// Kubernetes ResourceVersion
// required: true
// example: 192892127
ResourceVersion string `json:"resourceVersion"`
// Defines whether Pods related to this Workload have an Istio sidecar deployed
// required: true
// example: true
IstioSidecar bool `json:"istioSidecar"`
// Workload labels
Labels map[string]string `json:"labels"`
// Defines whether Pods related to this Workload have the app label
// required: true
// example: true
AppLabel bool `json:"appLabel"`
// Defines whether Pods related to this Workload have the version label
// required: true
// example: true
VersionLabel bool `json:"versionLabel"`
// Number of current workload pods
// required: true
// example: 1
PodCount int `json:"podCount"`
}
type WorkloadOverviews []*WorkloadListItem
// Workload has the details of a workload
type Workload struct {
WorkloadListItem
// Number of desired replicas
// required: true
// example: 2
Replicas int32 `json:"replicas"`
// Number of available replicas
// required: true
// example: 1
AvailableReplicas int32 `json:"availableReplicas"`
// Number of unavailable replicas
// required: true
// example: 1
UnavailableReplicas int32 `json:"unavailableReplicas"`
// Pods bound to the workload
Pods Pods `json:"pods"`
// Services that match workload selector
Services Services `json:"services"`
DestinationServices []DestinationService `json:"destinationServices"`
// Runtimes and associated dashboards
Runtimes []Runtime `json:"runtimes"`
}
// DestinationService holds service identifiers used for workload dependencies
type DestinationService struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
}
type Workloads []*Workload
func (workload *WorkloadListItem) ParseWorkload(w *Workload) {
conf := config.Get()
workload.Name = w.Name
workload.Type = w.Type
workload.CreatedAt = w.CreatedAt
workload.ResourceVersion = w.ResourceVersion
workload.IstioSidecar = w.Pods.HasIstioSideCar()
workload.Labels = w.Labels
workload.PodCount = len(w.Pods)
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = w.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = w.Labels[conf.IstioLabels.VersionLabelName]
}
func (workload *Workload) ParseDeployment(d *v1beta1.Deployment) {
conf := config.Get()
workload.Name = d.Name
workload.Type = "Deployment"
workload.Labels = d.Spec.Template.Labels
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
workload.CreatedAt = formatTime(d.CreationTimestamp.Time)
workload.ResourceVersion = d.ResourceVersion
workload.Replicas = d.Status.Replicas
workload.AvailableReplicas = d.Status.AvailableReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
}
func (workload *Workload) ParseReplicaSet(r *v1beta2.ReplicaSet) {
conf := config.Get()
workload.Name = r.Name
workload.Type = "ReplicaSet"
workload.Labels = r.Spec.Template.Labels
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
workload.CreatedAt = formatTime(r.CreationTimestamp.Time)
workload.ResourceVersion = r.ResourceVersion
workload.Replicas = r.Status.Replicas
workload.AvailableReplicas = r.Status.AvailableReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
}
func (workload *Workload) ParseReplicationController(r *v1.ReplicationController) {
conf := config.Get()
workload.Name = r.Name
workload.Type = "ReplicationController"
workload.Labels = r.Spec.Template.Labels
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
workload.CreatedAt = formatTime(r.CreationTimestamp.Time)
workload.ResourceVersion = r.ResourceVersion
workload.Replicas = r.Status.Replicas
workload.AvailableReplicas = r.Status.AvailableReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
}
func (workload *Workload) ParseDeploymentConfig(dc *osappsv1.DeploymentConfig) {
workload.Name = dc.Name
workload.Type = "DeploymentConfig"
workload.Labels = dc.Spec.Template.Labels
workload.CreatedAt = formatTime(dc.CreationTimestamp.Time)
workload.ResourceVersion = dc.ResourceVersion
workload.Replicas = dc.Status.Replicas
workload.AvailableReplicas = dc.Status.AvailableReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
}
func (workload *Workload) ParseStatefulSet(s *v1beta2.StatefulSet) {
conf := config.Get()
workload.Name = s.Name
workload.Type = "StatefulSet"
workload.Labels = s.Spec.Template.Labels
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
workload.CreatedAt = formatTime(s.CreationTimestamp.Time)
workload.ResourceVersion = s.ResourceVersion
workload.Replicas = s.Status.Replicas
workload.AvailableReplicas = s.Status.ReadyReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
}
func (workload *Workload) ParsePod(pod *v1.Pod) {
conf := config.Get()
workload.Name = pod.Name
workload.Type = "Pod"
workload.Labels = pod.Labels
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
workload.CreatedAt = formatTime(pod.CreationTimestamp.Time)
workload.ResourceVersion = pod.ResourceVersion
var podReplicas, podAvailableReplicas int32
podReplicas = 1
podAvailableReplicas = 1
// When a Workload is a single pod we don't have access to any controller replicas
// In this case we differentiate a pod that terminated successfully from one that is not running
// There are probably more cases to refine here
if pod.Status.Phase == "Succeeded" {
podReplicas = 0
podAvailableReplicas = 0
} else if pod.Status.Phase != "Running" {
podAvailableReplicas = 0
}
workload.Replicas = podReplicas
workload.AvailableReplicas = podAvailableReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
}
func (workload *Workload) ParseJob(job *batch_v1.Job) {
conf := config.Get()
workload.Name = job.Name
workload.Type = "Job"
workload.Labels = job.Labels
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
workload.CreatedAt = formatTime(job.CreationTimestamp.Time)
workload.ResourceVersion = job.ResourceVersion
workload.Replicas = job.Status.Active + job.Status.Succeeded + job.Status.Failed
workload.AvailableReplicas = job.Status.Active + job.Status.Succeeded
// Jobs use different parameters to indicate unavailable replicas;
// failed pods map naturally to unavailable in this context
workload.UnavailableReplicas = job.Status.Failed
}
func (workload *Workload) ParseCronJob(cnjb *batch_v1beta1.CronJob) {
conf := config.Get()
workload.Name = cnjb.Name
workload.Type = "CronJob"
workload.Labels = cnjb.Labels
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
workload.CreatedAt = formatTime(cnjb.CreationTimestamp.Time)
workload.ResourceVersion = cnjb.ResourceVersion
// We don't have the information from this controller
// We infer the number of replicas as the number of pods not in the Succeeded state
// and the number available as the number of pods in the Running state
// If this is not enough, we should fetch the controller; that is not done now to avoid overloading Kiali by fetching all controller types
var podReplicas, podAvailableReplicas int32
podReplicas = 0
podAvailableReplicas = 0
for _, pod := range workload.Pods {
if pod.Status != "Succeeded" {
podReplicas++
}
if pod.Status == "Running" {
podAvailableReplicas++
}
}
workload.Replicas = podReplicas
workload.AvailableReplicas = podAvailableReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
if podReplicas > podAvailableReplicas {
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
} else {
// In this case a Job may have all pods terminated
// That is not an unhealthy condition
workload.UnavailableReplicas = 0
}
}
func (workload *Workload) ParsePods(controllerName string, controllerType string, pods []v1.Pod) {
conf := config.Get()
workload.Name = controllerName
workload.Type = controllerType
// We don't have the information from this controller
// We infer the number of replicas as the number of pods not in the Succeeded state
// and the number available as the number of pods in the Running state
// If this is not enough, we should fetch the controller; that is not done now to avoid overloading Kiali by fetching all controller types
var podReplicas, podAvailableReplicas int32
podReplicas = 0
podAvailableReplicas = 0
for _, pod := range pods {
if pod.Status.Phase != "Succeeded" {
podReplicas++
}
if pod.Status.Phase == "Running" {
podAvailableReplicas++
}
}
workload.Replicas = podReplicas
workload.AvailableReplicas = podAvailableReplicas
// Deployments/ReplicaSets have different parameters to indicate unavailable replicas;
// calculating "desired" - "available" seems reasonable in this context
if podReplicas > podAvailableReplicas {
workload.UnavailableReplicas = workload.Replicas - workload.AvailableReplicas
} else {
// In this case a Job may have all pods terminated
// That is not an unhealthy condition
workload.UnavailableReplicas = 0
}
// We use one pod as a template for labels
// There could be corner cases this does not cover; then we should support more controllers
if len(pods) > 0 {
workload.Labels = pods[0].Labels
workload.CreatedAt = formatTime(pods[0].CreationTimestamp.Time)
workload.ResourceVersion = pods[0].ResourceVersion
}
// Check the app and version labels required by Istio on template Pods
_, workload.AppLabel = workload.Labels[conf.IstioLabels.AppLabelName]
_, workload.VersionLabel = workload.Labels[conf.IstioLabels.VersionLabelName]
}
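The replica inference in `ParsePods` (and `ParseCronJob`) reduces to a small counting rule over pod phases. A sketch of just that rule, with plain strings standing in for `v1.PodPhase`:

```go
package main

// inferReplicas reproduces ParsePods' heuristic:
// replicas   = pods not in the Succeeded state,
// available  = pods in the Running state,
// unavailable = the difference, but never negative (all pods
// terminated successfully is not an unhealthy condition).
func inferReplicas(phases []string) (replicas, available, unavailable int32) {
	for _, phase := range phases {
		if phase != "Succeeded" {
			replicas++
		}
		if phase == "Running" {
			available++
		}
	}
	if replicas > available {
		unavailable = replicas - available
	}
	return
}
```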
func (workload *Workload) SetPods(pods []v1.Pod) {
workload.Pods.Parse(pods)
workload.IstioSidecar = workload.Pods.HasIstioSideCar()
}
func (workload *Workload) SetServices(svcs []v1.Service) {
workload.Services.Parse(svcs)
}
func (workload *Workload) SetDestinationServices(dss []prometheus.Service) {
workload.DestinationServices = make([]DestinationService, 0, len(dss))
for _, service := range dss {
workload.DestinationServices = append(workload.DestinationServices, DestinationService{
Name: service.ServiceName,
Namespace: service.Namespace,
})
}
}

vendor/github.com/kiali/kiali/prometheus/client.go generated vendored Normal file

@@ -0,0 +1,253 @@
package prometheus
import (
"context"
"errors"
"fmt"
"time"
"github.com/prometheus/client_golang/api"
"github.com/prometheus/client_golang/api/prometheus/v1"
"github.com/prometheus/common/model"
"github.com/kiali/kiali/config"
"github.com/kiali/kiali/log"
"github.com/kiali/kiali/prometheus/internalmetrics"
"github.com/kiali/kiali/util"
)
// ClientInterface for mocks (only mocked functions are necessary here)
type ClientInterface interface {
FetchHistogramRange(metricName, labels, grouping string, q *BaseMetricsQuery) Histogram
FetchRange(metricName, labels, grouping, aggregator string, q *BaseMetricsQuery) *Metric
FetchRateRange(metricName, labels, grouping string, q *BaseMetricsQuery) *Metric
GetAllRequestRates(namespace, ratesInterval string, queryTime time.Time) (model.Vector, error)
GetAppRequestRates(namespace, app, ratesInterval string, queryTime time.Time) (model.Vector, model.Vector, error)
GetConfiguration() (v1.ConfigResult, error)
GetDestinationServices(namespace string, namespaceCreationTime time.Time, workloadname string) ([]Service, error)
GetFlags() (v1.FlagsResult, error)
GetMetrics(query *IstioMetricsQuery) Metrics
GetNamespaceServicesRequestRates(namespace, ratesInterval string, queryTime time.Time) (model.Vector, error)
GetServiceRequestRates(namespace, service, ratesInterval string, queryTime time.Time) (model.Vector, error)
GetSourceWorkloads(namespace string, namespaceCreationTime time.Time, servicename string) (map[string][]Workload, error)
GetWorkloadRequestRates(namespace, workload, ratesInterval string, queryTime time.Time) (model.Vector, model.Vector, error)
}
// Client for Prometheus API.
// It hides the way we query Prometheus, offering a layer with a high-level API.
type Client struct {
ClientInterface
p8s api.Client
api v1.API
}
// Workload describes a workload with contextual information
type Workload struct {
Namespace string
App string
Workload string
Version string
}
// Service describes a service with contextual information
type Service struct {
Namespace string
App string
ServiceName string
}
// NewClient creates a new client to the Prometheus API.
// It returns an error on any problem.
func NewClient() (*Client, error) {
if config.Get() == nil {
return nil, errors.New("config.Get() must not be nil")
}
p8s, err := api.NewClient(api.Config{Address: config.Get().ExternalServices.PrometheusServiceURL})
if err != nil {
return nil, err
}
client := Client{p8s: p8s, api: v1.NewAPI(p8s)}
return &client, nil
}
// Inject allows replacing the API with a mock for testing
func (in *Client) Inject(api v1.API) {
in.api = api
}
// GetSourceWorkloads returns a map of list of source workloads for a given service
// identified by its namespace and service name.
// Returned map has a destination version as a key and a list of workloads as values.
// It returns an error on any problem.
func (in *Client) GetSourceWorkloads(namespace string, namespaceCreationTime time.Time, servicename string) (map[string][]Workload, error) {
reporter := "source"
if config.Get().IstioNamespace == namespace {
reporter = "destination"
}
// The query needs a lower bound to make sure that no outdated data is fetched
// So, a range is set and an "easy" function (delta) is applied to return an instant-vector,
// since only labels are needed.
queryTime := util.Clock.Now()
queryInterval := queryTime.Sub(namespaceCreationTime)
query := fmt.Sprintf("delta(istio_requests_total{reporter=\"%s\",destination_service_name=\"%s\",destination_service_namespace=\"%s\"}[%vs])",
reporter, servicename, namespace, int(queryInterval.Seconds()))
log.Debugf("GetSourceWorkloads query: %s", query)
promtimer := internalmetrics.GetPrometheusProcessingTimePrometheusTimer("GetSourceWorkloads")
result, err := in.api.Query(context.Background(), query, queryTime)
if err != nil {
return nil, err
}
promtimer.ObserveDuration() // notice we only collect metrics for successful prom queries
routes := make(map[string][]Workload)
switch result.Type() {
case model.ValVector:
vector := result.(model.Vector)
for _, sample := range vector {
metric := sample.Metric
index := string(metric["destination_version"])
source := Workload{
Namespace: string(metric["source_workload_namespace"]),
App: string(metric["source_app"]),
Workload: string(metric["source_workload"]),
Version: string(metric["source_version"]),
}
if arr, ok := routes[index]; ok {
found := false
for _, s := range arr {
if s.Workload == source.Workload {
found = true
break
}
}
if !found {
routes[index] = append(arr, source)
}
} else {
routes[index] = []Workload{source}
}
}
}
return routes, nil
}
func (in *Client) GetDestinationServices(namespace string, namespaceCreationTime time.Time, workloadname string) ([]Service, error) {
reporter := "source"
if config.Get().IstioNamespace == namespace {
reporter = "destination"
}
queryTime := util.Clock.Now()
queryInterval := queryTime.Sub(namespaceCreationTime)
groupBy := "(destination_service_namespace, destination_service_name, destination_service)"
query := fmt.Sprintf("sum(rate(istio_requests_total{reporter=\"%s\",source_workload=\"%s\",source_workload_namespace=\"%s\"}[%vs])) by %s",
reporter, workloadname, namespace, int(queryInterval.Seconds()), groupBy)
log.Debugf("GetDestinationServices query: %s", query)
promtimer := internalmetrics.GetPrometheusProcessingTimePrometheusTimer("GetDestinationServices")
result, err := in.api.Query(context.Background(), query, queryTime)
if err != nil {
return nil, err
}
promtimer.ObserveDuration() // notice we only collect metrics for successful prom queries
routes := make([]Service, 0)
switch result.Type() {
case model.ValVector:
vector := result.(model.Vector)
for _, sample := range vector {
metric := sample.Metric
destination := Service{
App: string(metric["destination_app"]),
ServiceName: string(metric["destination_service_name"]),
Namespace: string(metric["destination_service_namespace"]),
}
routes = append(routes, destination)
}
}
return routes, nil
}
// GetMetrics returns the Metrics related to the provided query options.
func (in *Client) GetMetrics(query *IstioMetricsQuery) Metrics {
return getMetrics(in.api, query)
}
// GetAllRequestRates queries Prometheus to fetch request counter rates, over a time interval, for requests
// into, internal to, or out of the namespace.
// Returns (rates, error)
func (in *Client) GetAllRequestRates(namespace string, ratesInterval string, queryTime time.Time) (model.Vector, error) {
return getAllRequestRates(in.api, namespace, queryTime, ratesInterval)
}
// GetNamespaceServicesRequestRates queries Prometheus to fetch request counter rates, over a time interval, limited to
// requests for services in the namespace.
// Returns (rates, error)
func (in *Client) GetNamespaceServicesRequestRates(namespace string, ratesInterval string, queryTime time.Time) (model.Vector, error) {
return getNamespaceServicesRequestRates(in.api, namespace, queryTime, ratesInterval)
}
// GetServiceRequestRates queries Prometheus to fetch request counter rates over a time interval
// for a given service (hence only inbound).
// Returns (in, error)
func (in *Client) GetServiceRequestRates(namespace, service, ratesInterval string, queryTime time.Time) (model.Vector, error) {
return getServiceRequestRates(in.api, namespace, service, queryTime, ratesInterval)
}
// GetAppRequestRates queries Prometheus to fetch request counter rates over a time interval
// for a given app, both in and out.
// Returns (in, out, error)
func (in *Client) GetAppRequestRates(namespace, app, ratesInterval string, queryTime time.Time) (model.Vector, model.Vector, error) {
return getItemRequestRates(in.api, namespace, app, "app", queryTime, ratesInterval)
}
// GetWorkloadRequestRates queries Prometheus to fetch request counter rates over a time interval
// for a given workload, both in and out.
// Returns (in, out, error)
func (in *Client) GetWorkloadRequestRates(namespace, workload, ratesInterval string, queryTime time.Time) (model.Vector, model.Vector, error) {
return getItemRequestRates(in.api, namespace, workload, "workload", queryTime, ratesInterval)
}
// FetchRange fetches a simple metric (gauge or counter) in given range
func (in *Client) FetchRange(metricName, labels, grouping, aggregator string, q *BaseMetricsQuery) *Metric {
query := fmt.Sprintf("%s(%s%s)", aggregator, metricName, labels)
if grouping != "" {
query += fmt.Sprintf(" by (%s)", grouping)
}
query = roundSignificant(query, 0.001)
return fetchRange(in.api, query, q.Range)
}
// FetchRateRange fetches a counter's rate in given range
func (in *Client) FetchRateRange(metricName, labels, grouping string, q *BaseMetricsQuery) *Metric {
return fetchRateRange(in.api, metricName, labels, grouping, q)
}
// FetchHistogramRange fetches bucketed metric as histogram in given range
func (in *Client) FetchHistogramRange(metricName, labels, grouping string, q *BaseMetricsQuery) Histogram {
return fetchHistogramRange(in.api, metricName, labels, grouping, q)
}
// API returns the Prometheus V1 HTTP API for performing calls not supported natively by this client
func (in *Client) API() v1.API {
return in.api
}
// Address returns the configured Prometheus service URL
func (in *Client) Address() string {
return config.Get().ExternalServices.PrometheusServiceURL
}
// GetConfiguration queries the Prometheus API for its loaded runtime configuration.
func (in *Client) GetConfiguration() (v1.ConfigResult, error) {
config, err := in.API().Config(context.Background())
if err != nil {
return v1.ConfigResult{}, err
}
return config, nil
}
// GetFlags queries the Prometheus API for the flag values it was started with.
func (in *Client) GetFlags() (v1.FlagsResult, error) {
flags, err := in.API().Flags(context.Background())
if err != nil {
return nil, err
}
return flags, nil
}

// Package internalmetrics provides functionality to collect Prometheus metrics.
package internalmetrics
import (
"strconv"
"github.com/prometheus/client_golang/prometheus"
// Because this package is used all throughout the codebase, be VERY careful adding new
// kiali imports here. Most likely you will encounter an import cycle error that will
// cause a compilation failure.
)
// These constants define the different label names for the different metric timeseries
const (
labelGraphKind = "graph_kind"
labelGraphType = "graph_type"
labelWithServiceNodes = "with_service_nodes"
labelAppender = "appender"
labelRoute = "route"
labelQueryGroup = "query_group"
labelPackage = "package"
labelType = "type"
labelFunction = "function"
)
// MetricsType defines all of Kiali's own internal metrics.
type MetricsType struct {
GraphNodes *prometheus.GaugeVec
GraphGenerationTime *prometheus.HistogramVec
GraphAppenderTime *prometheus.HistogramVec
GraphMarshalTime *prometheus.HistogramVec
APIProcessingTime *prometheus.HistogramVec
PrometheusProcessingTime *prometheus.HistogramVec
GoFunctionProcessingTime *prometheus.HistogramVec
GoFunctionFailures *prometheus.CounterVec
}
// Metrics contains all of Kiali's own internal metrics.
// These metrics can be accessed directly to update their values, or
// you can use available utility functions defined below.
var Metrics = MetricsType{
GraphNodes: prometheus.NewGaugeVec(
prometheus.GaugeOpts{
Name: "kiali_graph_nodes",
Help: "The number of nodes in a generated graph.",
},
[]string{labelGraphKind, labelGraphType, labelWithServiceNodes},
),
GraphGenerationTime: prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "kiali_graph_generation_duration_seconds",
Help: "The time required to generate a graph.",
},
[]string{labelGraphKind, labelGraphType, labelWithServiceNodes},
),
GraphAppenderTime: prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "kiali_graph_appender_duration_seconds",
Help: "The time required to execute an appender while generating a graph.",
},
[]string{labelAppender},
),
GraphMarshalTime: prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "kiali_graph_marshal_duration_seconds",
Help: "The time required to marshal and return the JSON for a graph.",
},
[]string{labelGraphKind, labelGraphType, labelWithServiceNodes},
),
APIProcessingTime: prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "kiali_api_processing_duration_seconds",
Help: "The time required to execute a particular REST API route request.",
},
[]string{labelRoute},
),
PrometheusProcessingTime: prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "kiali_prometheus_processing_duration_seconds",
Help: "The time required to execute a Prometheus query.",
},
[]string{labelQueryGroup},
),
GoFunctionProcessingTime: prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "kiali_go_function_processing_duration_seconds",
Help: "The time required to execute a particular Go function.",
},
[]string{labelPackage, labelType, labelFunction},
),
GoFunctionFailures: prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "kiali_go_function_failures_total",
Help: "Counts the total number of failures encountered by a particular Go function.",
},
[]string{labelPackage, labelType, labelFunction},
),
}
// SuccessOrFailureMetricType lets you capture metrics for both successes and failures,
// where successes are tracked using a duration histogram and failures are tracked with a counter.
// Typical usage is:
// func SomeFunction(...) (..., err error) {
// sof := GetSuccessOrFailureMetricTypeObject()
// defer sof.ObserveNow(&err)
// ... do the work of SomeFunction here...
// }
//
// If a function doesn't support returning an error, then call ObserveDuration directly:
//
// func SomeFunction(...) (...) {
// sof := GetSuccessOrFailureMetricTypeObject()
// defer sof.ObserveDuration()
// ... do the work of SomeFunction here...
// }
//
// If a function doesn't support returning an error, but you still need to report a failure,
// call Inc() directly to increment the failure counter:
//
// func SomeFunction(...) (...) {
// sof := GetSuccessOrFailureMetricTypeObject()
// defer func() { if (somethingBadHappened) { sof.Inc() } else { sof.ObserveDuration() }}()
// ... do the work of SomeFunction here...
// }
type SuccessOrFailureMetricType struct {
*prometheus.Timer
prometheus.Counter
}
// ObserveNow observes a duration unless *err is non-nil, in
// which case the error counter is incremented instead.
// A pointer to err is used because this function is normally
// invoked via 'defer', so the actual value of the error
// is not set until the deferred call itself runs.
func (sof *SuccessOrFailureMetricType) ObserveNow(err *error) {
if *err == nil {
sof.ObserveDuration()
} else {
sof.Inc()
}
}
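The pointer-to-error requirement in ObserveNow is easy to see in isolation. This dependency-free sketch mimics the pattern with a plain function; `observeNow`, `doWork`, and the outcome strings are hypothetical stand-ins, not Kiali APIs:

```go
package main

import (
	"errors"
	"fmt"
)

// observeNow mimics SuccessOrFailureMetricType.ObserveNow: it takes a
// *error because, under defer, the named return value is only assigned
// by the time the deferred call runs.
func observeNow(outcome *string, err *error) {
	if *err == nil {
		*outcome = "observed duration"
	} else {
		*outcome = "incremented failure counter"
	}
}

// doWork shows the typical call site: defer with a pointer to the
// named error return.
func doWork(fail bool) (outcome string, err error) {
	defer observeNow(&outcome, &err)
	if fail {
		err = errors.New("boom")
	}
	return
}

func main() {
	ok, _ := doWork(false)
	bad, _ := doWork(true)
	fmt.Println(ok)  // observed duration
	fmt.Println(bad) // incremented failure counter
}
```

Had `doWork` passed `err` by value instead of `&err`, the deferred call would always see the zero value captured at defer time, which is exactly the pitfall the doc comment warns about.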
// RegisterInternalMetrics must be called at startup to prepare the Prometheus scrape endpoint.
func RegisterInternalMetrics() {
prometheus.MustRegister(
Metrics.GraphNodes,
Metrics.GraphGenerationTime,
Metrics.GraphAppenderTime,
Metrics.GraphMarshalTime,
Metrics.APIProcessingTime,
Metrics.PrometheusProcessingTime,
Metrics.GoFunctionProcessingTime,
Metrics.GoFunctionFailures,
)
}
//
// The following are utility functions that can be used to update the internal metrics.
//
// SetGraphNodes sets the node count metric
func SetGraphNodes(graphKind string, graphType string, withServiceNodes bool, nodeCount int) {
Metrics.GraphNodes.With(prometheus.Labels{
labelGraphKind: graphKind,
labelGraphType: graphType,
labelWithServiceNodes: strconv.FormatBool(withServiceNodes),
}).Set(float64(nodeCount))
}
// GetGraphGenerationTimePrometheusTimer returns a timer that can be used to store
// a value for the graph generation time metric. The timer is ticking immediately
// when this function returns.
// Typical usage is as follows:
// promtimer := GetGraphGenerationTimePrometheusTimer(...)
// defer promtimer.ObserveDuration()
func GetGraphGenerationTimePrometheusTimer(graphKind string, graphType string, withServiceNodes bool) *prometheus.Timer {
timer := prometheus.NewTimer(Metrics.GraphGenerationTime.With(prometheus.Labels{
labelGraphKind: graphKind,
labelGraphType: graphType,
labelWithServiceNodes: strconv.FormatBool(withServiceNodes),
}))
return timer
}
// GetGraphAppenderTimePrometheusTimer returns a timer that can be used to store
// a value for the graph appender time metric. The timer is ticking immediately
// when this function returns.
// Typical usage is as follows:
// promtimer := GetGraphAppenderTimePrometheusTimer(...)
// ... run the appender ...
// promtimer.ObserveDuration()
func GetGraphAppenderTimePrometheusTimer(appenderName string) *prometheus.Timer {
timer := prometheus.NewTimer(Metrics.GraphAppenderTime.With(prometheus.Labels{
labelAppender: appenderName,
}))
return timer
}
// GetGraphMarshalTimePrometheusTimer returns a timer that can be used to store
// a value for the graph marshal time metric. The timer is ticking immediately
// when this function returns.
// Typical usage is as follows:
// promtimer := GetGraphMarshalTimePrometheusTimer(...)
// defer promtimer.ObserveDuration()
func GetGraphMarshalTimePrometheusTimer(graphKind string, graphType string, withServiceNodes bool) *prometheus.Timer {
timer := prometheus.NewTimer(Metrics.GraphMarshalTime.With(prometheus.Labels{
labelGraphKind: graphKind,
labelGraphType: graphType,
labelWithServiceNodes: strconv.FormatBool(withServiceNodes),
}))
return timer
}
// GetAPIProcessingTimePrometheusTimer returns a timer that can be used to store
// a value for the API processing time metric. The timer is ticking immediately
// when this function returns.
// Typical usage is as follows:
// promtimer := GetAPIProcessingTimePrometheusTimer(...)
// defer promtimer.ObserveDuration()
func GetAPIProcessingTimePrometheusTimer(apiRouteName string) *prometheus.Timer {
timer := prometheus.NewTimer(Metrics.APIProcessingTime.With(prometheus.Labels{
labelRoute: apiRouteName,
}))
return timer
}
// GetPrometheusProcessingTimePrometheusTimer returns a timer that can be used to store
// a value for the Prometheus query processing time metric. The timer is ticking immediately
// when this function returns.
//
// Note that the queryGroup parameter is simply some string that can be used to
// identify a particular set of Prometheus queries. This queryGroup does not necessarily have to
// identify a unique query (indeed, if you do that, that might cause too many timeseries to
// be collected), but it only needs to identify a set of queries. For example, perhaps there
// is a group of similar Prometheus queries used to generate a graph - in this case,
// the processing time for all of those queries can be combined into a single metric timeseries
// by passing in a queryGroup of "Graph-Generation".
//
// Typical usage is as follows:
// promtimer := GetPrometheusProcessingTimePrometheusTimer(...)
// ... execute the query ...
// promtimer.ObserveDuration()
func GetPrometheusProcessingTimePrometheusTimer(queryGroup string) *prometheus.Timer {
timer := prometheus.NewTimer(Metrics.PrometheusProcessingTime.With(prometheus.Labels{
labelQueryGroup: queryGroup,
}))
return timer
}
// GetGoFunctionMetric returns a SuccessOrFailureMetricType object that can be used to store
// a duration value for the Go Function processing time metric when the function is successful,
// or increments the failure counter if not successful.
// If the Go function is not a method on a type (i.e. it is a package-level function), pass in an empty string for goType.
// The timer is ticking immediately when this function returns.
// See the comments for SuccessOrFailureMetricType for documentation on how to use the returned object.
func GetGoFunctionMetric(goPkg string, goType string, goFunc string) SuccessOrFailureMetricType {
return SuccessOrFailureMetricType{
prometheus.NewTimer(Metrics.GoFunctionProcessingTime.With(prometheus.Labels{
labelPackage: goPkg,
labelType: goType,
labelFunction: goFunc,
})),
Metrics.GoFunctionFailures.With(prometheus.Labels{
labelPackage: goPkg,
labelType: goType,
labelFunction: goFunc,
}),
}
}
