fix: upgrade github.com/open-policy-agent/opa v0.70.0 => v1.4.0 (#6510)
Signed-off-by: peng wu <2030047311@qq.com>
This commit is contained in:
191 LICENSES/vendor/github.com/OneOfOne/xxhash/LICENSE (generated, vendored)
@@ -1,191 +0,0 @@
-= vendor/github.com/OneOfOne/xxhash licensed under: =
-
-                                 Apache License
-                           Version 2.0, January 2004
-                        http://www.apache.org/licenses/
-
-   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-   1. Definitions.
-
-      "License" shall mean the terms and conditions for use, reproduction,
-      and distribution as defined by Sections 1 through 9 of this document.
-
-      "Licensor" shall mean the copyright owner or entity authorized by
-      the copyright owner that is granting the License.
-
-      "Legal Entity" shall mean the union of the acting entity and all
-      other entities that control, are controlled by, or are under common
-      control with that entity. For the purposes of this definition,
-      "control" means (i) the power, direct or indirect, to cause the
-      direction or management of such entity, whether by contract or
-      otherwise, or (ii) ownership of fifty percent (50%) or more of the
-      outstanding shares, or (iii) beneficial ownership of such entity.
-
-      "You" (or "Your") shall mean an individual or Legal Entity
-      exercising permissions granted by this License.
-
-      "Source" form shall mean the preferred form for making modifications,
-      including but not limited to software source code, documentation
-      source, and configuration files.
-
-      "Object" form shall mean any form resulting from mechanical
-      transformation or translation of a Source form, including but
-      not limited to compiled object code, generated documentation,
-      and conversions to other media types.
-
-      "Work" shall mean the work of authorship, whether in Source or
-      Object form, made available under the License, as indicated by a
-      copyright notice that is included in or attached to the work
-      (an example is provided in the Appendix below).
-
-      "Derivative Works" shall mean any work, whether in Source or Object
-      form, that is based on (or derived from) the Work and for which the
-      editorial revisions, annotations, elaborations, or other modifications
-      represent, as a whole, an original work of authorship. For the purposes
-      of this License, Derivative Works shall not include works that remain
-      separable from, or merely link (or bind by name) to the interfaces of,
-      the Work and Derivative Works thereof.
-
-      "Contribution" shall mean any work of authorship, including
-      the original version of the Work and any modifications or additions
-      to that Work or Derivative Works thereof, that is intentionally
-      submitted to Licensor for inclusion in the Work by the copyright owner
-      or by an individual or Legal Entity authorized to submit on behalf of
-      the copyright owner. For the purposes of this definition, "submitted"
-      means any form of electronic, verbal, or written communication sent
-      to the Licensor or its representatives, including but not limited to
-      communication on electronic mailing lists, source code control systems,
-      and issue tracking systems that are managed by, or on behalf of, the
-      Licensor for the purpose of discussing and improving the Work, but
-      excluding communication that is conspicuously marked or otherwise
-      designated in writing by the copyright owner as "Not a Contribution."
-
-      "Contributor" shall mean Licensor and any individual or Legal Entity
-      on behalf of whom a Contribution has been received by Licensor and
-      subsequently incorporated within the Work.
-
-   2. Grant of Copyright License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      copyright license to reproduce, prepare Derivative Works of,
-      publicly display, publicly perform, sublicense, and distribute the
-      Work and such Derivative Works in Source or Object form.
-
-   3. Grant of Patent License. Subject to the terms and conditions of
-      this License, each Contributor hereby grants to You a perpetual,
-      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-      (except as stated in this section) patent license to make, have made,
-      use, offer to sell, sell, import, and otherwise transfer the Work,
-      where such license applies only to those patent claims licensable
-      by such Contributor that are necessarily infringed by their
-      Contribution(s) alone or by combination of their Contribution(s)
-      with the Work to which such Contribution(s) was submitted. If You
-      institute patent litigation against any entity (including a
-      cross-claim or counterclaim in a lawsuit) alleging that the Work
-      or a Contribution incorporated within the Work constitutes direct
-      or contributory patent infringement, then any patent licenses
-      granted to You under this License for that Work shall terminate
-      as of the date such litigation is filed.
-
-   4. Redistribution. You may reproduce and distribute copies of the
-      Work or Derivative Works thereof in any medium, with or without
-      modifications, and in Source or Object form, provided that You
-      meet the following conditions:
-
-      (a) You must give any other recipients of the Work or
-          Derivative Works a copy of this License; and
-
-      (b) You must cause any modified files to carry prominent notices
-          stating that You changed the files; and
-
-      (c) You must retain, in the Source form of any Derivative Works
-          that You distribute, all copyright, patent, trademark, and
-          attribution notices from the Source form of the Work,
-          excluding those notices that do not pertain to any part of
-          the Derivative Works; and
-
-      (d) If the Work includes a "NOTICE" text file as part of its
-          distribution, then any Derivative Works that You distribute must
-          include a readable copy of the attribution notices contained
-          within such NOTICE file, excluding those notices that do not
-          pertain to any part of the Derivative Works, in at least one
-          of the following places: within a NOTICE text file distributed
-          as part of the Derivative Works; within the Source form or
-          documentation, if provided along with the Derivative Works; or,
-          within a display generated by the Derivative Works, if and
-          wherever such third-party notices normally appear. The contents
-          of the NOTICE file are for informational purposes only and
-          do not modify the License. You may add Your own attribution
-          notices within Derivative Works that You distribute, alongside
-          or as an addendum to the NOTICE text from the Work, provided
-          that such additional attribution notices cannot be construed
-          as modifying the License.
-
-      You may add Your own copyright statement to Your modifications and
-      may provide additional or different license terms and conditions
-      for use, reproduction, or distribution of Your modifications, or
-      for any such Derivative Works as a whole, provided Your use,
-      reproduction, and distribution of the Work otherwise complies with
-      the conditions stated in this License.
-
-   5. Submission of Contributions. Unless You explicitly state otherwise,
-      any Contribution intentionally submitted for inclusion in the Work
-      by You to the Licensor shall be under the terms and conditions of
-      this License, without any additional terms or conditions.
-      Notwithstanding the above, nothing herein shall supersede or modify
-      the terms of any separate license agreement you may have executed
-      with Licensor regarding such Contributions.
-
-   6. Trademarks. This License does not grant permission to use the trade
-      names, trademarks, service marks, or product names of the Licensor,
-      except as required for reasonable and customary use in describing the
-      origin of the Work and reproducing the content of the NOTICE file.
-
-   7. Disclaimer of Warranty. Unless required by applicable law or
-      agreed to in writing, Licensor provides the Work (and each
-      Contributor provides its Contributions) on an "AS IS" BASIS,
-      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-      implied, including, without limitation, any warranties or conditions
-      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-      PARTICULAR PURPOSE. You are solely responsible for determining the
-      appropriateness of using or redistributing the Work and assume any
-      risks associated with Your exercise of permissions under this License.
-
-   8. Limitation of Liability. In no event and under no legal theory,
-      whether in tort (including negligence), contract, or otherwise,
-      unless required by applicable law (such as deliberate and grossly
-      negligent acts) or agreed to in writing, shall any Contributor be
-      liable to You for damages, including any direct, indirect, special,
-      incidental, or consequential damages of any character arising as a
-      result of this License or out of the use or inability to use the
-      Work (including but not limited to damages for loss of goodwill,
-      work stoppage, computer failure or malfunction, or any and all
-      other commercial damages or losses), even if such Contributor
-      has been advised of the possibility of such damages.
-
-   9. Accepting Warranty or Additional Liability. While redistributing
-      the Work or Derivative Works thereof, You may choose to offer,
-      and charge a fee for, acceptance of support, warranty, indemnity,
-      or other liability obligations and/or rights consistent with this
-      License. However, in accepting such obligations, You may act only
-      on Your own behalf and on Your sole responsibility, not on behalf
-      of any other Contributor, and only if You agree to indemnify,
-      defend, and hold each Contributor harmless for any liability
-      incurred by, or claims asserted against, such Contributor by reason
-      of your accepting any such warranty or additional liability.
-
-   END OF TERMS AND CONDITIONS
-
-   APPENDIX: How to apply the Apache License to your work.
-
-      To apply the Apache License to your work, attach the following
-      boilerplate notice, with the fields enclosed by brackets "{}"
-      replaced with your own identifying information. (Don't include
-      the brackets!) The text should be enclosed in the appropriate
-      comment syntax for the file format. We also recommend that a
-      file or class name and description of purpose be included on the
-      same "printed page" as the copyright notice for easier
-      identification within third-party archives.
-
-= vendor/github.com/OneOfOne/xxhash/LICENSE 7b8ee9b837e70f675fc861ec7e3af440
@@ -1,5 +1,5 @@
 # Build
-FROM golang:1.23.7 AS build_context
+FROM golang:1.23.8 AS build_context
 
 ENV OUTDIR=/out
 RUN mkdir -p ${OUTDIR}/usr/local/bin/
@@ -15,7 +15,7 @@ RUN curl -LO https://github.com/kubesphere/telemetry/releases/download/v${TELEME
 COPY config/ks-core ${OUTDIR}/var/helm-charts/ks-core
 
 # Build
-FROM golang:1.23.7 AS build_context
+FROM golang:1.23.8 AS build_context
 
 ENV OUTDIR=/out
 RUN mkdir -p ${OUTDIR}/usr/local/bin/
77 go.mod
@@ -6,9 +6,7 @@
 
 module kubesphere.io/kubesphere
 
-go 1.23.0
-
-toolchain go1.23.7
+go 1.23.8
 
 require (
 	code.cloudfoundry.org/bytefmt v0.0.0-20190710193110-1eb035ffe2b6
@@ -36,7 +34,7 @@ require (
 	github.com/go-redis/redis v6.15.2+incompatible
 	github.com/golang-jwt/jwt/v4 v4.5.2
 	github.com/golang/example v0.0.0-20170904185048-46695d81d1fa
-	github.com/google/go-cmp v0.6.0
+	github.com/google/go-cmp v0.7.0
 	github.com/google/go-containerregistry v0.14.0
 	github.com/google/gops v0.3.23
 	github.com/google/uuid v1.6.0
@@ -48,21 +46,21 @@ require (
 	github.com/oliveagle/jsonpath v0.0.0-20180606110733-2e52cf6e6852
 	github.com/onsi/ginkgo/v2 v2.20.1
 	github.com/onsi/gomega v1.34.2
-	github.com/open-policy-agent/opa v0.70.0
+	github.com/open-policy-agent/opa v1.4.0
 	github.com/opencontainers/go-digest v1.0.0
 	github.com/pkg/errors v0.9.1
-	github.com/prometheus/client_golang v1.20.5
+	github.com/prometheus/client_golang v1.21.1
 	github.com/robfig/cron/v3 v3.0.1
 	github.com/sony/sonyflake v1.2.0
 	github.com/speps/go-hashids v2.0.0+incompatible
-	github.com/spf13/cobra v1.8.1
-	github.com/spf13/pflag v1.0.5
-	github.com/spf13/viper v1.18.2
+	github.com/spf13/cobra v1.9.1
+	github.com/spf13/pflag v1.0.6
+	github.com/spf13/viper v1.20.1
 	github.com/stretchr/testify v1.10.0
 	golang.org/x/crypto v0.36.0
 	golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56
-	golang.org/x/net v0.37.0
-	golang.org/x/oauth2 v0.22.0
+	golang.org/x/net v0.38.0
+	golang.org/x/oauth2 v0.26.0
 	gopkg.in/cas.v2 v2.2.0
 	gopkg.in/natefinch/lumberjack.v2 v2.2.1
 	gopkg.in/yaml.v3 v3.0.1
@@ -99,9 +97,8 @@ require (
 	github.com/Masterminds/sprig/v3 v3.3.0 // indirect
 	github.com/Masterminds/squirrel v1.5.4 // indirect
 	github.com/Microsoft/go-winio v0.6.2 // indirect
-	github.com/OneOfOne/xxhash v1.2.8 // indirect
 	github.com/ProtonMail/go-crypto v1.1.3 // indirect
-	github.com/agnivade/levenshtein v1.2.0 // indirect
+	github.com/agnivade/levenshtein v1.2.1 // indirect
 	github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
 	github.com/beorn7/perks v1.0.1 // indirect
 	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
@@ -109,7 +106,7 @@ require (
 	github.com/chai2010/gettext-go v1.0.2 // indirect
 	github.com/cloudflare/circl v1.3.7 // indirect
 	github.com/containerd/containerd v1.7.27 // indirect
-	github.com/containerd/errdefs v0.3.0 // indirect
+	github.com/containerd/errdefs v1.0.0 // indirect
 	github.com/containerd/log v0.1.0 // indirect
 	github.com/containerd/platforms v0.2.1 // indirect
 	github.com/coreos/go-semver v0.3.1 // indirect
@@ -125,7 +122,7 @@ require (
 	github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
 	github.com/fatih/color v1.18.0 // indirect
 	github.com/felixge/httpsnoop v1.0.4 // indirect
-	github.com/fsnotify/fsnotify v1.7.0 // indirect
+	github.com/fsnotify/fsnotify v1.8.0 // indirect
 	github.com/fxamacker/cbor/v2 v2.7.0 // indirect
 	github.com/go-errors/errors v1.4.2 // indirect
 	github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
@@ -155,7 +152,7 @@ require (
 	github.com/gosuri/uitable v0.0.4 // indirect
 	github.com/gregjones/httpcache v0.0.0-20181110185634-c63ab54fda8f // indirect
 	github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
-	github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 // indirect
+	github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1 // indirect
 	github.com/hashicorp/errwrap v1.1.0 // indirect
 	github.com/hashicorp/go-multierror v1.1.1 // indirect
 	github.com/hashicorp/hcl v1.0.0 // indirect
@@ -167,7 +164,7 @@ require (
 	github.com/jmoiron/sqlx v1.4.0 // indirect
 	github.com/josharian/intern v1.0.0 // indirect
 	github.com/kevinburke/ssh_config v1.2.0 // indirect
-	github.com/klauspost/compress v1.17.9 // indirect
+	github.com/klauspost/compress v1.18.0 // indirect
 	github.com/kylelemons/godebug v1.1.0 // indirect
 	github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
 	github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
@@ -190,29 +187,29 @@ require (
 	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
 	github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
 	github.com/oklog/ulid v1.3.1 // indirect
-	github.com/opencontainers/image-spec v1.1.0 // indirect
-	github.com/pelletier/go-toml/v2 v2.1.0 // indirect
+	github.com/opencontainers/image-spec v1.1.1 // indirect
+	github.com/pelletier/go-toml/v2 v2.2.3 // indirect
 	github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
 	github.com/pjbgf/sha1cd v0.3.0 // indirect
 	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 	github.com/prometheus/client_model v0.6.1 // indirect
-	github.com/prometheus/common v0.55.0 // indirect
+	github.com/prometheus/common v0.62.0 // indirect
 	github.com/prometheus/procfs v0.15.1 // indirect
 	github.com/rcrowley/go-metrics v0.0.0-20200313005456-10cdbea86bc0 // indirect
 	github.com/rubenv/sql-migrate v1.7.0 // indirect
 	github.com/russross/blackfriday/v2 v2.1.0 // indirect
-	github.com/sagikazarmark/locafero v0.4.0 // indirect
+	github.com/sagikazarmark/locafero v0.7.0 // indirect
 	github.com/sagikazarmark/slog-shim v0.1.0 // indirect
 	github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 // indirect
 	github.com/shopspring/decimal v1.4.0 // indirect
 	github.com/sirupsen/logrus v1.9.3 // indirect
 	github.com/skeema/knownhosts v1.3.0 // indirect
 	github.com/sourcegraph/conc v0.3.0 // indirect
-	github.com/spf13/afero v1.11.0 // indirect
-	github.com/spf13/cast v1.7.0 // indirect
+	github.com/spf13/afero v1.12.0 // indirect
+	github.com/spf13/cast v1.7.1 // indirect
 	github.com/stoewer/go-strcase v1.2.0 // indirect
 	github.com/subosito/gotenv v1.6.0 // indirect
-	github.com/tchap/go-patricia/v2 v2.3.1 // indirect
+	github.com/tchap/go-patricia/v2 v2.3.2 // indirect
 	github.com/x448/float16 v0.8.4 // indirect
 	github.com/xanzy/ssh-agent v0.3.3 // indirect
 	github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
@@ -224,30 +221,30 @@ require (
 	go.etcd.io/etcd/client/pkg/v3 v3.5.14 // indirect
 	go.etcd.io/etcd/client/v3 v3.5.14 // indirect
 	go.mongodb.org/mongo-driver v1.17.1 // indirect
-	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 // indirect
-	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
-	go.opentelemetry.io/otel v1.28.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 // indirect
-	go.opentelemetry.io/otel/metric v1.28.0 // indirect
-	go.opentelemetry.io/otel/sdk v1.28.0 // indirect
-	go.opentelemetry.io/otel/trace v1.28.0 // indirect
-	go.opentelemetry.io/proto/otlp v1.3.1 // indirect
+	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.54.0 // indirect
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 // indirect
+	go.opentelemetry.io/otel v1.35.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.35.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0 // indirect
+	go.opentelemetry.io/otel/metric v1.35.0 // indirect
+	go.opentelemetry.io/otel/sdk v1.35.0 // indirect
+	go.opentelemetry.io/otel/trace v1.35.0 // indirect
+	go.opentelemetry.io/proto/otlp v1.5.0 // indirect
 	go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
 	go.uber.org/multierr v1.11.0 // indirect
 	go.uber.org/zap v1.26.0 // indirect
 	golang.org/x/mod v0.21.0 // indirect
-	golang.org/x/sync v0.10.0 // indirect
+	golang.org/x/sync v0.12.0 // indirect
 	golang.org/x/sys v0.31.0 // indirect
 	golang.org/x/term v0.30.0 // indirect
 	golang.org/x/text v0.23.0 // indirect
-	golang.org/x/time v0.7.0 // indirect
+	golang.org/x/time v0.11.0 // indirect
 	golang.org/x/tools v0.26.0 // indirect
 	gomodules.xyz/jsonpatch/v2 v2.4.0 // indirect
-	google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142 // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 // indirect
-	google.golang.org/grpc v1.67.1 // indirect
-	google.golang.org/protobuf v1.35.2 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a // indirect
+	google.golang.org/grpc v1.71.1 // indirect
+	google.golang.org/protobuf v1.36.6 // indirect
 	gopkg.in/asn1-ber.v1 v1.0.0-20181015200546-f715ec2f112d // indirect
 	gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
 	gopkg.in/inf.v0 v0.9.1 // indirect
@@ -321,7 +318,7 @@ replace (
 	github.com/oliveagle/jsonpath => github.com/oliveagle/jsonpath v0.0.0-20180606110733-2e52cf6e6852
 	github.com/onsi/ginkgo/v2 => github.com/onsi/ginkgo/v2 v2.20.1
 	github.com/onsi/gomega => github.com/onsi/gomega v1.34.2
-	github.com/open-policy-agent/opa => github.com/open-policy-agent/opa v0.70.0
+	github.com/open-policy-agent/opa => github.com/open-policy-agent/opa v1.4.0
 	github.com/opencontainers/go-digest => github.com/opencontainers/go-digest v1.0.0
 	github.com/opencontainers/image-spec => github.com/opencontainers/image-spec v1.1.0
 	github.com/pkg/errors => github.com/pkg/errors v0.9.1
124 go.sum
@@ -1,5 +1,6 @@
|
||||
cel.dev/expr v0.15.0/go.mod h1:TRSuuV7DlVCE/uwv5QbAiW/v8l5O8C4eEPHeu7gf7Sg=
|
||||
cel.dev/expr v0.16.0/go.mod h1:TRSuuV7DlVCE/uwv5QbAiW/v8l5O8C4eEPHeu7gf7Sg=
|
||||
cel.dev/expr v0.16.1/go.mod h1:AsGA5zb3WruAEQeQng1RZdGEXmBj0jvMWh6l5SnNuC8=
|
||||
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
|
||||
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
|
||||
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
|
||||
@@ -40,7 +41,7 @@ cloud.google.com/go v0.110.2/go.mod h1:k04UEeEtb6ZBRTv3dZz4CeJC3jKGxyhl0sAiVVqux
|
||||
cloud.google.com/go v0.110.4/go.mod h1:+EYjdK8e5RME/VY/qLCAtuyALQ9q67dvuum8i+H5xsI=
|
||||
cloud.google.com/go v0.110.6/go.mod h1:+EYjdK8e5RME/VY/qLCAtuyALQ9q67dvuum8i+H5xsI=
|
||||
cloud.google.com/go v0.110.7/go.mod h1:+EYjdK8e5RME/VY/qLCAtuyALQ9q67dvuum8i+H5xsI=
|
||||
cloud.google.com/go v0.110.10/go.mod h1:v1OoFqYxiBkUrruItNM3eT4lLByNjxmJSV/xDKJNnic=
|
||||
cloud.google.com/go v0.116.0/go.mod h1:cEPSRWPzZEswwdr9BxE6ChEn01dWlTaF05LiC2Xs70U=
|
||||
cloud.google.com/go/accessapproval v1.4.0/go.mod h1:zybIuC3KpDOvotz59lFe5qxRZx6C75OtwbisN56xYB4=
|
||||
cloud.google.com/go/accessapproval v1.5.0/go.mod h1:HFy3tuiGvMdcd/u+Cu5b9NkO1pEICJ46IR82PoUdplw=
|
||||
cloud.google.com/go/accessapproval v1.6.0/go.mod h1:R0EiYnwV5fsRFiKZkPHr6mwyk2wxUJ30nL4j2pcFY2E=
|
||||
@@ -117,6 +118,8 @@ cloud.google.com/go/assuredworkloads v1.8.0/go.mod h1:AsX2cqyNCOvEQC8RMPnoc0yEar
|
||||
cloud.google.com/go/assuredworkloads v1.9.0/go.mod h1:kFuI1P78bplYtT77Tb1hi0FMxM0vVpRC7VVoJC3ZoT0=
|
||||
cloud.google.com/go/assuredworkloads v1.10.0/go.mod h1:kwdUQuXcedVdsIaKgKTp9t0UJkE5+PAVNhdQm4ZVq2E=
|
||||
cloud.google.com/go/assuredworkloads v1.11.1/go.mod h1:+F04I52Pgn5nmPG36CWFtxmav6+7Q+c5QyJoL18Lry0=
|
||||
cloud.google.com/go/auth v0.13.0/go.mod h1:COOjD9gwfKNKz+IIduatIhYJQIc0mG3H102r/EMxX6Q=
|
||||
cloud.google.com/go/auth/oauth2adapt v0.2.6/go.mod h1:AlmsELtlEBnaNTL7jCj8VQFLy6mbZv0s4Q7NGBeQ5E8=
|
||||
cloud.google.com/go/automl v1.5.0/go.mod h1:34EjfoFGMZ5sgJ9EoLsRtdPSNZLcfflJR39VbVNS2M0=
|
||||
cloud.google.com/go/automl v1.6.0/go.mod h1:ugf8a6Fx+zP0D59WLhqgTDsQI9w07o64uf/Is3Nh5p8=
|
||||
cloud.google.com/go/automl v1.7.0/go.mod h1:RL9MYCCsJEOmt0Wf3z9uzG0a7adTT1fe+aObgSpkCt8=
|
||||
@@ -215,6 +218,7 @@ cloud.google.com/go/compute/metadata v0.2.1/go.mod h1:jgHgmJd2RKBGzXqF5LR2EZMGxB
|
||||
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
|
||||
cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
|
||||
cloud.google.com/go/compute/metadata v0.5.0/go.mod h1:aHnloV2TPI38yx4s9+wAZhHykWvVCfu7hQbF+9CWoiY=
|
||||
cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg=
|
||||
cloud.google.com/go/contactcenterinsights v1.3.0/go.mod h1:Eu2oemoePuEFc/xKFPjbTuPSj0fYJcPls9TFlPNnHHY=
|
||||
cloud.google.com/go/contactcenterinsights v1.4.0/go.mod h1:L2YzkGbPsv+vMQMCADxJoT9YiTTnSEd6fEvCeHTYVck=
|
||||
cloud.google.com/go/contactcenterinsights v1.6.0/go.mod h1:IIDlT6CLcDoyv79kDv8iWxMSTZhLxSCofVV5W6YFM/w=
|
||||
@@ -397,7 +401,7 @@ cloud.google.com/go/iam v0.13.0/go.mod h1:ljOg+rcNfzZ5d6f1nAUJ8ZIxOaZUVoS14bKCta
|
||||
cloud.google.com/go/iam v1.0.1/go.mod h1:yR3tmSL8BcZB4bxByRv2jkSIahVmCtfKZwLYGBalRE8=
|
||||
cloud.google.com/go/iam v1.1.0/go.mod h1:nxdHjaKfCr7fNYx/HJMM8LgiMugmveWlkatear5gVyk=
|
||||
cloud.google.com/go/iam v1.1.1/go.mod h1:A5avdyVL2tCppe4unb0951eI9jreack+RJ0/d+KUZOU=
|
||||
cloud.google.com/go/iam v1.1.5/go.mod h1:rB6P/Ic3mykPbFio+vo7403drjlgvoWfYpJhMXEbzv8=
|
||||
cloud.google.com/go/iam v1.2.2/go.mod h1:0Ys8ccaZHdI1dEUilwzqng/6ps2YB6vRsjIe00/+6JY=
|
||||
cloud.google.com/go/iap v1.4.0/go.mod h1:RGFwRJdihTINIe4wZ2iCP0zF/qu18ZwyKxrhMhygBEc=
|
||||
cloud.google.com/go/iap v1.5.0/go.mod h1:UH/CGgKd4KyohZL5Pt0jSKE4m3FR51qg6FKQ/z/Ix9A=
|
||||
cloud.google.com/go/iap v1.6.0/go.mod h1:NSuvI9C/j7UdjGjIde7t7HBz+QTwBcapPE07+sSRcLk=
|
||||
@@ -473,6 +477,7 @@ cloud.google.com/go/monitoring v1.8.0/go.mod h1:E7PtoMJ1kQXWxPjB6mv2fhC5/15jInuu
|
||||
cloud.google.com/go/monitoring v1.12.0/go.mod h1:yx8Jj2fZNEkL/GYZyTLS4ZtZEZN8WtDEiEqG4kLK50w=
|
||||
cloud.google.com/go/monitoring v1.13.0/go.mod h1:k2yMBAB1H9JT/QETjNkgdCGD9bPF712XiLTVr+cBrpw=
|
||||
cloud.google.com/go/monitoring v1.15.1/go.mod h1:lADlSAlFdbqQuwwpaImhsJXu1QSdd3ojypXrFSMr2rM=
|
||||
cloud.google.com/go/monitoring v1.21.2/go.mod h1:hS3pXvaG8KgWTSz+dAdyzPrGUYmi2Q+WFX8g2hqVEZU=
|
||||
cloud.google.com/go/networkconnectivity v1.4.0/go.mod h1:nOl7YL8odKyAOtzNX73/M5/mGZgqqMeryi6UPZTk/rA=
|
||||
cloud.google.com/go/networkconnectivity v1.5.0/go.mod h1:3GzqJx7uhtlM3kln0+x5wyFvuVH1pIBJjhCpjzSt75o=
|
||||
cloud.google.com/go/networkconnectivity v1.6.0/go.mod h1:OJOoEXW+0LAxHh89nXd64uGG+FbQoeH8DtxCHVOMlaM=
|
||||
@@ -673,7 +678,7 @@ cloud.google.com/go/storage v1.27.0/go.mod h1:x9DOL8TK/ygDUMieqwfhdpQryTeEkhGKMi
|
||||
cloud.google.com/go/storage v1.28.1/go.mod h1:Qnisd4CqDdo6BGs2AD5LLnEsmSQ80wQ5ogcBBKhU86Y=
|
||||
cloud.google.com/go/storage v1.29.0/go.mod h1:4puEjyTKnku6gfKoTfNOU/W+a9JyuVNxjpS5GBrB8h4=
|
||||
cloud.google.com/go/storage v1.30.1/go.mod h1:NfxhC0UJE1aXSx7CIIbCf7y9HKT7BiccwkR7+P7gN8E=
|
||||
cloud.google.com/go/storage v1.35.1/go.mod h1:M6M/3V/D3KpzMTJyPOR/HU6n2Si5QdaXYEsng2xgOs8=
|
||||
cloud.google.com/go/storage v1.49.0/go.mod h1:k1eHhhpLvrPjVGfo0mOUPEJ4Y2+a/Hv5PiwehZI9qGU=
|
||||
cloud.google.com/go/storagetransfer v1.5.0/go.mod h1:dxNzUopWy7RQevYFHewchb29POFv3/AaBgnhqzqiK0w=
|
||||
cloud.google.com/go/storagetransfer v1.6.0/go.mod h1:y77xm4CQV/ZhFZH75PLEXY0ROiS7Gh6pSKrM8dJyg6I=
|
||||
cloud.google.com/go/storagetransfer v1.7.0/go.mod h1:8Giuj1QNb1kfLAiWM1bN6dHzfdlDAVC9rv9abHot2W4=
|
||||
@@ -777,6 +782,9 @@ github.com/BurntSushi/toml v1.3.2/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbi
|
||||
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
|
||||
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
|
||||
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.25.0/go.mod h1:obipzmGjfSjam60XLwGfqUkJsfiheAl+TUjG+4yzyPM=
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric v0.48.1/go.mod h1:jyqM3eLpJ3IbIFDTKVz2rF9T/xWGW0rIriGwnz8l9Tk=
|
||||
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.48.1/go.mod h1:viRWSEhtMZqz1rhwmOVKkWl6SwmVowfL9O2YR5gI2PE=
|
||||
github.com/JohnCGriffin/overflow v0.0.0-20211019200055-46fa312c352c/go.mod h1:X0CRv0ky0k6m906ixxpzmDRLvX58TFUKS2eePweuyxk=
|
||||
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
|
||||
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
|
||||
@@ -796,14 +804,12 @@ github.com/Microsoft/hcsshim v0.11.7/go.mod h1:MV8xMfmECjl5HdO7U/3/hFVnkmSBjAjmA
|
||||
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
|
||||
github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I=
|
||||
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
|
||||
github.com/OneOfOne/xxhash v1.2.8 h1:31czK/TI9sNkxIKfaUfGlU47BAxQ0ztGgd9vPyqimf8=
|
||||
github.com/OneOfOne/xxhash v1.2.8/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
|
||||
github.com/ProtonMail/go-crypto v1.1.3 h1:nRBOetoydLeUb4nHajyO2bKqMLfWQ/ZPwkXqXxPxCFk=
|
||||
github.com/ProtonMail/go-crypto v1.1.3/go.mod h1:rA3QumHc/FZ8pAHreoekgiAbzpNsfQAosU5td4SnOrE=
|
||||
github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
|
||||
github.com/StackExchange/wmi v1.2.1/go.mod h1:rcmrprowKIVzvc+NUiLncP2uuArMWLCbu9SBzvHz7e8=
|
||||
github.com/agnivade/levenshtein v1.2.0 h1:U9L4IOT0Y3i0TIlUIDJ7rVUziKi/zPbrJGaFrtYH3SY=
|
||||
github.com/agnivade/levenshtein v1.2.0/go.mod h1:QVVI16kDrtSuwcpd0p1+xMC6Z/VfhtCyDIjcwga4/DU=
|
||||
github.com/agnivade/levenshtein v1.2.1 h1:EHBY3UOn1gwdy/VbFwgo4cxecRznFk7fKWN1KOX7eoM=
|
||||
github.com/agnivade/levenshtein v1.2.1/go.mod h1:QVVI16kDrtSuwcpd0p1+xMC6Z/VfhtCyDIjcwga4/DU=
|
||||
github.com/ajstarks/deck v0.0.0-20200831202436-30c9fc6549a9/go.mod h1:JynElWSGnm/4RlzPXRlREEwqTHAN3T56Bv2ITsFT3gY=
|
||||
github.com/ajstarks/deck/generate v0.0.0-20210309230005-c3f852c02e19/go.mod h1:T13YZdzov6OU0A1+RfKZiZN9ca6VeKdBdyDV+BY97Tk=
|
||||
github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
|
||||
@@ -861,8 +867,6 @@ github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyY
|
||||
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
||||
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
||||
github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw=
|
||||
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
|
||||
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
|
||||
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
@@ -889,6 +893,7 @@ github.com/cncf/xds/go v0.0.0-20230105202645-06c439db220b/go.mod h1:eXthEFrGJvWH
github.com/cncf/xds/go v0.0.0-20230310173818-32f1caf87195/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cncf/xds/go v0.0.0-20240723142845-024c85f92f20/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cncf/xds/go v0.0.0-20240905190251-b4127c9b8d78/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
github.com/cockroachdb/datadriven v1.0.2/go.mod h1:a9RdTaap04u637JoCzcUoIcDmvwSUtcUFtT/C3kJlTU=
github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
github.com/containerd/btrfs/v2 v2.0.0/go.mod h1:swkD/7j9HApWpzl8OHfrHNxppPd9l44DFZdF94BUj9k=
@@ -901,8 +906,8 @@ github.com/containerd/containerd v1.7.27/go.mod h1:xZmPnl75Vc+BLGt4MIfu6bp+fy03g
github.com/containerd/containerd/api v1.8.0/go.mod h1:dFv4lt6S20wTu/hMcP4350RL87qPWLVa/OHOwmmdnYc=
github.com/containerd/continuity v0.4.4 h1:/fNVfTJ7wIl/YPMHjf+5H32uFhl63JucB34PlCpMKII=
github.com/containerd/continuity v0.4.4/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=
github.com/containerd/errdefs v0.3.0 h1:FSZgGOeK4yuT/+DnF07/Olde/q4KBoMsaamhXxIMDp4=
github.com/containerd/errdefs v0.3.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
github.com/containerd/go-cni v1.1.9/go.mod h1:XYrZJ1d5W6E2VOvjffL3IZq0Dz6bsVlERHbekNK90PM=
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
@@ -931,6 +936,7 @@ github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSV
github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
@@ -942,11 +948,10 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/daviddengcn/go-colortext v1.0.0/go.mod h1:zDqEI5NVUop5QPpVJUxE9UO10hRnmkD5G4Pmri9+m4c=
github.com/denisenkom/go-mssqldb v0.9.0/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU=
github.com/dgraph-io/badger/v3 v3.2103.5 h1:ylPa6qzbjYRQMU6jokoj4wzcaweHylt//CH0AKt0akg=
github.com/dgraph-io/badger/v3 v3.2103.5/go.mod h1:4MPiseMeDQ3FNCYwRbbcBOGJLf5jsE0PPFzRiKjtcdw=
github.com/dgraph-io/ristretto v0.1.1 h1:6CWw5tJNgpegArSHpNHJKldNeq03FQCwYvfMVWajOK8=
github.com/dgraph-io/ristretto v0.1.1/go.mod h1:S1GPSBCYCIhmVNfcth17y2zZtQT6wzkzgwUve0VDWWA=
github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dgraph-io/badger/v4 v4.7.0 h1:Q+J8HApYAY7UMpL8d9owqiB+odzEc0zn/aqOD9jhc6Y=
github.com/dgraph-io/badger/v4 v4.7.0/go.mod h1:He7TzG3YBy3j4f5baj5B7Zl2XyfNe5bl4Udl0aPemVA=
github.com/dgraph-io/ristretto/v2 v2.2.0 h1:bkY3XzJcXoMuELV8F+vS8kzNgicwQFAaGINAEJdWGOM=
github.com/dgraph-io/ristretto/v2 v2.2.0/go.mod h1:RZrm63UmcBAaYWC1DotLYBmTvgkrs0+XhBd7Npn7/zI=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dgryski/trifles v0.0.0-20230903005119-f50d829f2e54 h1:SG7nF6SRlWhcT7cNTs5R6Hk4V2lcmLz2NsG2VnInyNo=
@@ -989,6 +994,7 @@ github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.
github.com/envoyproxy/go-control-plane v0.10.3/go.mod h1:fJJn/j26vwOu972OllsvAgJJM//w9BV6Fxbg2LuVd34=
github.com/envoyproxy/go-control-plane v0.11.0/go.mod h1:VnHyVMpzcLvCFt9yUz1UnCwHLhwx1WguiVDV7pTG/tI=
github.com/envoyproxy/go-control-plane v0.13.0/go.mod h1:GRaKG3dwvFoTg4nj7aXdZnvMg4d7nvT/wl9WgVXn3Q8=
github.com/envoyproxy/go-control-plane v0.13.1/go.mod h1:X45hY0mufo6Fd0KW3rqsGvQMw58jvjymeCzBU3mWyHw=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v0.6.7/go.mod h1:dyJXwwfPK2VSqiB9Klm1J6romD608Ba7Hij42vrOBCo=
github.com/envoyproxy/protoc-gen-validate v0.9.1/go.mod h1:OKNgG7TCp5pF4d6XftA0++PMirau2/yoOwVac3AbF2w=
@@ -1096,6 +1102,7 @@ github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqw
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-viper/mapstructure/v2 v2.2.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/gobuffalo/flect v1.0.3 h1:xeWBM2nui+qnVvNM4S3foBhCAL2XgPU+a7FdpelbTq4=
github.com/gobuffalo/flect v1.0.3/go.mod h1:A5msMlrHtLqh9umBSnvabjsMrCcCpAyzglnDvkbYKHs=
github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
@@ -1133,7 +1140,6 @@ github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+Licev
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gomodule/redigo v1.8.2/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -1143,8 +1149,9 @@ github.com/google/btree v1.1.2 h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU=
github.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
github.com/google/cel-go v0.20.1 h1:nDx9r8S3L4pE61eDdt8igGj8rf5kjYR3ILxWIpWNi84=
github.com/google/cel-go v0.20.1/go.mod h1:kWcIzTsPX0zmQ+H3TirHstLLf9ep5QTsZBN9u4dOYLg=
github.com/google/flatbuffers v2.0.8+incompatible h1:ivUb1cGomAB101ZM1T0nOiWz9pSrTMoa9+EiY7igmkM=
github.com/google/flatbuffers v2.0.8+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/flatbuffers v25.2.10+incompatible h1:F3vclr7C3HpB1k9mxCGRMXq6FdUalZ6H/pNX4FP1v0Q=
github.com/google/flatbuffers v25.2.10+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
@@ -1183,7 +1190,7 @@ github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm4
github.com/google/s2a-go v0.1.0/go.mod h1:OJpEgntRZo8ugHpF9hkoLJbS5dSI20XZeXJ9JVywLlM=
github.com/google/s2a-go v0.1.3/go.mod h1:Ej+mSEMGRnqRzjc7VtF+jdBwYG5fuJfiZ8ELkjEwM0A=
github.com/google/s2a-go v0.1.4/go.mod h1:Ej+mSEMGRnqRzjc7VtF+jdBwYG5fuJfiZ8ELkjEwM0A=
github.com/google/s2a-go v0.1.7/go.mod h1:50CgR4k1jNlWBu4UfS4AcfhVe1r6pdZPygJ3R8F0Qdw=
github.com/google/s2a-go v0.1.8/go.mod h1:6iNWHTpQ+nfNRN5E00MSdfDwVesa8hhS32PhPO8deJA=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@@ -1193,7 +1200,7 @@ github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY
github.com/googleapis/enterprise-certificate-proxy v0.2.0/go.mod h1:8C0jb7/mgJe/9KK8Lm7X9ctZC2t60YyIpYEI16jx0Qg=
github.com/googleapis/enterprise-certificate-proxy v0.2.1/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
github.com/googleapis/enterprise-certificate-proxy v0.2.3/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
github.com/googleapis/enterprise-certificate-proxy v0.3.4/go.mod h1:YKe7cfqYXjKGpGvmSg28/fFvhNzinZQm8DGnaburhGA=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
@@ -1208,10 +1215,9 @@ github.com/googleapis/gax-go/v2 v2.7.1/go.mod h1:4orTrqY6hXxxaUL4LHIPl6lGo8vAE38
github.com/googleapis/gax-go/v2 v2.8.0/go.mod h1:4orTrqY6hXxxaUL4LHIPl6lGo8vAE38/qKbhSAKP6QI=
github.com/googleapis/gax-go/v2 v2.10.0/go.mod h1:4UOEnMCrxsSqQ940WnTiD6qJ63le2ev3xfyagutxiPw=
github.com/googleapis/gax-go/v2 v2.11.0/go.mod h1:DxmR61SGKkGLa2xigwuZIQpkCI2S5iydzRfb3peWZJI=
github.com/googleapis/gax-go/v2 v2.12.0/go.mod h1:y+aIqrI5eb1YGMVJfuV3185Ts/D7qKpsEkdD5+I6QGU=
github.com/googleapis/gax-go/v2 v2.14.1/go.mod h1:Hb/NubMaVM88SrNkvl8X/o8XWwDJEPqouaLeN2IUxoA=
github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/googleapis/google-cloud-go-testing v0.0.0-20210719221736-1c9a4c676720/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/gorilla/handlers v1.5.2 h1:cLTUSsNkgcwhgRqvCNmdbRWG0A3N4F+M2nWKdScwyEE=
github.com/gorilla/handlers v1.5.2/go.mod h1:dX+xVpaxdSw+q0Qek8SSsl3dfMk3jNddUkMzo0GtH0w=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
@@ -1232,8 +1238,9 @@ github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFb
github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0/go.mod h1:hgWBS7lorOAVIJEQMi4ZsPv9hVvWI6+ch50m39Pf2Ks=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3/go.mod h1:o//XUCC/F+yRGJoPO/VU0GSB0f8Nhgmxx0VIRUvaC0w=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0/go.mod h1:YN5jB8ie0yfIUg6VvR9Kz84aCaG7AsGZnLjhHbUqwPg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1 h1:e9Rjr40Z98/clHv5Yg79Is0NtosR5LXRvdr7o/6NwbA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1/go.mod h1:tIxuGz/9mpox++sgp9fJjHO0+q1X9/UOWd798aAm22M=
github.com/hashicorp/consul/api v1.25.1/go.mod h1:iiLVwR/htV7mas/sy0O+XSuEnrdBUUydemjxcUrAt4g=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
@@ -1299,8 +1306,8 @@ github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+o
github.com/klauspost/asmfmt v1.3.2/go.mod h1:AG8TuvYojzulgDAMCnYn50l/5QV3Bs/tp6j0HLHbNSE=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
@@ -1421,8 +1428,8 @@ github.com/onsi/ginkgo/v2 v2.20.1 h1:YlVIbqct+ZmnEph770q9Q7NVAz4wwIiVNahee6JyUzo
github.com/onsi/ginkgo/v2 v2.20.1/go.mod h1:lG9ey2Z29hR41WMVthyJBGUBcBhGOtoPF2VFMvBXFCI=
github.com/onsi/gomega v1.34.2 h1:pNCwDkzrsv7MS9kpaQvVb1aVLahQXyJ/Tv5oAZMI3i8=
github.com/onsi/gomega v1.34.2/go.mod h1:v1xfxRgk0KIsG+QOdm7p8UosrOzPYRo60fd3B/1Dukc=
github.com/open-policy-agent/opa v0.70.0 h1:B3cqCN2iQAyKxK6+GI+N40uqkin+wzIrM7YA60t9x1U=
github.com/open-policy-agent/opa v0.70.0/go.mod h1:Y/nm5NY0BX0BqjBriKUiV81sCl8XOjjvqQG7dXrggtI=
github.com/open-policy-agent/opa v1.4.0 h1:IGO3xt5HhQKQq2axfa9memIFx5lCyaBlG+fXcgHpd3A=
github.com/open-policy-agent/opa v1.4.0/go.mod h1:DNzZPKqKh4U0n0ANxcCVlw8lCSv2c+h5G/3QvSYdWZ8=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.0 h1:8SG7/vwALn54lVB/0yZ/MMwhFrPYtpEHQb2IpWsCzug=
@@ -1432,8 +1439,8 @@ github.com/opencontainers/runtime-tools v0.9.1-0.20221107090550-2e043c6bd626/go.
github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pelletier/go-toml/v2 v2.1.0 h1:FnwAJ4oYMvbT/34k9zzHuZNrhlz48GB3/s6at6/MHO4=
github.com/pelletier/go-toml/v2 v2.1.0/go.mod h1:tJU2Z3ZkXwnxa4DPO899bsyIoywizdUvyaeZurnPPDc=
github.com/pelletier/go-toml/v2 v2.2.3 h1:YmeHyLY8mFWbdkNWwpr+qIL2bEqT0o95WSdkNHvL12M=
github.com/pelletier/go-toml/v2 v2.2.3/go.mod h1:MfCQTFTvCcUyyvvwm1+G6H/jORL20Xlb6rzQu9GuUkc=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/peterh/liner v1.2.2/go.mod h1:xFwJyiKIXJZUKItq5dGHZSTBRAuG/CpeNpWLyiNRNwI=
@@ -1450,7 +1457,7 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg=
github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Qk=
github.com/pkg/sftp v1.13.7/go.mod h1:KMKI0t3T6hfA+lTR/ssZdunHo+uwq7ghoN09/FSu3DY=
github.com/planetscale/vtprotobuf v0.6.1-0.20240319094008-0393e58bdf10/go.mod h1:t/avpk3KcrXxUnYOhZhMXJlSEyie6gQbtLq5NM3loB8=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -1470,8 +1477,8 @@ github.com/prometheus/client_golang v1.15.1/go.mod h1:e9yaBhRPU2pPNsZwE+JdQl0KEt
github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
github.com/prometheus/client_golang v1.18.0/go.mod h1:T+GXkCk5wSJyOqMIzVgvvjFDlkOQntgjkJWKrN5txjA=
github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -1493,8 +1500,9 @@ github.com/prometheus/common v0.42.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY=
github.com/prometheus/common v0.48.0/go.mod h1:0/KsvlIEfPQCQ5I2iNSAWKPZziNCvRs5EC6ILDTlAPc=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
@@ -1524,8 +1532,9 @@ github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rubenv/sql-migrate v1.7.0 h1:HtQq1xyTN2ISmQDggnh0c9U3JlP8apWh8YO2jzlXpTI=
github.com/rubenv/sql-migrate v1.7.0/go.mod h1:S4wtDEG1CKn+0ShpTtzWhFpHHI5PvCUtiGI+C+Z2THE=
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
@@ -1534,8 +1543,8 @@ github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQD
github.com/ruudk/golang-pdf417 v0.0.0-20181029194003-1af4ab5afa58/go.mod h1:6lfFZQK844Gfx8o5WFuvpxWRwnSoipWe/p622j1v06w=
github.com/ruudk/golang-pdf417 v0.0.0-20201230142125-a7e3863a1245/go.mod h1:pQAZKsJ8yyVxGRWYNEm9oFB8ieLgKFnamEyDmSA0BRk=
github.com/sagikazarmark/crypt v0.17.0/go.mod h1:SMtHTvdmsZMuY/bpZoqokSoChIrcJ/epOxZN58PbZDg=
github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6keLGt6kNQ=
github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
github.com/sagikazarmark/locafero v0.7.0 h1:5MqpDsTGNDhY8sGp0Aowyf0qKsPrhewaLSsFaodPcyo=
github.com/sagikazarmark/locafero v0.7.0/go.mod h1:2za3Cg5rMaTMoG/2Ulr9AwtFaIppKXTRYnozin4aB5k=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
@@ -1559,10 +1568,10 @@ github.com/spf13/afero v1.3.3/go.mod h1:5KUK8ByomD5Ti5Artl0RtHeI5pTF7MIDuXL3yY52
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/afero v1.9.2/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
github.com/spf13/afero v1.10.0/go.mod h1:UBogFpq8E9Hx+xc5CNTTEpTnuHVmXDwZcZcE1eb/UhQ=
github.com/spf13/afero v1.11.0 h1:WJQKhtpdm3v2IzqG8VMqrr6Rf3UYpEF239Jy9wNepM8=
github.com/spf13/afero v1.11.0/go.mod h1:GH9Y3pIexgf1MTIWtNGyogA5MwRIDXGUr+hbWNoBjkY=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/afero v1.12.0 h1:UcOPyRBYczmFn6yvphxkn9ZEOY65cpwGKb5mL36mrqs=
github.com/spf13/afero v1.12.0/go.mod h1:ZTlWwG4/ahT8W7T0WQ5uYmjI9duaLQGy3Q2OAl4sk/4=
github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y=
github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
@@ -1581,8 +1590,8 @@ github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/tchap/go-patricia/v2 v2.3.1 h1:6rQp39lgIYZ+MHmdEq4xzuk1t7OdC35z/xm0BGhTkes=
github.com/tchap/go-patricia/v2 v2.3.1/go.mod h1:VZRHKAb53DLaG+nA9EaYYiaEx6YztwDlLElMsnSHD4k=
github.com/tchap/go-patricia/v2 v2.3.2 h1:xTHFutuitO2zqKAQ5rCROYgUb7Or/+IC3fts9/Yc7nM=
github.com/tchap/go-patricia/v2 v2.3.2/go.mod h1:VZRHKAb53DLaG+nA9EaYYiaEx6YztwDlLElMsnSHD4k=
github.com/tklauser/go-sysconf v0.3.9/go.mod h1:11DU/5sG7UexIrp/O6g35hrWzu0JxlwQ3LSFUzyeuhs=
github.com/tklauser/numcpus v0.3.0/go.mod h1:yFGUr7TUHQRAhyqBcEg0Ge34zDBAsIvJJcyE6boqnA8=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
@@ -1656,6 +1665,8 @@ go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/detectors/gcp v1.29.0/go.mod h1:GW2aWZNwR2ZxDLdv8OyC2G8zkRoQBuURgV7RPQgcPoU=
go.opentelemetry.io/contrib/exporters/autoexport v0.46.1 h1:ysCfPZB9AjUlMa1UHYup3c9dAOCMQX/6sxSfPBUoxHw=
go.opentelemetry.io/contrib/exporters/autoexport v0.46.1/go.mod h1:ha0aiYm+DOPsLHjh0zoQ8W8sLT+LJ58J3j47lGpSLrU=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0 h1:9G6E0TXzGFVfTnawRzrPl83iHOAV7L8NJiR8RSGYV1g=
@@ -1672,8 +1683,8 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9RO
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 h1:R3X6ZXmNPRR8ul6i3WgFURCHzaXjHdm0karRG/+dj3s=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0/go.mod h1:QWFXnDavXWwMx2EEcZsf3yxgEKAqsxQ+Syjp+seyInw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 h1:digkEZCJWobwBqMwC0cwCq8/wkkRy/OowZg5OArWZrM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0/go.mod h1:/OpE/y70qVkndM0TrxT4KBoN3RsFZP0QaofcfYrj76I=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0 h1:xJ2qHD0C1BeYVTLLR9sX12+Qb95kfeD/byKj6Ky1pXg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0/go.mod h1:u5BF1xyjstDowA1R5QAO9JHzqK+ublenEW/dyqTjBVk=
go.opentelemetry.io/otel/exporters/prometheus v0.44.0 h1:08qeJgaPC0YEBu2PQMbqU3rogTlyzpjhCI2b58Yn00w=
go.opentelemetry.io/otel/exporters/prometheus v0.44.0/go.mod h1:ERL2uIeBtg4TxZdojHUwzZfIFlUIjZtxubT5p4h1Gjg=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v0.44.0 h1:dEZWPjVN22urgYCza3PXRUGEyCB++y1sAqm6guWFesk=
@@ -1684,16 +1695,17 @@ go.opentelemetry.io/otel/metric v1.28.0 h1:f0HGvSl1KRAU1DLgLGFjrwVyismPlnuU6JD6b
go.opentelemetry.io/otel/metric v1.28.0/go.mod h1:Fb1eVBFZmLVTMb6PPohq3TO9IIhUisDsbJoL/+uQW4s=
go.opentelemetry.io/otel/sdk v1.28.0 h1:b9d7hIry8yZsgtbmM0DKyPWMMUMlK9NEKuIG4aBqWyE=
go.opentelemetry.io/otel/sdk v1.28.0/go.mod h1:oYj7ClPUA7Iw3m+r7GeEjz0qckQRJK2B8zjcZEfu7Pg=
go.opentelemetry.io/otel/sdk/metric v1.21.0 h1:smhI5oD714d6jHE6Tie36fPx4WDFIg+Y6RfAY4ICcR0=
go.opentelemetry.io/otel/sdk/metric v1.21.0/go.mod h1:FJ8RAsoPGv/wYMgBdUJXOm+6pzFY3YdljnXtv1SBE8Q=
go.opentelemetry.io/otel/sdk/metric v1.29.0 h1:K2CfmJohnRgvZ9UAj2/FhIf/okdWcNdBwe1m8xFXiSY=
go.opentelemetry.io/otel/sdk/metric v1.29.0/go.mod h1:6zZLdCl2fkauYoZIOn/soQIDSWFmNSRcICarHfuhNJQ=
go.opentelemetry.io/otel/trace v1.28.0 h1:GhQ9cUuQGmNDd5BTCP2dAvv75RdMxEfTmYejp+lkx9g=
go.opentelemetry.io/otel/trace v1.28.0/go.mod h1:jPyXzNPg6da9+38HEwElrQiHlVMTnVfM3/yv2OlIHaI=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.opentelemetry.io/proto/otlp v0.15.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
go.opentelemetry.io/proto/otlp v0.19.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.opentelemetry.io/proto/otlp v1.5.0 h1:xJvq7gMzB31/d406fB8U5CBdyQGw4P399D1aQWU/3i4=
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca/go.mod h1:jxU+3+j+71eXOW14274+SmmuW82qJzl6iZSeqEtTGds=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
@@ -1804,8 +1816,8 @@ golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20220922220347-f3bd1da661af/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.1.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.26.0 h1:v/60pFQmzmT9ExmjDv2gGIfi3OqfKoEP6I5+umXlbnQ=
golang.org/x/tools v0.26.0/go.mod h1:TPVVj70c7JJ3WCazhD8OdXcZg/og+b9+tH/KxylGwH0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1887,7 +1899,7 @@ google.golang.org/api v0.122.0/go.mod h1:gcitW0lvnyWjSp9nKxAbdHKIZ6vF4aajGueeslZ
google.golang.org/api v0.124.0/go.mod h1:xu2HQurE5gi/3t1aFCvhPD781p0a3p11sdunTJ2BlP4=
google.golang.org/api v0.125.0/go.mod h1:mBwVAtz+87bEN6CbA1GtZPDOqY2R5ONPqJeIlvyo4Aw=
google.golang.org/api v0.126.0/go.mod h1:mBwVAtz+87bEN6CbA1GtZPDOqY2R5ONPqJeIlvyo4Aw=
google.golang.org/api v0.153.0/go.mod h1:3qNJX5eOmhiWYc67jRA/3GsDw97UFb5ivv7Y2PrriAY=
google.golang.org/api v0.215.0/go.mod h1:fta3CVtuJYOEdugLNWm6WodzOS8KdFckABwN4I40hzY=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
@@ -2035,8 +2047,8 @@ google.golang.org/genproto v0.0.0-20230706204954-ccb25ca9f130/go.mod h1:O9kGHb51
google.golang.org/genproto v0.0.0-20230726155614-23370e0ffb3e/go.mod h1:0ggbjUrZYpy1q+ANUS30SEoGZ53cdfwtbuG7Ptgy108=
google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5/go.mod h1:oH/ZOT02u4kWEp7oYBGYFFkCdKS/uYR9Z7+0/xuuFp8=
google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d/go.mod h1:yZTlhN0tQnXo3h00fuXNCxJdLdIdnVFVBaRJ5LWBbw4=
google.golang.org/genproto v0.0.0-20231211222908-989df2bf70f3 h1:1hfbdAfFbkmpg41000wDVqr7jUpK/Yo+LPnIxxGzmkg=
google.golang.org/genproto v0.0.0-20231211222908-989df2bf70f3/go.mod h1:5RBcpGRxr25RbDzY5w+dmaqpSEvl8Gwl1x2CICf60ic=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 h1:ToEetK57OidYuqD4Q5w+vfEnPvPpuTwedCNVohYJfNk=
google.golang.org/genproto v0.0.0-20241118233622-e639e219e697/go.mod h1:JJrvXBWRZaFMxBufik1a4RpFw4HhgVtBBWQeQgUj2cc=
google.golang.org/genproto/googleapis/api v0.0.0-20230525234020-1aefcd67740a/go.mod h1:ts19tUU+Z0ZShN1y3aPyq2+O3d5FUNNgT6FtOzmrNn8=
google.golang.org/genproto/googleapis/api v0.0.0-20230525234035-dd9d682886f9/go.mod h1:vHYtlOoi6TsQ3Uk2yxR7NI5z8uoV+3pZtR4jmHIkRig=
google.golang.org/genproto/googleapis/api v0.0.0-20230526203410-71b5a4ffd15e/go.mod h1:vHYtlOoi6TsQ3Uk2yxR7NI5z8uoV+3pZtR4jmHIkRig=
@@ -2049,8 +2061,9 @@ google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d/go.
google.golang.org/genproto/googleapis/api v0.0.0-20240513163218-0867130af1f8/go.mod h1:vPrPUTsDCYxXWjP7clS81mZ6/803D8K4iM9Ma27VKas=
google.golang.org/genproto/googleapis/api v0.0.0-20240528184218-531527333157/go.mod h1:99sLkeliLXfdj2J75X3Ho+rrVCaJze0uwN7zDDkjPVU=
google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094/go.mod h1:fJ/e3If/Q67Mj99hin0hMhiNyCRmt6BQ2aWIJshUSJw=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142 h1:wKguEg1hsxI2/L3hUYrpo1RVi48K+uTyzKqprwLXsb8=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142/go.mod h1:d6be+8HhtEtucleCbxpPW9PA9XwISACu8nvpPqF0BVo=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a h1:nwKuGPlUAt+aR+pcrkfFRrTU1BVrSmYyYMxYbUIVHr0=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a/go.mod h1:3kWAYMk1I75K4vykHtKt2ycnOgpA6974V7bREqbsenU=
|
||||
google.golang.org/genproto/googleapis/bytestream v0.0.0-20230530153820-e85fd2cbaebc/go.mod h1:ylj+BE99M198VPbBh6A8d9n3w8fChvyLK3wwBOjXBFA=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20230525234015-3fc162c6f38a/go.mod h1:xURIpW9ES5+/GZhnV6beoEtxQrnkRGIfP5VQG2tCBLc=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20230525234030-28d5490b6b19/go.mod h1:66JfowdXAEgad5O9NnYcsNPLCPZJD++2L9X0PCMODrA=
|
||||
@@ -2069,8 +2082,9 @@ google.golang.org/genproto/googleapis/rpc v0.0.0-20240528184218-531527333157/go.
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240624140628-dc46fd24d27d/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240701130421-f6361c86f094/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240730163845-b1a4ccb954bf/go.mod h1:Ue6ibwXGpU+dqIcODieyLOcgj7z8+IcskoNIgZxtrFY=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 h1:e7S5W7MGGLaSu8j3YjdezkZ+m1/Nm0uRVRMEMGk26Xs=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a h1:51aaUVRocpvUOSQKM6Q7VuoaktNIaMCLuhZB6DKksq4=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a/go.mod h1:uRxBH1mhmO8PGhU89cMcHaXKZqO+OfakD8QQO0oYwlQ=
|
||||
google.golang.org/grpc v1.67.1 h1:zWnc1Vrcno+lHZCOofnIMvycFcc0QRGIzm9dhnDX68E=
|
||||
google.golang.org/grpc v1.67.1/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA=
|
||||
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
|
||||
@@ -2202,7 +2216,7 @@ modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
|
||||
modernc.org/z v1.5.1/go.mod h1:eWFB510QWW5Th9YGZT81s+LwvaAs3Q2yr4sP0rmLkv8=
|
||||
oras.land/oras-go v1.2.6 h1:z8cmxQXBU8yZ4mkytWqXfo6tZcamPwjsuxYU81xJ8Lk=
|
||||
oras.land/oras-go v1.2.6/go.mod h1:OVPc1PegSEe/K8YiLfosrlqlqTN9PUyFvOw5Y9gwrT8=
|
||||
oras.land/oras-go/v2 v2.3.1/go.mod h1:5AQXVEu1X/FKp1F9DMOb5ZItZBOa0y5dha0yCm4NR9c=
|
||||
oras.land/oras-go/v2 v2.5.0/go.mod h1:z4eisnLP530vwIOUOJeBIj0aGI0L1C3d53atvCBqZHg=
|
||||
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
|
||||
rsc.io/goversion v1.2.0/go.mod h1:Eih9y/uIBS3ulggl7KNJ09xGSLcuNaLgmvvqa07sgfo=
|
||||
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
|
||||
|
@@ -2,9 +2,7 @@

module kubesphere.io/api

go 1.23.0

toolchain go1.23.7
go 1.23.8

require (
k8s.io/api v0.31.2
@@ -22,17 +20,20 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/stretchr/testify v1.10.0 // indirect
github.com/x448/float16 v0.8.4 // indirect
golang.org/x/net v0.37.0 // indirect
golang.org/x/net v0.38.0 // indirect
golang.org/x/text v0.23.0 // indirect
google.golang.org/protobuf v1.35.2 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
@@ -43,9 +44,13 @@ require (
)

replace (
github.com/google/go-cmp => github.com/google/go-cmp v0.6.0
github.com/spf13/pflag => github.com/spf13/pflag v1.0.5
golang.org/x/crypto => golang.org/x/crypto v0.32.0
golang.org/x/net => golang.org/x/net v0.37.0
golang.org/x/sync => golang.org/x/sync v0.1.0
golang.org/x/sys => golang.org/x/sys v0.26.0
golang.org/x/text => golang.org/x/text v0.19.0
google.golang.org/protobuf => google.golang.org/protobuf v1.35.2
kubesphere.io/api => ../api
)

@@ -26,7 +26,6 @@ github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -63,8 +62,8 @@ github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
@@ -89,22 +88,12 @@ golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/mod v0.18.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.37.0 h1:1zLorHbz+LYj7MQlSf1+2tPIIgibq2eL5xkrGk6f+2c=
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
@@ -116,6 +105,7 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.22.0/go.mod h1:aCwcsjqvq7Yqt6TNyX7QMU2enbQ/Gt0bo6krSeEri+c=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

@@ -2,15 +2,13 @@

module kubesphere.io/client-go

go 1.23.0

toolchain go1.23.7
go 1.23.8

require (
github.com/google/go-cmp v0.6.0
github.com/google/go-cmp v0.7.0
github.com/google/gofuzz v1.2.0
github.com/stretchr/testify v1.10.0
golang.org/x/net v0.37.0
golang.org/x/net v0.38.0
k8s.io/api v0.31.2
k8s.io/apiextensions-apiserver v0.31.2
k8s.io/apimachinery v0.31.2
@@ -44,11 +42,11 @@ require (
github.com/onsi/gomega v1.34.2 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/x448/float16 v0.8.4 // indirect
golang.org/x/oauth2 v0.22.0 // indirect
golang.org/x/oauth2 v0.26.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/time v0.11.0 // indirect
golang.org/x/tools v0.26.0 // indirect
google.golang.org/protobuf v1.35.2 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
@@ -60,11 +58,17 @@ require (
)

replace (
github.com/google/go-cmp => github.com/google/go-cmp v0.6.0
github.com/spf13/pflag => github.com/spf13/pflag v1.0.5
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0
golang.org/x/crypto => golang.org/x/crypto v0.32.0
golang.org/x/net => golang.org/x/net v0.37.0
golang.org/x/oauth2 => golang.org/x/oauth2 v0.21.0
golang.org/x/sync => golang.org/x/sync v0.1.0
golang.org/x/sys => golang.org/x/sys v0.26.0
golang.org/x/text => golang.org/x/text v0.19.0
google.golang.org/grpc => google.golang.org/grpc v1.67.1
google.golang.org/protobuf => google.golang.org/protobuf v1.35.2
kubesphere.io/api => ../api
kubesphere.io/client-go => ../client-go
)

File diff suppressed because it is too large
@@ -2,9 +2,7 @@

module kubesphere.io/utils

go 1.23.0

toolchain go1.23.7
go 1.23.8

require (
github.com/aws/aws-sdk-go v1.55.5
@@ -37,7 +35,7 @@ require (
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/containerd/containerd v1.7.27 // indirect
github.com/containerd/errdefs v0.3.0 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/cyphar/filepath-securejoin v0.3.1 // indirect
@@ -67,7 +65,7 @@ require (
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/go-cmp v0.7.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
@@ -84,7 +82,7 @@ require (
github.com/jmoiron/sqlx v1.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/lann/builder v0.0.0-20180802200727-47ae307949d0 // indirect
github.com/lann/ps v0.0.0-20150810152359-62de8c46ede0 // indirect
github.com/lib/pq v1.10.9 // indirect
@@ -107,44 +105,48 @@ require (
github.com/onsi/ginkgo/v2 v2.20.1 // indirect
github.com/onsi/gomega v1.34.2 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/prometheus/client_golang v1.20.5 // indirect
github.com/prometheus/client_golang v1.21.1 // indirect
github.com/prometheus/client_model v0.6.1 // indirect
github.com/prometheus/common v0.55.0 // indirect
github.com/prometheus/common v0.62.0 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/rogpeppe/go-internal v1.13.1 // indirect
github.com/rubenv/sql-migrate v1.7.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spf13/cast v1.7.0 // indirect
github.com/spf13/cobra v1.8.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/testify v1.10.0 // indirect
github.com/spf13/cast v1.7.1 // indirect
github.com/spf13/cobra v1.9.1 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0 // indirect
go.opentelemetry.io/otel v1.28.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 // indirect
go.opentelemetry.io/otel/metric v1.28.0 // indirect
go.opentelemetry.io/otel/trace v1.28.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.60.0 // indirect
go.opentelemetry.io/otel v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.35.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0 // indirect
go.opentelemetry.io/otel/metric v1.35.0 // indirect
go.opentelemetry.io/otel/sdk v1.35.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.29.0 // indirect
go.opentelemetry.io/otel/trace v1.35.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
golang.org/x/crypto v0.36.0 // indirect
golang.org/x/net v0.37.0 // indirect
golang.org/x/oauth2 v0.22.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/net v0.38.0 // indirect
golang.org/x/oauth2 v0.26.0 // indirect
golang.org/x/sync v0.12.0 // indirect
golang.org/x/sys v0.31.0 // indirect
golang.org/x/term v0.30.0 // indirect
golang.org/x/text v0.23.0 // indirect
golang.org/x/time v0.7.0 // indirect
golang.org/x/time v0.11.0 // indirect
golang.org/x/tools v0.26.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 // indirect
google.golang.org/grpc v1.67.1 // indirect
google.golang.org/protobuf v1.35.2 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a // indirect
google.golang.org/grpc v1.71.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
@@ -161,10 +163,24 @@ require (
)

replace (
github.com/google/go-cmp => github.com/google/go-cmp v0.6.0
github.com/opencontainers/image-spec => github.com/opencontainers/image-spec v1.1.0
github.com/spf13/cobra => github.com/spf13/cobra v1.8.1
github.com/spf13/pflag => github.com/spf13/pflag v1.0.5
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp => go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.53.0
go.opentelemetry.io/otel => go.opentelemetry.io/otel v1.28.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace => go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0
go.opentelemetry.io/otel/metric => go.opentelemetry.io/otel/metric v1.28.0
go.opentelemetry.io/otel/sdk => go.opentelemetry.io/otel/sdk v1.28.0
go.opentelemetry.io/otel/trace => go.opentelemetry.io/otel/trace v1.28.0
golang.org/x/crypto => golang.org/x/crypto v0.32.0
golang.org/x/net => golang.org/x/net v0.37.0
golang.org/x/oauth2 => golang.org/x/oauth2 v0.21.0
golang.org/x/sync => golang.org/x/sync v0.1.0
golang.org/x/sys => golang.org/x/sys v0.26.0
golang.org/x/text => golang.org/x/text v0.19.0
google.golang.org/grpc => google.golang.org/grpc v1.67.1
google.golang.org/protobuf => google.golang.org/protobuf v1.35.2
kubesphere.io/utils => ../utils
)

||||
@@ -1,6 +1,4 @@
|
||||
cel.dev/expr v0.16.0/go.mod h1:TRSuuV7DlVCE/uwv5QbAiW/v8l5O8C4eEPHeu7gf7Sg=
|
||||
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
|
||||
cloud.google.com/go/compute/metadata v0.5.0/go.mod h1:aHnloV2TPI38yx4s9+wAZhHykWvVCfu7hQbF+9CWoiY=
|
||||
dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
|
||||
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
|
||||
@@ -11,7 +9,6 @@ github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24/go.mod h
|
||||
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0/go.mod h1:OahwfttHWG6eJ0clwcfBAHoDI6X/LV/15hx/wlMZSrU=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/BurntSushi/toml v1.3.2 h1:o7IhLm0Msx3BaB+n3Ag7L8EVlByGnpq14C4YWiu/gL8=
|
||||
github.com/BurntSushi/toml v1.3.2/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
|
||||
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
|
||||
@@ -59,7 +56,6 @@ github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0Bsq
|
||||
github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
|
||||
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
|
||||
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
|
||||
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
|
||||
github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw=
|
||||
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
|
||||
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
|
||||
@@ -69,7 +65,6 @@ github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWR
|
||||
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
|
||||
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
|
||||
github.com/cilium/ebpf v0.9.1/go.mod h1:+OhNOIXx/Fnu1IE8bJz2dzOA+VSfyTfdNUVdlQnxUFY=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/cncf/xds/go v0.0.0-20240723142845-024c85f92f20/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8=
|
||||
github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
|
||||
github.com/containerd/btrfs/v2 v2.0.0/go.mod h1:swkD/7j9HApWpzl8OHfrHNxppPd9l44DFZdF94BUj9k=
|
||||
@@ -82,8 +77,8 @@ github.com/containerd/containerd v1.7.27/go.mod h1:xZmPnl75Vc+BLGt4MIfu6bp+fy03g
|
||||
github.com/containerd/containerd/api v1.8.0/go.mod h1:dFv4lt6S20wTu/hMcP4350RL87qPWLVa/OHOwmmdnYc=
|
||||
github.com/containerd/continuity v0.4.4 h1:/fNVfTJ7wIl/YPMHjf+5H32uFhl63JucB34PlCpMKII=
|
||||
github.com/containerd/continuity v0.4.4/go.mod h1:/lNJvtJKUQStBzpVQ1+rasXO1LAWtUQssk28EZvJ3nE=
|
||||
github.com/containerd/errdefs v0.3.0 h1:FSZgGOeK4yuT/+DnF07/Olde/q4KBoMsaamhXxIMDp4=
|
||||
github.com/containerd/errdefs v0.3.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
|
||||
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
|
||||
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
|
||||
github.com/containerd/fifo v1.1.0/go.mod h1:bmC4NWMbXlt2EZ0Hc7Fx7QzTFxgPID13eH0Qu+MAb2o=
|
||||
github.com/containerd/go-cni v1.1.9/go.mod h1:XYrZJ1d5W6E2VOvjffL3IZq0Dz6bsVlERHbekNK90PM=
|
||||
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
|
||||
@@ -143,9 +138,7 @@ github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNE
|
||||
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g=
|
||||
github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
|
||||
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
|
||||
github.com/envoyproxy/go-control-plane v0.13.0/go.mod h1:GRaKG3dwvFoTg4nj7aXdZnvMg4d7nvT/wl9WgVXn3Q8=
|
||||
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
|
||||
github.com/envoyproxy/protoc-gen-validate v1.1.0/go.mod h1:sXRDRVmzEbkM7CVcM06s9shE/m23dg3wzjl0UWqJ2q4=
|
||||
github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls=
|
||||
github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
|
||||
@@ -170,7 +163,6 @@ github.com/go-gorp/gorp/v3 v3.1.0 h1:ItKF/Vbuj31dmV4jxA1qblpSwkl9g1typ24xoe70IGs
|
||||
github.com/go-gorp/gorp/v3 v3.1.0/go.mod h1:dLEjIyyRNiXvNZ8PSmzpt1GsWAUK8kjVhEpjH8TixEw=
|
||||
github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
|
||||
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
|
||||
github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
|
||||
@@ -204,20 +196,13 @@ github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
|
||||
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
|
||||
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
|
||||
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/glog v1.2.2/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
|
||||
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
|
||||
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
|
||||
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
|
||||
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
|
||||
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
|
||||
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
|
||||
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
|
||||
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
|
||||
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
|
||||
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
|
||||
github.com/gomodule/redigo v1.8.2/go.mod h1:P9dn9mFrCBvWhGE1wpxx6fgq7BAeLBk+UUUzlpkBYO0=
|
||||
@@ -226,13 +211,6 @@ github.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl76
|
||||
github.com/google/cel-go v0.20.1/go.mod h1:kWcIzTsPX0zmQ+H3TirHstLLf9ep5QTsZBN9u4dOYLg=
|
||||
github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I=
|
||||
github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
|
||||
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
@@ -258,8 +236,8 @@ github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2
|
||||
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
|
||||
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0 h1:bkypFPDjIYGfCYD5mRBvpqxfYX1YCS1PXdKYWi8FsN0=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.20.0/go.mod h1:P+Lt/0by1T8bfcF3z737NnSbmxQAppXMRziHUxPOC8k=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1 h1:e9Rjr40Z98/clHv5Yg79Is0NtosR5LXRvdr7o/6NwbA=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.1/go.mod h1:tIxuGz/9mpox++sgp9fJjHO0+q1X9/UOWd798aAm22M=
|
||||
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
|
||||
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
|
||||
@@ -296,8 +274,8 @@ github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7V
|
||||
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
|
||||
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
|
||||
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
|
||||
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
|
||||
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
|
||||
github.com/klauspost/cpuid/v2 v2.0.4/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
|
||||
@@ -408,17 +386,16 @@ github.com/pquerna/cachecontrol v0.1.0/go.mod h1:NrUG3Z7Rdu85UNR3vm7SOsl1nFIeSiQ
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_golang v1.21.1 h1:DOvXXTqVzvkIewV/CDPFdejpMCGeMcbGCQ8YOmu+Ibk=
github.com/prometheus/client_golang v1.21.1/go.mod h1:U9NM32ykUErtVBxdvD3zfi+EuFkkaBvMb09mIfe0Zgg=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
github.com/prometheus/common v0.62.0 h1:xasJaQlnWAeyHdUBeGjXmutelfJHWMRr+Fg4QszZ2Io=
github.com/prometheus/common v0.62.0/go.mod h1:vyBcEuLSvWos9B1+CyL7JZ2up+uFzXhkqml0W5zIY1I=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
@@ -430,8 +407,8 @@ github.com/redis/go-redis/extra/redisotel/v9 v9.0.5 h1:EfpWLLCyXw8PSM2/XNJLjI3Pb
github.com/redis/go-redis/extra/redisotel/v9 v9.0.5/go.mod h1:WZjPDy7VNzn77AAfnAfVjZNvfJTYfPetfZk5yoSTLaQ=
github.com/redis/go-redis/v9 v9.1.0 h1:137FnGdk+EQdCbye1FW+qOEcY5S+SpY9T0NiuqvtfMY=
github.com/redis/go-redis/v9 v9.1.0/go.mod h1:urWj3He21Dj5k4TK1y59xH8Uj6ATueP8AH1cY3lZl4c=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rubenv/sql-migrate v1.7.0 h1:HtQq1xyTN2ISmQDggnh0c9U3JlP8apWh8YO2jzlXpTI=
github.com/rubenv/sql-migrate v1.7.0/go.mod h1:S4wtDEG1CKn+0ShpTtzWhFpHHI5PvCUtiGI+C+Z2THE=
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
@@ -445,8 +422,8 @@ github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPx
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE1GqG0=
github.com/spf13/cast v1.7.0 h1:ntdiHjuueXFgm5nzDRdOS4yfT43P5Fnud6DH50rz/7w=
github.com/spf13/cast v1.7.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cast v1.7.1 h1:cuNEagBQEHWN1FnbGEjCXL2szYEXqfJPbP2HNUaca9Y=
github.com/spf13/cast v1.7.1/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
@@ -505,6 +482,7 @@ go.etcd.io/etcd/server/v3 v3.5.13/go.mod h1:K/8nbsGupHqmr5MkgaZpLlH1QdX1pcNQLAkO
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/exporters/autoexport v0.46.1 h1:ysCfPZB9AjUlMa1UHYup3c9dAOCMQX/6sxSfPBUoxHw=
go.opentelemetry.io/contrib/exporters/autoexport v0.46.1/go.mod h1:ha0aiYm+DOPsLHjh0zoQ8W8sLT+LJ58J3j47lGpSLrU=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.53.0/go.mod h1:azvtTADFQJA8mX80jIH/akaE7h+dbm/sVuaHqN13w74=
@@ -520,8 +498,8 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9RO
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0 h1:R3X6ZXmNPRR8ul6i3WgFURCHzaXjHdm0karRG/+dj3s=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.28.0/go.mod h1:QWFXnDavXWwMx2EEcZsf3yxgEKAqsxQ+Syjp+seyInw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 h1:digkEZCJWobwBqMwC0cwCq8/wkkRy/OowZg5OArWZrM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0/go.mod h1:/OpE/y70qVkndM0TrxT4KBoN3RsFZP0QaofcfYrj76I=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0 h1:xJ2qHD0C1BeYVTLLR9sX12+Qb95kfeD/byKj6Ky1pXg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.35.0/go.mod h1:u5BF1xyjstDowA1R5QAO9JHzqK+ublenEW/dyqTjBVk=
go.opentelemetry.io/otel/exporters/prometheus v0.44.0 h1:08qeJgaPC0YEBu2PQMbqU3rogTlyzpjhCI2b58Yn00w=
go.opentelemetry.io/otel/exporters/prometheus v0.44.0/go.mod h1:ERL2uIeBtg4TxZdojHUwzZfIFlUIjZtxubT5p4h1Gjg=
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v0.44.0 h1:dEZWPjVN22urgYCza3PXRUGEyCB++y1sAqm6guWFesk=
@@ -532,12 +510,12 @@ go.opentelemetry.io/otel/metric v1.28.0 h1:f0HGvSl1KRAU1DLgLGFjrwVyismPlnuU6JD6b
go.opentelemetry.io/otel/metric v1.28.0/go.mod h1:Fb1eVBFZmLVTMb6PPohq3TO9IIhUisDsbJoL/+uQW4s=
go.opentelemetry.io/otel/sdk v1.28.0 h1:b9d7hIry8yZsgtbmM0DKyPWMMUMlK9NEKuIG4aBqWyE=
go.opentelemetry.io/otel/sdk v1.28.0/go.mod h1:oYj7ClPUA7Iw3m+r7GeEjz0qckQRJK2B8zjcZEfu7Pg=
go.opentelemetry.io/otel/sdk/metric v1.21.0 h1:smhI5oD714d6jHE6Tie36fPx4WDFIg+Y6RfAY4ICcR0=
go.opentelemetry.io/otel/sdk/metric v1.21.0/go.mod h1:FJ8RAsoPGv/wYMgBdUJXOm+6pzFY3YdljnXtv1SBE8Q=
go.opentelemetry.io/otel/sdk/metric v1.29.0 h1:K2CfmJohnRgvZ9UAj2/FhIf/okdWcNdBwe1m8xFXiSY=
go.opentelemetry.io/otel/sdk/metric v1.29.0/go.mod h1:6zZLdCl2fkauYoZIOn/soQIDSWFmNSRcICarHfuhNJQ=
go.opentelemetry.io/otel/trace v1.28.0 h1:GhQ9cUuQGmNDd5BTCP2dAvv75RdMxEfTmYejp+lkx9g=
go.opentelemetry.io/otel/trace v1.28.0/go.mod h1:jPyXzNPg6da9+38HEwElrQiHlVMTnVfM3/yv2OlIHaI=
go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
go.opentelemetry.io/proto/otlp v1.5.0 h1:xJvq7gMzB31/d406fB8U5CBdyQGw4P399D1aQWU/3i4=
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca h1:VdD38733bfYv5tUZwEIskMM93VanwNIi5bIKnDrJdEY=
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca/go.mod h1:jxU+3+j+71eXOW14274+SmmuW82qJzl6iZSeqEtTGds=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
@@ -548,11 +526,7 @@ go.uber.org/zap v1.26.0 h1:sI7k6L95XOKS281NhVKOFCUNIvv9e0w4BF8N3u+tCRo=
go.uber.org/zap v1.26.0/go.mod h1:dtElttAiwGvoJ/vj4IwHBS/gXsEu/pZ50mUIRWuG0so=
golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
@@ -561,19 +535,6 @@ golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.21.0 h1:vvrHzRwRfVKSiLrG+d4FMl/Qi4ukBCE6kZlTUkDYRT0=
golang.org/x/mod v0.21.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.37.0 h1:1zLorHbz+LYj7MQlSf1+2tPIIgibq2eL5xkrGk6f+2c=
golang.org/x/net v0.37.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8=
golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs=
@@ -584,22 +545,14 @@ golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/telemetry v0.0.0-20240521205824-bda55230c457/go.mod h1:pRgIJT+bRLFKnoM1ldnzKoxTIn14Yxz928LQRYYgIN0=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/term v0.30.0 h1:PQ39fJZ+mfadBm0y5WlL4vlM7Sx1Hgf13sMIY2+QS9Y=
golang.org/x/term v0.30.0/go.mod h1:NYYFdzHoI5wRh/h5tDMdMqCqPJZEuNqVR5xJLd/n67g=
golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ=
golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
@@ -614,31 +567,15 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
gomodules.xyz/jsonpatch/v2 v2.4.0/go.mod h1:AH3dM2RI6uoBZxn3LVrfvJ3E0/9dG4cSrbuBJT4moAY=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20231211222908-989df2bf70f3 h1:1hfbdAfFbkmpg41000wDVqr7jUpK/Yo+LPnIxxGzmkg=
google.golang.org/genproto v0.0.0-20231211222908-989df2bf70f3/go.mod h1:5RBcpGRxr25RbDzY5w+dmaqpSEvl8Gwl1x2CICf60ic=
google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142 h1:wKguEg1hsxI2/L3hUYrpo1RVi48K+uTyzKqprwLXsb8=
google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142/go.mod h1:d6be+8HhtEtucleCbxpPW9PA9XwISACu8nvpPqF0BVo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142 h1:e7S5W7MGGLaSu8j3YjdezkZ+m1/Nm0uRVRMEMGk26Xs=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240814211410-ddb44dafa142/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a h1:nwKuGPlUAt+aR+pcrkfFRrTU1BVrSmYyYMxYbUIVHr0=
google.golang.org/genproto/googleapis/api v0.0.0-20250218202821-56aae31c358a/go.mod h1:3kWAYMk1I75K4vykHtKt2ycnOgpA6974V7bREqbsenU=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a h1:51aaUVRocpvUOSQKM6Q7VuoaktNIaMCLuhZB6DKksq4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250218202821-56aae31c358a/go.mod h1:uRxBH1mhmO8PGhU89cMcHaXKZqO+OfakD8QQO0oYwlQ=
google.golang.org/grpc v1.67.1 h1:zWnc1Vrcno+lHZCOofnIMvycFcc0QRGIzm9dhnDX68E=
google.golang.org/grpc v1.67.1/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.35.2 h1:8Ar7bF+apOIoThw1EdZl0p1oWvMqTHmpA2fRTyZO8io=
google.golang.org/protobuf v1.35.2/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
@@ -664,8 +601,6 @@ gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o=
gotest.tools/v3 v3.4.0/go.mod h1:CtbdzLSsqVhDgMtKsx03ird5YTGB3ar27v0u/yKBW5g=
helm.sh/helm/v3 v3.16.2 h1:Y9v7ry+ubQmi+cb5zw1Llx8OKHU9Hk9NQ/+P+LGBe2o=
helm.sh/helm/v3 v3.16.2/go.mod h1:SyTXgKBjNqi2NPsHCW5dDAsHqvGIu0kdNYNH9gQaw70=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/api v0.31.2 h1:3wLBbL5Uom/8Zy98GRPXpJ254nEFpl+hwndmk9RwmL0=
k8s.io/api v0.31.2/go.mod h1:bWmGvrGPssSK1ljmLzd3pwCQ9MgoTsRCuK35u6SygUk=
k8s.io/apiextensions-apiserver v0.31.2 h1:W8EwUb8+WXBLu56ser5IudT2cOho0gAKeTOnywBLxd0=
4
vendor/github.com/OneOfOne/xxhash/.gitignore
generated
vendored
@@ -1,4 +0,0 @@
*.txt
*.pprof
cmap2/
cache/
13
vendor/github.com/OneOfOne/xxhash/.travis.yml
generated
vendored
@@ -1,13 +0,0 @@
language: go
sudo: false

go:
- "1.10"
- "1.11"
- "1.12"
- master

script:
- go test -tags safe ./...
- go test ./...
-
187
vendor/github.com/OneOfOne/xxhash/LICENSE
generated
vendored
@@ -1,187 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
74
vendor/github.com/OneOfOne/xxhash/README.md
generated
vendored
@@ -1,74 +0,0 @@
# xxhash [](https://godoc.org/github.com/OneOfOne/xxhash) [](https://travis-ci.org/OneOfOne/xxhash) [](https://gocover.io/github.com/OneOfOne/xxhash)

This is a native Go implementation of the excellent [xxhash](https://github.com/Cyan4973/xxHash)* algorithm, an extremely fast non-cryptographic Hash algorithm, working at speeds close to RAM limits.

* The C implementation is ([Copyright](https://github.com/Cyan4973/xxHash/blob/master/LICENSE) (c) 2012-2014, Yann Collet)

## Install

    go get github.com/OneOfOne/xxhash

## Features

* On Go 1.7+ the pure go version is faster than CGO for all inputs.
* Supports ChecksumString{32,64} xxhash{32,64}.WriteString, which uses no copies when it can, falls back to copy on appengine.
* The native version falls back to a less optimized version on appengine due to the lack of unsafe.
* Almost as fast as the mostly pure assembly version written by the brilliant [cespare](https://github.com/cespare/xxhash), while also supporting seeds.
* To manually toggle the appengine version build with `-tags safe`.

## Benchmark

### Core i7-4790 @ 3.60GHz, Linux 4.12.6-1-ARCH (64bit), Go tip (+ff90f4af66 2017-08-19)

```bash
➤ go test -bench '64' -count 5 -tags cespare | benchstat /dev/stdin
name                         time/op

# https://github.com/cespare/xxhash
XXSum64Cespare/Func-8        160ns ± 2%
XXSum64Cespare/Struct-8      173ns ± 1%
XXSum64ShortCespare/Func-8   6.78ns ± 1%
XXSum64ShortCespare/Struct-8 19.6ns ± 2%

# this package (default mode, using unsafe)
XXSum64/Func-8               170ns ± 1%
XXSum64/Struct-8             182ns ± 1%
XXSum64Short/Func-8          13.5ns ± 3%
XXSum64Short/Struct-8        20.4ns ± 0%

# this package (appengine, *not* using unsafe)
XXSum64/Func-8               241ns ± 5%
XXSum64/Struct-8             243ns ± 6%
XXSum64Short/Func-8          15.2ns ± 2%
XXSum64Short/Struct-8        23.7ns ± 5%

CRC64ISO-8                   1.23µs ± 1%
CRC64ISOString-8             2.71µs ± 4%
CRC64ISOShort-8              22.2ns ± 3%

Fnv64-8                      2.34µs ± 1%
Fnv64Short-8                 74.7ns ± 8%
```

## Usage

```go
h := xxhash.New64()
// r, err := os.Open("......")
// defer f.Close()
r := strings.NewReader(F)
io.Copy(h, r)
fmt.Println("xxhash.Backend:", xxhash.Backend)
fmt.Println("File checksum:", h.Sum64())
```

[<kbd>playground</kbd>](https://play.golang.org/p/wHKBwfu6CPV)

## TODO

* Rewrite the 32bit version to be more optimized.
* General cleanup as the Go inliner gets smarter.

## License

This project is released under the Apache v2. license. See [LICENSE](LICENSE) for more details.
294
vendor/github.com/OneOfOne/xxhash/xxhash.go
generated
vendored
@@ -1,294 +0,0 @@
package xxhash

import (
	"encoding/binary"
	"errors"
	"hash"
)

const (
	prime32x1 uint32 = 2654435761
	prime32x2 uint32 = 2246822519
	prime32x3 uint32 = 3266489917
	prime32x4 uint32 = 668265263
	prime32x5 uint32 = 374761393

	prime64x1 uint64 = 11400714785074694791
	prime64x2 uint64 = 14029467366897019727
	prime64x3 uint64 = 1609587929392839161
	prime64x4 uint64 = 9650029242287828579
	prime64x5 uint64 = 2870177450012600261

	maxInt32 int32 = (1<<31 - 1)

	// precomputed zero Vs for seed 0
	zero64x1 = 0x60ea27eeadc0b5d6
	zero64x2 = 0xc2b2ae3d27d4eb4f
	zero64x3 = 0x0
	zero64x4 = 0x61c8864e7a143579
)

const (
	magic32         = "xxh\x07"
	magic64         = "xxh\x08"
	marshaled32Size = len(magic32) + 4*7 + 16
	marshaled64Size = len(magic64) + 8*6 + 32 + 1
)

func NewHash32() hash.Hash { return New32() }
func NewHash64() hash.Hash { return New64() }

// Checksum32 returns the checksum of the input data with the seed set to 0.
func Checksum32(in []byte) uint32 {
	return Checksum32S(in, 0)
}

// ChecksumString32 returns the checksum of the input data, without creating a copy, with the seed set to 0.
func ChecksumString32(s string) uint32 {
	return ChecksumString32S(s, 0)
}

type XXHash32 struct {
	mem            [16]byte
	ln, memIdx     int32
	v1, v2, v3, v4 uint32
	seed           uint32
}

// Size returns the number of bytes Sum will return.
func (xx *XXHash32) Size() int {
	return 4
}

// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (xx *XXHash32) BlockSize() int {
	return 16
}

// NewS32 creates a new hash.Hash32 computing the 32bit xxHash checksum starting with the specific seed.
func NewS32(seed uint32) (xx *XXHash32) {
	xx = &XXHash32{
		seed: seed,
	}
	xx.Reset()
	return
}

// New32 creates a new hash.Hash32 computing the 32bit xxHash checksum starting with the seed set to 0.
func New32() *XXHash32 {
	return NewS32(0)
}

func (xx *XXHash32) Reset() {
	xx.v1 = xx.seed + prime32x1 + prime32x2
	xx.v2 = xx.seed + prime32x2
	xx.v3 = xx.seed
	xx.v4 = xx.seed - prime32x1
	xx.ln, xx.memIdx = 0, 0
}

// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
func (xx *XXHash32) Sum(in []byte) []byte {
	s := xx.Sum32()
	return append(in, byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
}

// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (xx *XXHash32) MarshalBinary() ([]byte, error) {
	b := make([]byte, 0, marshaled32Size)
	b = append(b, magic32...)
	b = appendUint32(b, xx.v1)
	b = appendUint32(b, xx.v2)
	b = appendUint32(b, xx.v3)
	b = appendUint32(b, xx.v4)
	b = appendUint32(b, xx.seed)
	b = appendInt32(b, xx.ln)
	b = appendInt32(b, xx.memIdx)
	b = append(b, xx.mem[:]...)
	return b, nil
}

// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (xx *XXHash32) UnmarshalBinary(b []byte) error {
	if len(b) < len(magic32) || string(b[:len(magic32)]) != magic32 {
		return errors.New("xxhash: invalid hash state identifier")
	}
	if len(b) != marshaled32Size {
		return errors.New("xxhash: invalid hash state size")
	}
	b = b[len(magic32):]
	b, xx.v1 = consumeUint32(b)
	b, xx.v2 = consumeUint32(b)
	b, xx.v3 = consumeUint32(b)
	b, xx.v4 = consumeUint32(b)
	b, xx.seed = consumeUint32(b)
	b, xx.ln = consumeInt32(b)
	b, xx.memIdx = consumeInt32(b)
	copy(xx.mem[:], b)
	return nil
}

// Checksum64 is an alias for Checksum64S(in, 0).
func Checksum64(in []byte) uint64 {
	return Checksum64S(in, 0)
}

// ChecksumString64 returns the checksum of the input data, without creating a copy, with the seed set to 0.
func ChecksumString64(s string) uint64 {
	return ChecksumString64S(s, 0)
}

type XXHash64 struct {
	v1, v2, v3, v4 uint64
	seed           uint64
	ln             uint64
	mem            [32]byte
	memIdx         int8
}

// Size returns the number of bytes Sum will return.
func (xx *XXHash64) Size() int {
	return 8
}

// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (xx *XXHash64) BlockSize() int {
	return 32
}

// NewS64 creates a new hash.Hash64 computing the 64bit xxHash checksum starting with the specific seed.
func NewS64(seed uint64) (xx *XXHash64) {
	xx = &XXHash64{
		seed: seed,
	}
	xx.Reset()
	return
}

// New64 creates a new hash.Hash64 computing the 64bit xxHash checksum starting with the seed set to 0x0.
func New64() *XXHash64 {
	return NewS64(0)
}

func (xx *XXHash64) Reset() {
	xx.ln, xx.memIdx = 0, 0
	xx.v1, xx.v2, xx.v3, xx.v4 = resetVs64(xx.seed)
}

// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
func (xx *XXHash64) Sum(in []byte) []byte {
	s := xx.Sum64()
	return append(in, byte(s>>56), byte(s>>48), byte(s>>40), byte(s>>32), byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
}

// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (xx *XXHash64) MarshalBinary() ([]byte, error) {
	b := make([]byte, 0, marshaled64Size)
	b = append(b, magic64...)
	b = appendUint64(b, xx.v1)
	b = appendUint64(b, xx.v2)
	b = appendUint64(b, xx.v3)
	b = appendUint64(b, xx.v4)
	b = appendUint64(b, xx.seed)
	b = appendUint64(b, xx.ln)
	b = append(b, byte(xx.memIdx))
	b = append(b, xx.mem[:]...)
	return b, nil
}

// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
func (xx *XXHash64) UnmarshalBinary(b []byte) error {
	if len(b) < len(magic64) || string(b[:len(magic64)]) != magic64 {
		return errors.New("xxhash: invalid hash state identifier")
	}
	if len(b) != marshaled64Size {
		return errors.New("xxhash: invalid hash state size")
	}
	b = b[len(magic64):]
	b, xx.v1 = consumeUint64(b)
	b, xx.v2 = consumeUint64(b)
	b, xx.v3 = consumeUint64(b)
	b, xx.v4 = consumeUint64(b)
	b, xx.seed = consumeUint64(b)
	b, xx.ln = consumeUint64(b)
	xx.memIdx = int8(b[0])
	b = b[1:]
	copy(xx.mem[:], b)
	return nil
}

func appendInt32(b []byte, x int32) []byte { return appendUint32(b, uint32(x)) }

func appendUint32(b []byte, x uint32) []byte {
	var a [4]byte
	binary.LittleEndian.PutUint32(a[:], x)
	return append(b, a[:]...)
}

func appendUint64(b []byte, x uint64) []byte {
	var a [8]byte
	binary.LittleEndian.PutUint64(a[:], x)
	return append(b, a[:]...)
}

func consumeInt32(b []byte) ([]byte, int32)   { bn, x := consumeUint32(b); return bn, int32(x) }
func consumeUint32(b []byte) ([]byte, uint32) { x := u32(b); return b[4:], x }
func consumeUint64(b []byte) ([]byte, uint64) { x := u64(b); return b[8:], x }

// force the compiler to use ROTL instructions

func rotl32_1(x uint32) uint32  { return (x << 1) | (x >> (32 - 1)) }
func rotl32_7(x uint32) uint32  { return (x << 7) | (x >> (32 - 7)) }
func rotl32_11(x uint32) uint32 { return (x << 11) | (x >> (32 - 11)) }
func rotl32_12(x uint32) uint32 { return (x << 12) | (x >> (32 - 12)) }
func rotl32_13(x uint32) uint32 { return (x << 13) | (x >> (32 - 13)) }
func rotl32_17(x uint32) uint32 { return (x << 17) | (x >> (32 - 17)) }
func rotl32_18(x uint32) uint32 { return (x << 18) | (x >> (32 - 18)) }

func rotl64_1(x uint64) uint64  { return (x << 1) | (x >> (64 - 1)) }
func rotl64_7(x uint64) uint64  { return (x << 7) | (x >> (64 - 7)) }
func rotl64_11(x uint64) uint64 { return (x << 11) | (x >> (64 - 11)) }
func rotl64_12(x uint64) uint64 { return (x << 12) | (x >> (64 - 12)) }
func rotl64_18(x uint64) uint64 { return (x << 18) | (x >> (64 - 18)) }
func rotl64_23(x uint64) uint64 { return (x << 23) | (x >> (64 - 23)) }
func rotl64_27(x uint64) uint64 { return (x << 27) | (x >> (64 - 27)) }
func rotl64_31(x uint64) uint64 { return (x << 31) | (x >> (64 - 31)) }

func mix64(h uint64) uint64 {
	h ^= h >> 33
	h *= prime64x2
	h ^= h >> 29
	h *= prime64x3
	h ^= h >> 32
	return h
}
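mix64 is the XXH64 finalization avalanche. One property worth noting: for empty input with seed 0, the hash is simply the avalanche of prime64x5 (h = seed + prime64x5 + len with everything zero), which is exactly the hard-coded fast-path constant 0xef46db3751d8e999 that Checksum64S returns. A standalone sketch re-declaring the constants and mix64 from this file:

```go
package main

import "fmt"

// Constants copied from the package's const block.
const (
	prime64x2 uint64 = 14029467366897019727
	prime64x3 uint64 = 1609587929392839161
	prime64x5 uint64 = 2870177450012600261
)

// mix64 is the XXH64 finalization avalanche, copied from the package.
func mix64(h uint64) uint64 {
	h ^= h >> 33
	h *= prime64x2
	h ^= h >> 29
	h *= prime64x3
	h ^= h >> 32
	return h
}

func main() {
	// For empty input and seed 0, h starts as seed + prime64x5 + 0,
	// so the checksum is the avalanche of prime64x5 itself.
	fmt.Printf("%#x\n", mix64(prime64x5)) // prints "0xef46db3751d8e999"
}
```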

func resetVs64(seed uint64) (v1, v2, v3, v4 uint64) {
	if seed == 0 {
		return zero64x1, zero64x2, zero64x3, zero64x4
	}
	return (seed + prime64x1 + prime64x2), (seed + prime64x2), (seed), (seed - prime64x1)
}

// borrowed from cespare
func round64(h, v uint64) uint64 {
	h += v * prime64x2
	h = rotl64_31(h)
	h *= prime64x1
	return h
}

func mergeRound64(h, v uint64) uint64 {
	v = round64(0, v)
	h ^= v
	h = h*prime64x1 + prime64x4
	return h
}
161
vendor/github.com/OneOfOne/xxhash/xxhash_go17.go
generated
vendored
@@ -1,161 +0,0 @@
package xxhash

func u32(in []byte) uint32 {
	return uint32(in[0]) | uint32(in[1])<<8 | uint32(in[2])<<16 | uint32(in[3])<<24
}

func u64(in []byte) uint64 {
	return uint64(in[0]) | uint64(in[1])<<8 | uint64(in[2])<<16 | uint64(in[3])<<24 | uint64(in[4])<<32 | uint64(in[5])<<40 | uint64(in[6])<<48 | uint64(in[7])<<56
}

// Checksum32S returns the checksum of the input bytes with the specific seed.
func Checksum32S(in []byte, seed uint32) (h uint32) {
	var i int

	if len(in) > 15 {
		var (
			v1 = seed + prime32x1 + prime32x2
			v2 = seed + prime32x2
			v3 = seed + 0
			v4 = seed - prime32x1
		)
		for ; i < len(in)-15; i += 16 {
			in := in[i : i+16 : len(in)]
			v1 += u32(in[0:4:len(in)]) * prime32x2
			v1 = rotl32_13(v1) * prime32x1

			v2 += u32(in[4:8:len(in)]) * prime32x2
			v2 = rotl32_13(v2) * prime32x1

			v3 += u32(in[8:12:len(in)]) * prime32x2
			v3 = rotl32_13(v3) * prime32x1

			v4 += u32(in[12:16:len(in)]) * prime32x2
			v4 = rotl32_13(v4) * prime32x1
		}

		h = rotl32_1(v1) + rotl32_7(v2) + rotl32_12(v3) + rotl32_18(v4)

	} else {
		h = seed + prime32x5
	}

	h += uint32(len(in))
	for ; i <= len(in)-4; i += 4 {
		in := in[i : i+4 : len(in)]
		h += u32(in[0:4:len(in)]) * prime32x3
		h = rotl32_17(h) * prime32x4
	}

	for ; i < len(in); i++ {
		h += uint32(in[i]) * prime32x5
		h = rotl32_11(h) * prime32x1
	}

	h ^= h >> 15
	h *= prime32x2
	h ^= h >> 13
	h *= prime32x3
	h ^= h >> 16

	return
}

func (xx *XXHash32) Write(in []byte) (n int, err error) {
	i, ml := 0, int(xx.memIdx)
	n = len(in)
	xx.ln += int32(n)

	if d := 16 - ml; ml > 0 && ml+len(in) > 16 {
		xx.memIdx += int32(copy(xx.mem[xx.memIdx:], in[:d]))
		ml, in = 16, in[d:len(in):len(in)]
	} else if ml+len(in) < 16 {
		xx.memIdx += int32(copy(xx.mem[xx.memIdx:], in))
		return
	}

	if ml > 0 {
		i += 16 - ml
		xx.memIdx += int32(copy(xx.mem[xx.memIdx:len(xx.mem):len(xx.mem)], in))
		in := xx.mem[:16:len(xx.mem)]

		xx.v1 += u32(in[0:4:len(in)]) * prime32x2
		xx.v1 = rotl32_13(xx.v1) * prime32x1

		xx.v2 += u32(in[4:8:len(in)]) * prime32x2
		xx.v2 = rotl32_13(xx.v2) * prime32x1

		xx.v3 += u32(in[8:12:len(in)]) * prime32x2
		xx.v3 = rotl32_13(xx.v3) * prime32x1

		xx.v4 += u32(in[12:16:len(in)]) * prime32x2
		xx.v4 = rotl32_13(xx.v4) * prime32x1

		xx.memIdx = 0
	}

	for ; i <= len(in)-16; i += 16 {
		in := in[i : i+16 : len(in)]
		xx.v1 += u32(in[0:4:len(in)]) * prime32x2
		xx.v1 = rotl32_13(xx.v1) * prime32x1

		xx.v2 += u32(in[4:8:len(in)]) * prime32x2
		xx.v2 = rotl32_13(xx.v2) * prime32x1

		xx.v3 += u32(in[8:12:len(in)]) * prime32x2
		xx.v3 = rotl32_13(xx.v3) * prime32x1

		xx.v4 += u32(in[12:16:len(in)]) * prime32x2
		xx.v4 = rotl32_13(xx.v4) * prime32x1
	}

	if len(in)-i != 0 {
		xx.memIdx += int32(copy(xx.mem[xx.memIdx:], in[i:len(in):len(in)]))
	}

	return
}

func (xx *XXHash32) Sum32() (h uint32) {
	var i int32
	if xx.ln > 15 {
		h = rotl32_1(xx.v1) + rotl32_7(xx.v2) + rotl32_12(xx.v3) + rotl32_18(xx.v4)
	} else {
		h = xx.seed + prime32x5
	}

	h += uint32(xx.ln)

	if xx.memIdx > 0 {
		for ; i < xx.memIdx-3; i += 4 {
			in := xx.mem[i : i+4 : len(xx.mem)]
			h += u32(in[0:4:len(in)]) * prime32x3
			h = rotl32_17(h) * prime32x4
		}

		for ; i < xx.memIdx; i++ {
			h += uint32(xx.mem[i]) * prime32x5
			h = rotl32_11(h) * prime32x1
		}
	}
	h ^= h >> 15
	h *= prime32x2
	h ^= h >> 13
	h *= prime32x3
	h ^= h >> 16

	return
}

// Checksum64S returns the 64bit xxhash checksum for a single input.
func Checksum64S(in []byte, seed uint64) uint64 {
	if len(in) == 0 && seed == 0 {
		return 0xef46db3751d8e999
	}

	if len(in) > 31 {
		return checksum64(in, seed)
	}

	return checksum64Short(in, seed)
}
183
vendor/github.com/OneOfOne/xxhash/xxhash_safe.go
generated
vendored
@@ -1,183 +0,0 @@
// +build appengine safe ppc64le ppc64be mipsle mips s390x

package xxhash

// Backend returns the current version of xxhash being used.
const Backend = "GoSafe"

func ChecksumString32S(s string, seed uint32) uint32 {
	return Checksum32S([]byte(s), seed)
}

func (xx *XXHash32) WriteString(s string) (int, error) {
	if len(s) == 0 {
		return 0, nil
	}
	return xx.Write([]byte(s))
}

func ChecksumString64S(s string, seed uint64) uint64 {
	return Checksum64S([]byte(s), seed)
}

func (xx *XXHash64) WriteString(s string) (int, error) {
	if len(s) == 0 {
		return 0, nil
	}
	return xx.Write([]byte(s))
}

func checksum64(in []byte, seed uint64) (h uint64) {
	var (
		v1, v2, v3, v4 = resetVs64(seed)

		i int
	)

	for ; i < len(in)-31; i += 32 {
		in := in[i : i+32 : len(in)]
		v1 = round64(v1, u64(in[0:8:len(in)]))
		v2 = round64(v2, u64(in[8:16:len(in)]))
		v3 = round64(v3, u64(in[16:24:len(in)]))
		v4 = round64(v4, u64(in[24:32:len(in)]))
	}

	h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)

	h = mergeRound64(h, v1)
	h = mergeRound64(h, v2)
	h = mergeRound64(h, v3)
	h = mergeRound64(h, v4)

	h += uint64(len(in))

	for ; i < len(in)-7; i += 8 {
		h ^= round64(0, u64(in[i:len(in):len(in)]))
		h = rotl64_27(h)*prime64x1 + prime64x4
	}

	for ; i < len(in)-3; i += 4 {
		h ^= uint64(u32(in[i:len(in):len(in)])) * prime64x1
		h = rotl64_23(h)*prime64x2 + prime64x3
	}

	for ; i < len(in); i++ {
		h ^= uint64(in[i]) * prime64x5
		h = rotl64_11(h) * prime64x1
	}

	return mix64(h)
}

func checksum64Short(in []byte, seed uint64) uint64 {
	var (
		h = seed + prime64x5 + uint64(len(in))
		i int
	)

	for ; i < len(in)-7; i += 8 {
		k := u64(in[i : i+8 : len(in)])
		h ^= round64(0, k)
		h = rotl64_27(h)*prime64x1 + prime64x4
	}

	for ; i < len(in)-3; i += 4 {
		h ^= uint64(u32(in[i:i+4:len(in)])) * prime64x1
		h = rotl64_23(h)*prime64x2 + prime64x3
	}

	for ; i < len(in); i++ {
		h ^= uint64(in[i]) * prime64x5
		h = rotl64_11(h) * prime64x1
	}

	return mix64(h)
}

func (xx *XXHash64) Write(in []byte) (n int, err error) {
	var (
		ml = int(xx.memIdx)
		d  = 32 - ml
	)

	n = len(in)
	xx.ln += uint64(n)

	if ml+len(in) < 32 {
		xx.memIdx += int8(copy(xx.mem[xx.memIdx:len(xx.mem):len(xx.mem)], in))
		return
	}

	i, v1, v2, v3, v4 := 0, xx.v1, xx.v2, xx.v3, xx.v4
	if ml > 0 && ml+len(in) > 32 {
		xx.memIdx += int8(copy(xx.mem[xx.memIdx:len(xx.mem):len(xx.mem)], in[:d:len(in)]))
		in = in[d:len(in):len(in)]

		in := xx.mem[0:32:len(xx.mem)]

		v1 = round64(v1, u64(in[0:8:len(in)]))
		v2 = round64(v2, u64(in[8:16:len(in)]))
		v3 = round64(v3, u64(in[16:24:len(in)]))
		v4 = round64(v4, u64(in[24:32:len(in)]))

		xx.memIdx = 0
	}

	for ; i < len(in)-31; i += 32 {
		in := in[i : i+32 : len(in)]
		v1 = round64(v1, u64(in[0:8:len(in)]))
		v2 = round64(v2, u64(in[8:16:len(in)]))
		v3 = round64(v3, u64(in[16:24:len(in)]))
		v4 = round64(v4, u64(in[24:32:len(in)]))
	}

	if len(in)-i != 0 {
		xx.memIdx += int8(copy(xx.mem[xx.memIdx:], in[i:len(in):len(in)]))
	}

	xx.v1, xx.v2, xx.v3, xx.v4 = v1, v2, v3, v4

	return
}

func (xx *XXHash64) Sum64() (h uint64) {
	var i int
	if xx.ln > 31 {
		v1, v2, v3, v4 := xx.v1, xx.v2, xx.v3, xx.v4
		h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)

		h = mergeRound64(h, v1)
		h = mergeRound64(h, v2)
		h = mergeRound64(h, v3)
		h = mergeRound64(h, v4)
	} else {
		h = xx.seed + prime64x5
	}

	h += uint64(xx.ln)
	if xx.memIdx > 0 {
		in := xx.mem[:xx.memIdx]
		for ; i < int(xx.memIdx)-7; i += 8 {
			in := in[i : i+8 : len(in)]
			k := u64(in[0:8:len(in)])
			k *= prime64x2
			k = rotl64_31(k)
			k *= prime64x1
			h ^= k
			h = rotl64_27(h)*prime64x1 + prime64x4
		}

		for ; i < int(xx.memIdx)-3; i += 4 {
			in := in[i : i+4 : len(in)]
			h ^= uint64(u32(in[0:4:len(in)])) * prime64x1
			h = rotl64_23(h)*prime64x2 + prime64x3
		}

		for ; i < int(xx.memIdx); i++ {
			h ^= uint64(in[i]) * prime64x5
			h = rotl64_11(h) * prime64x1
		}
	}

	return mix64(h)
}
240
vendor/github.com/OneOfOne/xxhash/xxhash_unsafe.go
generated
vendored
@@ -1,240 +0,0 @@
// +build !safe
// +build !appengine
// +build !ppc64le
// +build !mipsle
// +build !ppc64be
// +build !mips
// +build !s390x

package xxhash

import (
	"reflect"
	"unsafe"
)

// Backend returns the current version of xxhash being used.
const Backend = "GoUnsafe"

// ChecksumString32S returns the checksum of the input data, without creating a copy, with the specific seed.
func ChecksumString32S(s string, seed uint32) uint32 {
	if len(s) == 0 {
		return Checksum32S(nil, seed)
	}
	ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
	return Checksum32S((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)], seed)
}

func (xx *XXHash32) WriteString(s string) (int, error) {
	if len(s) == 0 {
		return 0, nil
	}

	ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
	return xx.Write((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)])
}

// ChecksumString64S returns the checksum of the input data, without creating a copy, with the specific seed.
func ChecksumString64S(s string, seed uint64) uint64 {
	if len(s) == 0 {
		return Checksum64S(nil, seed)
	}

	ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
	return Checksum64S((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)], seed)
}

func (xx *XXHash64) WriteString(s string) (int, error) {
	if len(s) == 0 {
		return 0, nil
	}
	ss := (*reflect.StringHeader)(unsafe.Pointer(&s))
	return xx.Write((*[maxInt32]byte)(unsafe.Pointer(ss.Data))[:len(s):len(s)])
}
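The WriteString helpers above use reflect.StringHeader to view a string's bytes without copying. A standalone sketch of the same zero-copy idea; note that reflect.StringHeader is deprecated, and on Go 1.20+ the supported equivalent is unsafe.StringData with unsafe.Slice:

```go
package main

import (
	"fmt"
	"unsafe"
)

// stringBytes returns a read-only byte view of s without copying,
// using the Go 1.20+ unsafe.StringData/unsafe.Slice pattern that
// supersedes the reflect.StringHeader trick shown above.
// The returned slice must never be written to.
func stringBytes(s string) []byte {
	if len(s) == 0 {
		return nil
	}
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	b := stringBytes("xxhash")
	fmt.Println(len(b), string(b)) // prints "6 xxhash"
}
```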

//go:nocheckptr
func checksum64(in []byte, seed uint64) uint64 {
	var (
		wordsLen = len(in) >> 3
		words    = ((*[maxInt32 / 8]uint64)(unsafe.Pointer(&in[0])))[:wordsLen:wordsLen]

		v1, v2, v3, v4 = resetVs64(seed)

		h uint64
		i int
	)

	for ; i < len(words)-3; i += 4 {
		words := (*[4]uint64)(unsafe.Pointer(&words[i]))

		v1 = round64(v1, words[0])
		v2 = round64(v2, words[1])
		v3 = round64(v3, words[2])
		v4 = round64(v4, words[3])
	}

	h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)

	h = mergeRound64(h, v1)
	h = mergeRound64(h, v2)
	h = mergeRound64(h, v3)
	h = mergeRound64(h, v4)

	h += uint64(len(in))

	for _, k := range words[i:] {
		h ^= round64(0, k)
		h = rotl64_27(h)*prime64x1 + prime64x4
	}

	if in = in[wordsLen<<3 : len(in) : len(in)]; len(in) > 3 {
		words := (*[1]uint32)(unsafe.Pointer(&in[0]))
		h ^= uint64(words[0]) * prime64x1
		h = rotl64_23(h)*prime64x2 + prime64x3

		in = in[4:len(in):len(in)]
	}

	for _, b := range in {
		h ^= uint64(b) * prime64x5
		h = rotl64_11(h) * prime64x1
	}

	return mix64(h)
}

//go:nocheckptr
func checksum64Short(in []byte, seed uint64) uint64 {
	var (
		h = seed + prime64x5 + uint64(len(in))
		i int
	)

	if len(in) > 7 {
		var (
			wordsLen = len(in) >> 3
			words    = ((*[maxInt32 / 8]uint64)(unsafe.Pointer(&in[0])))[:wordsLen:wordsLen]
		)

		for i := range words {
			h ^= round64(0, words[i])
			h = rotl64_27(h)*prime64x1 + prime64x4
		}

		i = wordsLen << 3
	}

	if in = in[i:len(in):len(in)]; len(in) > 3 {
		words := (*[1]uint32)(unsafe.Pointer(&in[0]))
		h ^= uint64(words[0]) * prime64x1
		h = rotl64_23(h)*prime64x2 + prime64x3

		in = in[4:len(in):len(in)]
	}

	for _, b := range in {
		h ^= uint64(b) * prime64x5
		h = rotl64_11(h) * prime64x1
	}

	return mix64(h)
}

func (xx *XXHash64) Write(in []byte) (n int, err error) {
	mem, idx := xx.mem[:], int(xx.memIdx)

	xx.ln, n = xx.ln+uint64(len(in)), len(in)

	if idx+len(in) < 32 {
		xx.memIdx += int8(copy(mem[idx:len(mem):len(mem)], in))
		return
	}

	var (
		v1, v2, v3, v4 = xx.v1, xx.v2, xx.v3, xx.v4

		i int
	)

	if d := 32 - int(idx); d > 0 && int(idx)+len(in) > 31 {
		copy(mem[idx:len(mem):len(mem)], in[:len(in):len(in)])

		words := (*[4]uint64)(unsafe.Pointer(&mem[0]))

		v1 = round64(v1, words[0])
		v2 = round64(v2, words[1])
		v3 = round64(v3, words[2])
		v4 = round64(v4, words[3])

		if in, xx.memIdx = in[d:len(in):len(in)], 0; len(in) == 0 {
			goto RET
		}
	}

	for ; i < len(in)-31; i += 32 {
		words := (*[4]uint64)(unsafe.Pointer(&in[i]))

		v1 = round64(v1, words[0])
		v2 = round64(v2, words[1])
		v3 = round64(v3, words[2])
		v4 = round64(v4, words[3])
	}

	if len(in)-i != 0 {
		xx.memIdx += int8(copy(mem[xx.memIdx:len(mem):len(mem)], in[i:len(in):len(in)]))
	}

RET:
	xx.v1, xx.v2, xx.v3, xx.v4 = v1, v2, v3, v4

	return
}

func (xx *XXHash64) Sum64() (h uint64) {
	if seed := xx.seed; xx.ln > 31 {
		v1, v2, v3, v4 := xx.v1, xx.v2, xx.v3, xx.v4
		h = rotl64_1(v1) + rotl64_7(v2) + rotl64_12(v3) + rotl64_18(v4)

		h = mergeRound64(h, v1)
		h = mergeRound64(h, v2)
		h = mergeRound64(h, v3)
		h = mergeRound64(h, v4)
	} else if seed == 0 {
		h = prime64x5
	} else {
		h = seed + prime64x5
	}

	h += uint64(xx.ln)

	if xx.memIdx == 0 {
		return mix64(h)
	}

	var (
		in       = xx.mem[:xx.memIdx:xx.memIdx]
		wordsLen = len(in) >> 3
		words    = ((*[maxInt32 / 8]uint64)(unsafe.Pointer(&in[0])))[:wordsLen:wordsLen]
	)

	for _, k := range words {
		h ^= round64(0, k)
		h = rotl64_27(h)*prime64x1 + prime64x4
	}

	if in = in[wordsLen<<3 : len(in) : len(in)]; len(in) > 3 {
		words := (*[1]uint32)(unsafe.Pointer(&in[0]))

		h ^= uint64(words[0]) * prime64x1
		h = rotl64_23(h)*prime64x2 + prime64x3

		in = in[4:len(in):len(in)]
	}

	for _, b := range in {
		h ^= uint64(b) * prime64x5
		h = rotl64_11(h) * prime64x1
	}

	return mix64(h)
}
2
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/BUILD.bazel
generated
vendored
@@ -73,7 +73,7 @@ go_test(
         "@org_golang_google_genproto_googleapis_api//httpbody",
         "@org_golang_google_genproto_googleapis_rpc//errdetails",
         "@org_golang_google_genproto_googleapis_rpc//status",
-        "@org_golang_google_grpc//:go_default_library",
+        "@org_golang_google_grpc//:grpc",
         "@org_golang_google_grpc//codes",
         "@org_golang_google_grpc//health/grpc_health_v1",
         "@org_golang_google_grpc//metadata",
11
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/context.go
generated
vendored
@@ -49,6 +49,7 @@ var malformedHTTPHeaders = map[string]struct{}{
 type (
 	rpcMethodKey       struct{}
 	httpPathPatternKey struct{}
+	httpPatternKey     struct{}
 
 	AnnotateContextOption func(ctx context.Context) context.Context
 )
@@ -404,3 +405,13 @@ func HTTPPathPattern(ctx context.Context) (string, bool) {
 func withHTTPPathPattern(ctx context.Context, httpPathPattern string) context.Context {
 	return context.WithValue(ctx, httpPathPatternKey{}, httpPathPattern)
 }
+
+// HTTPPattern returns the HTTP path pattern struct relating to the HTTP handler, if one exists.
+func HTTPPattern(ctx context.Context) (Pattern, bool) {
+	v, ok := ctx.Value(httpPatternKey{}).(Pattern)
+	return v, ok
+}
+
+func withHTTPPattern(ctx context.Context, httpPattern Pattern) context.Context {
+	return context.WithValue(ctx, httpPatternKey{}, httpPattern)
+}
6
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/convert.go
generated
vendored
@@ -94,7 +94,7 @@ func Int64(val string) (int64, error) {
 }
 
 // Int64Slice converts 'val' where individual integers are separated by
-// 'sep' into a int64 slice.
+// 'sep' into an int64 slice.
 func Int64Slice(val, sep string) ([]int64, error) {
 	s := strings.Split(val, sep)
 	values := make([]int64, len(s))
@@ -118,7 +118,7 @@ func Int32(val string) (int32, error) {
 }
 
 // Int32Slice converts 'val' where individual integers are separated by
-// 'sep' into a int32 slice.
+// 'sep' into an int32 slice.
 func Int32Slice(val, sep string) ([]int32, error) {
 	s := strings.Split(val, sep)
 	values := make([]int32, len(s))
@@ -190,7 +190,7 @@ func Bytes(val string) ([]byte, error) {
 }
 
 // BytesSlice converts 'val' where individual bytes sequences, encoded in URL-safe
-// base64 without padding, are separated by 'sep' into a slice of bytes slices slice.
+// base64 without padding, are separated by 'sep' into a slice of byte slices.
 func BytesSlice(val, sep string) ([][]byte, error) {
 	s := strings.Split(val, sep)
 	values := make([][]byte, len(s))
31
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/errors.go
generated
vendored
@@ -81,6 +81,21 @@ func HTTPError(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.R
 	mux.errorHandler(ctx, mux, marshaler, w, r, err)
 }
 
+// HTTPStreamError uses the mux-configured stream error handler to notify the client of an error without closing the connection.
+func HTTPStreamError(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.ResponseWriter, r *http.Request, err error) {
+	st := mux.streamErrorHandler(ctx, err)
+	msg := errorChunk(st)
+	buf, err := marshaler.Marshal(msg)
+	if err != nil {
+		grpclog.Errorf("Failed to marshal an error: %v", err)
+		return
+	}
+	if _, err := w.Write(buf); err != nil {
+		grpclog.Errorf("Failed to notify error to client: %v", err)
+		return
+	}
+}
+
 // DefaultHTTPErrorHandler is the default error handler.
 // If "err" is a gRPC Status, the function replies with the status code mapped by HTTPStatusFromCode.
 // If "err" is an HTTPStatusError, the function replies with the status code provided by that struct. This is
@@ -93,6 +108,7 @@ func HTTPError(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.R
|
||||
func DefaultHTTPErrorHandler(ctx context.Context, mux *ServeMux, marshaler Marshaler, w http.ResponseWriter, r *http.Request, err error) {
|
||||
// return Internal when Marshal failed
|
||||
const fallback = `{"code": 13, "message": "failed to marshal error message"}`
|
||||
const fallbackRewriter = `{"code": 13, "message": "failed to rewrite error message"}`
|
||||
|
||||
var customStatus *HTTPStatusError
|
||||
if errors.As(err, &customStatus) {
|
||||
@@ -100,19 +116,28 @@ func DefaultHTTPErrorHandler(ctx context.Context, mux *ServeMux, marshaler Marsh
|
||||
}
|
||||
|
||||
s := status.Convert(err)
|
||||
pb := s.Proto()
|
||||
|
||||
w.Header().Del("Trailer")
|
||||
w.Header().Del("Transfer-Encoding")
|
||||
|
||||
contentType := marshaler.ContentType(pb)
|
||||
respRw, err := mux.forwardResponseRewriter(ctx, s.Proto())
|
||||
if err != nil {
|
||||
grpclog.Errorf("Failed to rewrite error message %q: %v", s, err)
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
if _, err := io.WriteString(w, fallbackRewriter); err != nil {
|
||||
grpclog.Errorf("Failed to write response: %v", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
contentType := marshaler.ContentType(respRw)
|
||||
w.Header().Set("Content-Type", contentType)
|
||||
|
||||
if s.Code() == codes.Unauthenticated {
|
||||
w.Header().Set("WWW-Authenticate", s.Message())
|
||||
}
|
||||
|
||||
buf, merr := marshaler.Marshal(pb)
|
||||
buf, merr := marshaler.Marshal(respRw)
|
||||
if merr != nil {
|
||||
grpclog.Errorf("Failed to marshal error message %q: %v", s, merr)
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
|
||||
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/fieldmask.go | 2 (generated, vendored)

@@ -155,7 +155,7 @@ func buildPathsBlindly(name string, in interface{}) []string {
 	return paths
 }
 
-// fieldMaskPathItem stores a in-progress deconstruction of a path for a fieldmask
+// fieldMaskPathItem stores an in-progress deconstruction of a path for a fieldmask
 type fieldMaskPathItem struct {
 	// the list of prior fields leading up to node connected by dots
 	path string
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/handler.go | 42 (generated, vendored)

@@ -3,6 +3,7 @@ package runtime
 import (
 	"context"
+	"errors"
 	"fmt"
 	"io"
 	"net/http"
 	"net/textproto"
@@ -55,20 +56,33 @@ func ForwardResponseStream(ctx context.Context, mux *ServeMux, marshaler Marshal
 		return
 	}
 
+	respRw, err := mux.forwardResponseRewriter(ctx, resp)
+	if err != nil {
+		grpclog.Errorf("Rewrite error: %v", err)
+		handleForwardResponseStreamError(ctx, wroteHeader, marshaler, w, req, mux, err, delimiter)
+		return
+	}
+
 	if !wroteHeader {
-		w.Header().Set("Content-Type", marshaler.ContentType(resp))
+		var contentType string
+		if sct, ok := marshaler.(StreamContentType); ok {
+			contentType = sct.StreamContentType(respRw)
+		} else {
+			contentType = marshaler.ContentType(respRw)
+		}
+		w.Header().Set("Content-Type", contentType)
 	}
 
 	var buf []byte
-	httpBody, isHTTPBody := resp.(*httpbody.HttpBody)
+	httpBody, isHTTPBody := respRw.(*httpbody.HttpBody)
 	switch {
-	case resp == nil:
+	case respRw == nil:
 		buf, err = marshaler.Marshal(errorChunk(status.New(codes.Internal, "empty response")))
 	case isHTTPBody:
 		buf = httpBody.GetData()
 	default:
-		result := map[string]interface{}{"result": resp}
-		if rb, ok := resp.(responseBody); ok {
+		result := map[string]interface{}{"result": respRw}
+		if rb, ok := respRw.(responseBody); ok {
 			result["result"] = rb.XXX_ResponseBody()
 		}
 
@@ -164,12 +178,17 @@ func ForwardResponseMessage(ctx context.Context, mux *ServeMux, marshaler Marsha
 		HTTPError(ctx, mux, marshaler, w, req, err)
 		return
 	}
+	respRw, err := mux.forwardResponseRewriter(ctx, resp)
+	if err != nil {
+		grpclog.Errorf("Rewrite error: %v", err)
+		HTTPError(ctx, mux, marshaler, w, req, err)
+		return
+	}
 	var buf []byte
-	var err error
-	if rb, ok := resp.(responseBody); ok {
+	if rb, ok := respRw.(responseBody); ok {
 		buf, err = marshaler.Marshal(rb.XXX_ResponseBody())
 	} else {
-		buf, err = marshaler.Marshal(resp)
+		buf, err = marshaler.Marshal(respRw)
 	}
 	if err != nil {
 		grpclog.Errorf("Marshal error: %v", err)
@@ -177,11 +196,11 @@ func ForwardResponseMessage(ctx context.Context, mux *ServeMux, marshaler Marsha
 		return
 	}
 
-	if !doForwardTrailers {
+	if !doForwardTrailers && mux.writeContentLength {
 		w.Header().Set("Content-Length", strconv.Itoa(len(buf)))
 	}
 
-	if _, err = w.Write(buf); err != nil {
+	if _, err = w.Write(buf); err != nil && !errors.Is(err, http.ErrBodyNotAllowed) {
 		grpclog.Errorf("Failed to write response: %v", err)
 	}
 
@@ -201,8 +220,7 @@ func handleForwardResponseOptions(ctx context.Context, w http.ResponseWriter, re
 	}
 	for _, opt := range opts {
 		if err := opt(ctx, w, resp); err != nil {
-			grpclog.Errorf("Error handling ForwardResponseOptions: %v", err)
-			return err
+			return fmt.Errorf("error handling ForwardResponseOptions: %w", err)
		}
 	}
 	return nil
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/marshaler.go | 8 (generated, vendored)

@@ -48,3 +48,11 @@ type Delimited interface {
 	// Delimiter returns the record separator for the stream.
 	Delimiter() []byte
 }
+
+// StreamContentType defines the streaming content type.
+type StreamContentType interface {
+	// StreamContentType returns the content type for a stream. This shares the
+	// same behaviour as for `Marshaler.ContentType`, but is called, if present,
+	// in the case of a streamed response.
+	StreamContentType(v interface{}) string
+}
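`StreamContentType` is an optional interface: the forwarding code (see the handler.go hunk above) type-asserts the marshaler against it and prefers the stream-specific content type when available. A self-contained sketch of that optional-interface pattern (`ndjson` and the content-type strings are illustrative, not part of the vendored package):

```go
package main

import "fmt"

// Marshaler and StreamContentType mirror the interfaces in the diff;
// the latter is optional and may additionally be implemented by a marshaler.
type Marshaler interface {
	ContentType(v interface{}) string
}

type StreamContentType interface {
	StreamContentType(v interface{}) string
}

// ndjson is a hypothetical marshaler that uses a different content type
// for streamed responses than for unary ones.
type ndjson struct{}

func (ndjson) ContentType(v interface{}) string       { return "application/json" }
func (ndjson) StreamContentType(v interface{}) string { return "application/x-ndjson" }

// contentTypeFor reproduces the selection logic added to ForwardResponseStream:
// prefer the stream-specific content type when the marshaler provides one.
func contentTypeFor(m Marshaler, v interface{}) string {
	if sct, ok := m.(StreamContentType); ok {
		return sct.StreamContentType(v)
	}
	return m.ContentType(v)
}

func main() {
	fmt.Println(contentTypeFor(ndjson{}, nil))
}
```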
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/marshaler_registry.go | 4 (generated, vendored)

@@ -86,8 +86,8 @@ func (m marshalerRegistry) add(mime string, marshaler Marshaler) error {
 // It allows for a mapping of case-sensitive Content-Type MIME type string to runtime.Marshaler interfaces.
 //
 // For example, you could allow the client to specify the use of the runtime.JSONPb marshaler
-// with a "application/jsonpb" Content-Type and the use of the runtime.JSONBuiltin marshaler
-// with a "application/json" Content-Type.
+// with an "application/jsonpb" Content-Type and the use of the runtime.JSONBuiltin marshaler
+// with an "application/json" Content-Type.
 // "*" can be used to match any Content-Type.
 // This can be attached to a ServerMux with the marshaler option.
 func makeMarshalerMIMERegistry() marshalerRegistry {
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/mux.go | 77 (generated, vendored)

@@ -48,12 +48,19 @@ var encodedPathSplitter = regexp.MustCompile("(/|%2F)")
 // A HandlerFunc handles a specific pair of path pattern and HTTP method.
 type HandlerFunc func(w http.ResponseWriter, r *http.Request, pathParams map[string]string)
 
+// A Middleware handler wraps another HandlerFunc to do some pre- and/or post-processing of the request. This is used as an alternative to gRPC interceptors when using the direct-to-implementation
+// registration methods. It is generally recommended to use gRPC client or server interceptors instead
+// where possible.
+type Middleware func(HandlerFunc) HandlerFunc
+
 // ServeMux is a request multiplexer for grpc-gateway.
 // It matches http requests to patterns and invokes the corresponding handler.
 type ServeMux struct {
 	// handlers maps HTTP method to a list of handlers.
 	handlers                map[string][]handler
+	middlewares             []Middleware
 	forwardResponseOptions  []func(context.Context, http.ResponseWriter, proto.Message) error
+	forwardResponseRewriter ForwardResponseRewriter
 	marshalers              marshalerRegistry
 	incomingHeaderMatcher   HeaderMatcherFunc
 	outgoingHeaderMatcher   HeaderMatcherFunc
@@ -64,11 +71,30 @@ type ServeMux struct {
 	routingErrorHandler       RoutingErrorHandlerFunc
 	disablePathLengthFallback bool
 	unescapingMode            UnescapingMode
+	writeContentLength        bool
 }
 
 // ServeMuxOption is an option that can be given to a ServeMux on construction.
 type ServeMuxOption func(*ServeMux)
 
+// ForwardResponseRewriter is the signature of a function that is capable of rewriting messages
+// before they are forwarded in a unary, stream, or error response.
+type ForwardResponseRewriter func(ctx context.Context, response proto.Message) (any, error)
+
+// WithForwardResponseRewriter returns a ServeMuxOption that allows for implementers to insert logic
+// that can rewrite the final response before it is forwarded.
+//
+// The response rewriter function is called during unary message forwarding, stream message
+// forwarding and when errors are being forwarded.
+//
+// NOTE: Using this option will likely make what is generated by `protoc-gen-openapiv2` incorrect.
+// Since this option involves making runtime changes to the response shape or type.
+func WithForwardResponseRewriter(fwdResponseRewriter ForwardResponseRewriter) ServeMuxOption {
+	return func(sm *ServeMux) {
+		sm.forwardResponseRewriter = fwdResponseRewriter
+	}
+}
+
 // WithForwardResponseOption returns a ServeMuxOption representing the forwardResponseOption.
 //
 // forwardResponseOption is an option that will be called on the relevant context.Context,
@@ -89,6 +115,15 @@ func WithUnescapingMode(mode UnescapingMode) ServeMuxOption {
 	}
 }
 
+// WithMiddlewares sets server middleware for all handlers. This is useful as an alternative to gRPC
+// interceptors when using the direct-to-implementation registration methods and cannot rely
+// on gRPC interceptors. It's recommended to use gRPC interceptors instead if possible.
+func WithMiddlewares(middlewares ...Middleware) ServeMuxOption {
+	return func(serveMux *ServeMux) {
+		serveMux.middlewares = append(serveMux.middlewares, middlewares...)
+	}
+}
+
 // SetQueryParameterParser sets the query parameter parser, used to populate message from query parameters.
 // Configuring this will mean the generated OpenAPI output is no longer correct, and it should be
 // done with careful consideration.
@@ -224,6 +259,13 @@ func WithDisablePathLengthFallback() ServeMuxOption {
 	}
 }
 
+// WithWriteContentLength returns a ServeMuxOption to enable writing content length on non-streaming responses
+func WithWriteContentLength() ServeMuxOption {
+	return func(serveMux *ServeMux) {
+		serveMux.writeContentLength = true
+	}
+}
+
 // WithHealthEndpointAt returns a ServeMuxOption that will add an endpoint to the created ServeMux at the path specified by endpointPath.
 // When called the handler will forward the request to the upstream grpc service health check (defined in the
 // gRPC Health Checking Protocol).
@@ -277,13 +319,14 @@ func WithHealthzEndpoint(healthCheckClient grpc_health_v1.HealthClient) ServeMux
 // NewServeMux returns a new ServeMux whose internal mapping is empty.
 func NewServeMux(opts ...ServeMuxOption) *ServeMux {
 	serveMux := &ServeMux{
-		handlers:               make(map[string][]handler),
-		forwardResponseOptions: make([]func(context.Context, http.ResponseWriter, proto.Message) error, 0),
-		marshalers:             makeMarshalerMIMERegistry(),
-		errorHandler:           DefaultHTTPErrorHandler,
-		streamErrorHandler:     DefaultStreamErrorHandler,
-		routingErrorHandler:    DefaultRoutingErrorHandler,
-		unescapingMode:         UnescapingModeDefault,
+		handlers:                make(map[string][]handler),
+		forwardResponseOptions:  make([]func(context.Context, http.ResponseWriter, proto.Message) error, 0),
+		forwardResponseRewriter: func(ctx context.Context, response proto.Message) (any, error) { return response, nil },
+		marshalers:              makeMarshalerMIMERegistry(),
+		errorHandler:            DefaultHTTPErrorHandler,
+		streamErrorHandler:      DefaultStreamErrorHandler,
+		routingErrorHandler:     DefaultRoutingErrorHandler,
+		unescapingMode:          UnescapingModeDefault,
 	}
 
 	for _, opt := range opts {
@@ -305,6 +348,9 @@ func NewServeMux(opts ...ServeMuxOption) *ServeMux {
 
 // Handle associates "h" to the pair of HTTP method and path pattern.
 func (s *ServeMux) Handle(meth string, pat Pattern, h HandlerFunc) {
+	if len(s.middlewares) > 0 {
+		h = chainMiddlewares(s.middlewares)(h)
+	}
 	s.handlers[meth] = append([]handler{{pat: pat, h: h}}, s.handlers[meth]...)
 }
 
@@ -405,7 +451,7 @@ func (s *ServeMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 			}
 			continue
 		}
-		h.h(w, r, pathParams)
+		s.handleHandler(h, w, r, pathParams)
 		return
 	}
 
@@ -458,7 +504,7 @@ func (s *ServeMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
 				s.errorHandler(ctx, s, outboundMarshaler, w, r, sterr)
 				return
 			}
-			h.h(w, r, pathParams)
+			s.handleHandler(h, w, r, pathParams)
 			return
 		}
 		_, outboundMarshaler := MarshalerForRequest(s, r)
@@ -484,3 +530,16 @@ type handler struct {
 	pat Pattern
 	h   HandlerFunc
 }
+
+func (s *ServeMux) handleHandler(h handler, w http.ResponseWriter, r *http.Request, pathParams map[string]string) {
+	h.h(w, r.WithContext(withHTTPPattern(r.Context(), h.pat)), pathParams)
+}
+
+func chainMiddlewares(mws []Middleware) Middleware {
+	return func(next HandlerFunc) HandlerFunc {
+		for i := len(mws); i > 0; i-- {
+			next = mws[i-1](next)
+		}
+		return next
+	}
+}
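The `chainMiddlewares` helper above composes the registered middlewares by iterating from the end of the slice, so the first middleware passed to `WithMiddlewares` ends up outermost and runs first. A self-contained sketch of that composition order (the `HandlerFunc` here takes a plain string instead of the real `http.ResponseWriter`/`*http.Request` signature, purely to keep the example runnable):

```go
package main

import "fmt"

// Simplified stand-ins for the mux's HandlerFunc and Middleware types.
type HandlerFunc func(msg string)
type Middleware func(HandlerFunc) HandlerFunc

// chainMiddlewares matches the composition in the diff: wrapping from the
// last middleware inward leaves mws[0] as the outermost wrapper.
func chainMiddlewares(mws []Middleware) Middleware {
	return func(next HandlerFunc) HandlerFunc {
		for i := len(mws); i > 0; i-- {
			next = mws[i-1](next)
		}
		return next
	}
}

// tag returns a middleware that appends its name before calling the next handler,
// making the execution order observable.
func tag(name string) Middleware {
	return func(next HandlerFunc) HandlerFunc {
		return func(msg string) { next(msg + name) }
	}
}

func main() {
	h := chainMiddlewares([]Middleware{tag("A"), tag("B")})(func(msg string) {
		fmt.Println(msg) // "A" ran before "B": registration order is preserved
	})
	h("order:")
}
```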
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/proto2_convert.go | 4 (generated, vendored)

@@ -40,7 +40,7 @@ func Float32P(val string) (*float32, error) {
 }
 
 // Int64P parses the given string representation of an integer
-// and returns a pointer to a int64 whose value is same as the parsed integer.
+// and returns a pointer to an int64 whose value is same as the parsed integer.
 func Int64P(val string) (*int64, error) {
 	i, err := Int64(val)
 	if err != nil {
@@ -50,7 +50,7 @@ func Int64P(val string) (*int64, error) {
 }
 
 // Int32P parses the given string representation of an integer
-// and returns a pointer to a int32 whose value is same as the parsed integer.
+// and returns a pointer to an int32 whose value is same as the parsed integer.
 func Int32P(val string) (*int32, error) {
 	i, err := Int32(val)
 	if err != nil {
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/runtime/query.go | 22 (generated, vendored)

@@ -126,6 +126,15 @@ func populateFieldValueFromPath(msgValue protoreflect.Message, fieldPath []strin
 		}
 	}
 
+	// Check if oneof already set
+	if of := fieldDescriptor.ContainingOneof(); of != nil && !of.IsSynthetic() {
+		if f := msgValue.WhichOneof(of); f != nil {
+			if fieldDescriptor.Message() == nil || fieldDescriptor.FullName() != f.FullName() {
+				return fmt.Errorf("field already set for oneof %q", of.FullName().Name())
+			}
+		}
+	}
+
 	// If this is the last element, we're done
 	if i == len(fieldPath)-1 {
 		break
@@ -140,13 +149,6 @@ func populateFieldValueFromPath(msgValue protoreflect.Message, fieldPath []strin
 		msgValue = msgValue.Mutable(fieldDescriptor).Message()
 	}
 
-	// Check if oneof already set
-	if of := fieldDescriptor.ContainingOneof(); of != nil {
-		if f := msgValue.WhichOneof(of); f != nil {
-			return fmt.Errorf("field already set for oneof %q", of.FullName().Name())
-		}
-	}
-
 	switch {
 	case fieldDescriptor.IsList():
 		return populateRepeatedField(fieldDescriptor, msgValue.Mutable(fieldDescriptor).List(), values)
@@ -291,7 +293,11 @@ func parseMessage(msgDescriptor protoreflect.MessageDescriptor, value string) (p
 		if err != nil {
 			return protoreflect.Value{}, err
 		}
-		msg = timestamppb.New(t)
+		timestamp := timestamppb.New(t)
+		if ok := timestamp.IsValid(); !ok {
+			return protoreflect.Value{}, fmt.Errorf("%s before 0001-01-01", value)
+		}
+		msg = timestamp
 	case "google.protobuf.Duration":
 		d, err := time.ParseDuration(value)
 		if err != nil {
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/utilities/pattern.go | 2 (generated, vendored)

@@ -1,6 +1,6 @@
 package utilities
 
-// An OpCode is a opcode of compiled path patterns.
+// OpCode is an opcode of compiled path patterns.
 type OpCode int
 
 // These constants are the valid values of OpCode.
vendor/github.com/grpc-ecosystem/grpc-gateway/v2/utilities/string_array_flag.go | 2 (generated, vendored)

@@ -5,7 +5,7 @@ import (
 	"strings"
 )
 
-// flagInterface is an cut down interface to `flag`
+// flagInterface is a cut down interface to `flag`
 type flagInterface interface {
 	Var(value flag.Value, name string, usage string)
 }
vendor/github.com/klauspost/compress/.goreleaser.yml | 6 (generated, vendored)

@@ -1,5 +1,5 @@
-# This is an example goreleaser.yaml file with some sane defaults.
-# Make sure to check the documentation at http://goreleaser.com
+version: 2
+
 before:
   hooks:
   - ./gen.sh
@@ -99,7 +99,7 @@ archives:
 checksum:
   name_template: 'checksums.txt'
 snapshot:
-  name_template: "{{ .Tag }}-next"
+  version_template: "{{ .Tag }}-next"
 changelog:
   sort: asc
   filters:
169
vendor/github.com/klauspost/compress/README.md
generated
vendored
169
vendor/github.com/klauspost/compress/README.md
generated
vendored
@@ -14,8 +14,55 @@ This package provides various compression algorithms.
|
||||
[](https://github.com/klauspost/compress/actions/workflows/go.yml)
|
||||
[](https://sourcegraph.com/github.com/klauspost/compress?badge)
|
||||
|
||||
# package usage
|
||||
|
||||
Use `go get github.com/klauspost/compress@latest` to add it to your project.
|
||||
|
||||
This package will support the current Go version and 2 versions back.
|
||||
|
||||
* Use the `nounsafe` tag to disable all use of the "unsafe" package.
|
||||
* Use the `noasm` tag to disable all assembly across packages.
|
||||
|
||||
Use the links above for more information on each.
|
||||
|
||||
# changelog
|
||||
|
||||
* Feb 19th, 2025 - [1.18.0](https://github.com/klauspost/compress/releases/tag/v1.18.0)
|
||||
* Add unsafe little endian loaders https://github.com/klauspost/compress/pull/1036
|
||||
* fix: check `r.err != nil` but return a nil value error `err` by @alingse in https://github.com/klauspost/compress/pull/1028
|
||||
* flate: Simplify L4-6 loading https://github.com/klauspost/compress/pull/1043
|
||||
* flate: Simplify matchlen (remove asm) https://github.com/klauspost/compress/pull/1045
|
||||
* s2: Improve small block compression speed w/o asm https://github.com/klauspost/compress/pull/1048
|
||||
* flate: Fix matchlen L5+L6 https://github.com/klauspost/compress/pull/1049
|
||||
* flate: Cleanup & reduce casts https://github.com/klauspost/compress/pull/1050
|
||||
|
||||
* Oct 11th, 2024 - [1.17.11](https://github.com/klauspost/compress/releases/tag/v1.17.11)
|
||||
* zstd: Fix extra CRC written with multiple Close calls https://github.com/klauspost/compress/pull/1017
|
||||
* s2: Don't use stack for index tables https://github.com/klauspost/compress/pull/1014
|
||||
* gzhttp: No content-type on no body response code by @juliens in https://github.com/klauspost/compress/pull/1011
|
||||
* gzhttp: Do not set the content-type when response has no body by @kevinpollet in https://github.com/klauspost/compress/pull/1013
|
||||
|
||||
* Sep 23rd, 2024 - [1.17.10](https://github.com/klauspost/compress/releases/tag/v1.17.10)
|
||||
* gzhttp: Add TransportAlwaysDecompress option. https://github.com/klauspost/compress/pull/978
|
||||
* gzhttp: Add supported decompress request body by @mirecl in https://github.com/klauspost/compress/pull/1002
|
||||
* s2: Add EncodeBuffer buffer recycling callback https://github.com/klauspost/compress/pull/982
|
||||
* zstd: Improve memory usage on small streaming encodes https://github.com/klauspost/compress/pull/1007
|
||||
* flate: read data written with partial flush by @vajexal in https://github.com/klauspost/compress/pull/996
|
||||
|
||||
* Jun 12th, 2024 - [1.17.9](https://github.com/klauspost/compress/releases/tag/v1.17.9)
|
||||
* s2: Reduce ReadFrom temporary allocations https://github.com/klauspost/compress/pull/949
|
||||
* flate, zstd: Shave some bytes off amd64 matchLen by @greatroar in https://github.com/klauspost/compress/pull/963
|
||||
* Upgrade zip/zlib to 1.22.4 upstream https://github.com/klauspost/compress/pull/970 https://github.com/klauspost/compress/pull/971
|
||||
* zstd: BuildDict fails with RLE table https://github.com/klauspost/compress/pull/951
|
||||
|
||||
* Apr 9th, 2024 - [1.17.8](https://github.com/klauspost/compress/releases/tag/v1.17.8)
|
||||
* zstd: Reject blocks where reserved values are not 0 https://github.com/klauspost/compress/pull/885
|
||||
* zstd: Add RLE detection+encoding https://github.com/klauspost/compress/pull/938
|
||||
|
||||
* Feb 21st, 2024 - [1.17.7](https://github.com/klauspost/compress/releases/tag/v1.17.7)
|
||||
* s2: Add AsyncFlush method: Complete the block without flushing by @Jille in https://github.com/klauspost/compress/pull/927
|
||||
* s2: Fix literal+repeat exceeds dst crash https://github.com/klauspost/compress/pull/930
|
||||
|
||||
* Feb 5th, 2024 - [1.17.6](https://github.com/klauspost/compress/releases/tag/v1.17.6)
|
||||
* zstd: Fix incorrect repeat coding in best mode https://github.com/klauspost/compress/pull/923
|
||||
* s2: Fix DecodeConcurrent deadlock on errors https://github.com/klauspost/compress/pull/925
|
||||
@@ -44,9 +91,9 @@ https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/comp
|
||||
* zstd: Fix rare *CORRUPTION* output in "best" mode. See https://github.com/klauspost/compress/pull/876
|
||||
|
||||
* Oct 14th, 2023 - [v1.17.1](https://github.com/klauspost/compress/releases/tag/v1.17.1)
|
||||
* s2: Fix S2 "best" dictionary wrong encoding by @klauspost in https://github.com/klauspost/compress/pull/871
|
||||
* s2: Fix S2 "best" dictionary wrong encoding https://github.com/klauspost/compress/pull/871
|
||||
* flate: Reduce allocations in decompressor and minor code improvements by @fakefloordiv in https://github.com/klauspost/compress/pull/869
|
||||
* s2: Fix EstimateBlockSize on 6&7 length input by @klauspost in https://github.com/klauspost/compress/pull/867
|
||||
* s2: Fix EstimateBlockSize on 6&7 length input https://github.com/klauspost/compress/pull/867
|
||||
|
||||
* Sept 19th, 2023 - [v1.17.0](https://github.com/klauspost/compress/releases/tag/v1.17.0)
|
||||
* Add experimental dictionary builder https://github.com/klauspost/compress/pull/853
|
||||
@@ -81,7 +128,7 @@ https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/comp
|
||||
* zstd: Various minor improvements by @greatroar in https://github.com/klauspost/compress/pull/788 https://github.com/klauspost/compress/pull/794 https://github.com/klauspost/compress/pull/795
|
||||
* s2: Fix huge block overflow https://github.com/klauspost/compress/pull/779
|
||||
* s2: Allow CustomEncoder fallback https://github.com/klauspost/compress/pull/780
|
||||
* gzhttp: Suppport ResponseWriter Unwrap() in gzhttp handler by @jgimenez in https://github.com/klauspost/compress/pull/799
|
||||
* gzhttp: Support ResponseWriter Unwrap() in gzhttp handler by @jgimenez in https://github.com/klauspost/compress/pull/799
|
||||
|
||||
* Mar 13, 2023 - [v1.16.1](https://github.com/klauspost/compress/releases/tag/v1.16.1)
|
||||
* zstd: Speed up + improve best encoder by @greatroar in https://github.com/klauspost/compress/pull/776
|
||||
@@ -103,7 +150,7 @@ https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/comp
|
||||
<summary>See changes to v1.15.x</summary>
|
||||
|
||||
* Jan 21st, 2023 (v1.15.15)
|
||||
* deflate: Improve level 7-9 by @klauspost in https://github.com/klauspost/compress/pull/739
|
||||
* deflate: Improve level 7-9 https://github.com/klauspost/compress/pull/739
|
||||
* zstd: Add delta encoding support by @greatroar in https://github.com/klauspost/compress/pull/728
|
||||
* zstd: Various speed improvements by @greatroar https://github.com/klauspost/compress/pull/741 https://github.com/klauspost/compress/pull/734 https://github.com/klauspost/compress/pull/736 https://github.com/klauspost/compress/pull/744 https://github.com/klauspost/compress/pull/743 https://github.com/klauspost/compress/pull/745
|
||||
* gzhttp: Add SuffixETag() and DropETag() options to prevent ETag collisions on compressed responses by @willbicks in https://github.com/klauspost/compress/pull/740
|
||||
@@ -136,7 +183,7 @@ https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/comp
|
||||
* zstd: Add [WithDecodeAllCapLimit](https://pkg.go.dev/github.com/klauspost/compress@v1.15.10/zstd#WithDecodeAllCapLimit) https://github.com/klauspost/compress/pull/649
|
||||
* Add Go 1.19 - deprecate Go 1.16 https://github.com/klauspost/compress/pull/651
|
||||
* flate: Improve level 5+6 compression https://github.com/klauspost/compress/pull/656
|
||||
* zstd: Improve "better" compresssion https://github.com/klauspost/compress/pull/657
|
||||
* zstd: Improve "better" compression https://github.com/klauspost/compress/pull/657
|
||||
* s2: Improve "best" compression https://github.com/klauspost/compress/pull/658
|
||||
* s2: Improve "better" compression. https://github.com/klauspost/compress/pull/635
|
||||
* s2: Slightly faster non-assembly decompression https://github.com/klauspost/compress/pull/646
|
||||
@@ -146,7 +193,7 @@ https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/comp
|
||||
|
||||
* zstd: Fix decoder crash on amd64 (no BMI) on invalid input https://github.com/klauspost/compress/pull/645
|
||||
* zstd: Disable decoder extended memory copies (amd64) due to possible crashes https://github.com/klauspost/compress/pull/644
|
||||
* zstd: Allow single segments up to "max decoded size" by @klauspost in https://github.com/klauspost/compress/pull/643
|
||||
* zstd: Allow single segments up to "max decoded size" https://github.com/klauspost/compress/pull/643
|
||||
|
||||
* July 13, 2022 (v1.15.8)
|
||||
|
||||
@@ -188,7 +235,7 @@ https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/comp
|
||||
* zstd: Speed up when WithDecoderLowmem(false) https://github.com/klauspost/compress/pull/599
|
||||
* zstd: faster next state update in BMI2 version of decode by @WojciechMula in https://github.com/klauspost/compress/pull/593
|
||||
* huff0: Do not check max size when reading table. https://github.com/klauspost/compress/pull/586
|
||||
* flate: Inplace hashing for level 7-9 by @klauspost in https://github.com/klauspost/compress/pull/590
|
||||
* flate: Inplace hashing for level 7-9 https://github.com/klauspost/compress/pull/590
|
||||
|
||||
|
||||
* May 11, 2022 (v1.15.4)
|
||||
@@ -215,12 +262,12 @@ https://github.com/klauspost/compress/pull/919 https://github.com/klauspost/comp
|
||||
* zstd: Add stricter block size checks in [#523](https://github.com/klauspost/compress/pull/523)
|
||||
|
||||
* Mar 3, 2022 (v1.15.0)
|
||||
* zstd: Refactor decoder by @klauspost in [#498](https://github.com/klauspost/compress/pull/498)
|
||||
* zstd: Add stream encoding without goroutines by @klauspost in [#505](https://github.com/klauspost/compress/pull/505)
|
||||
* zstd: Refactor decoder [#498](https://github.com/klauspost/compress/pull/498)
|
||||
* zstd: Add stream encoding without goroutines [#505](https://github.com/klauspost/compress/pull/505)
|
||||
* huff0: Prevent single blocks exceeding 16 bits by @klauspost in[#507](https://github.com/klauspost/compress/pull/507)
|
||||
* flate: Inline literal emission by @klauspost in [#509](https://github.com/klauspost/compress/pull/509)
|
||||
* gzhttp: Add zstd to transport by @klauspost in [#400](https://github.com/klauspost/compress/pull/400)
|
||||
* gzhttp: Make content-type optional by @klauspost in [#510](https://github.com/klauspost/compress/pull/510)
|
||||
* flate: Inline literal emission [#509](https://github.com/klauspost/compress/pull/509)
|
||||
* gzhttp: Add zstd to transport [#400](https://github.com/klauspost/compress/pull/400)
|
||||
* gzhttp: Make content-type optional [#510](https://github.com/klauspost/compress/pull/510)
|
||||
|
||||
Both compression and decompression now supports "synchronous" stream operations. This means that whenever "concurrency" is set to 1, they will operate without spawning goroutines.
|
||||
|
||||
@@ -237,7 +284,7 @@ While the release has been extensively tested, it is recommended to testing when
* flate: Fix rare huffman only (-2) corruption. [#503](https://github.com/klauspost/compress/pull/503)
* zip: Update deprecated CreateHeaderRaw to correctly call CreateRaw by @saracen in [#502](https://github.com/klauspost/compress/pull/502)
* zip: don't read data descriptor early by @saracen in [#501](https://github.com/klauspost/compress/pull/501)
* huff0: Use static decompression buffer up to 30% faster by @klauspost in [#499](https://github.com/klauspost/compress/pull/499) [#500](https://github.com/klauspost/compress/pull/500)
* huff0: Use static decompression buffer up to 30% faster [#499](https://github.com/klauspost/compress/pull/499) [#500](https://github.com/klauspost/compress/pull/500)

* Feb 17, 2022 (v1.14.3)
* flate: Improve fastest levels compression speed ~10% more throughput. [#482](https://github.com/klauspost/compress/pull/482) [#489](https://github.com/klauspost/compress/pull/489) [#490](https://github.com/klauspost/compress/pull/490) [#491](https://github.com/klauspost/compress/pull/491) [#494](https://github.com/klauspost/compress/pull/494) [#478](https://github.com/klauspost/compress/pull/478)
@@ -339,7 +386,7 @@ While the release has been extensively tested, it is recommended to testing when
* s2: Fix binaries.

* Feb 25, 2021 (v1.11.8)
* s2: Fixed occational out-of-bounds write on amd64. Upgrade recommended.
* s2: Fixed occasional out-of-bounds write on amd64. Upgrade recommended.
* s2: Add AMD64 assembly for better mode. 25-50% faster. [#315](https://github.com/klauspost/compress/pull/315)
* s2: Less upfront decoder allocation. [#322](https://github.com/klauspost/compress/pull/322)
* zstd: Faster "compression" of incompressible data. [#314](https://github.com/klauspost/compress/pull/314)
@@ -518,7 +565,7 @@ While the release has been extensively tested, it is recommended to testing when
* Feb 19, 2016: Faster bit writer, level -2 is 15% faster, level 1 is 4% faster.
* Feb 19, 2016: Handle small payloads faster in level 1-3.
* Feb 19, 2016: Added faster level 2 + 3 compression modes.
* Feb 19, 2016: [Rebalanced compression levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/), so there is a more even progresssion in terms of compression. New default level is 5.
* Feb 19, 2016: [Rebalanced compression levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/), so there is a more even progression in terms of compression. New default level is 5.
* Feb 14, 2016: Snappy: Merge upstream changes.
* Feb 14, 2016: Snappy: Fix aggressive skipping.
* Feb 14, 2016: Snappy: Update benchmark.
@@ -544,12 +591,14 @@ While the release has been extensively tested, it is recommended to testing when

The packages are drop-in replacements for standard libraries. Simply replace the import path to use them:

| old import | new import | Documentation
|--------------------|-----------------------------------------|--------------------|
| `compress/gzip` | `github.com/klauspost/compress/gzip` | [gzip](https://pkg.go.dev/github.com/klauspost/compress/gzip?tab=doc)
| `compress/zlib` | `github.com/klauspost/compress/zlib` | [zlib](https://pkg.go.dev/github.com/klauspost/compress/zlib?tab=doc)
| `archive/zip` | `github.com/klauspost/compress/zip` | [zip](https://pkg.go.dev/github.com/klauspost/compress/zip?tab=doc)
| `compress/flate` | `github.com/klauspost/compress/flate` | [flate](https://pkg.go.dev/github.com/klauspost/compress/flate?tab=doc)
Typical speed is about 2x of the standard library packages.

| old import       | new import                            | Documentation                                                           |
|------------------|---------------------------------------|-------------------------------------------------------------------------|
| `compress/gzip`  | `github.com/klauspost/compress/gzip`  | [gzip](https://pkg.go.dev/github.com/klauspost/compress/gzip?tab=doc)   |
| `compress/zlib`  | `github.com/klauspost/compress/zlib`  | [zlib](https://pkg.go.dev/github.com/klauspost/compress/zlib?tab=doc)   |
| `archive/zip`    | `github.com/klauspost/compress/zip`   | [zip](https://pkg.go.dev/github.com/klauspost/compress/zip?tab=doc)     |
| `compress/flate` | `github.com/klauspost/compress/flate` | [flate](https://pkg.go.dev/github.com/klauspost/compress/flate?tab=doc) |
* Optimized [deflate](https://godoc.org/github.com/klauspost/compress/flate) packages which can be used as a dropin replacement for [gzip](https://godoc.org/github.com/klauspost/compress/gzip), [zip](https://godoc.org/github.com/klauspost/compress/zip) and [zlib](https://godoc.org/github.com/klauspost/compress/zlib).

@@ -604,84 +653,6 @@ This will only use up to 4KB in memory when the writer is idle.

Compression is almost always worse than the fastest compression level and each write will allocate (a little) memory.

# Performance Update 2018
It has been a while since we have been looking at the speed of this package compared to the standard library, so I thought I would re-do my tests and give some overall recommendations based on the current state. All benchmarks have been performed with Go 1.10 on my Desktop Intel(R) Core(TM) i7-2600 CPU @3.40GHz. Since I last ran the tests, I have gotten more RAM, which means tests with big files are no longer limited by my SSD.

The raw results are in my [updated spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing). Due to cgo changes and upstream updates I could not get the cgo version of gzip to compile. Instead I included the [zstd](https://github.com/datadog/zstd) cgo implementation. If I get cgo gzip to work again, I might replace the results in the sheet.

The columns to take note of are: *MB/s* - the throughput. *Reduction* - the data size reduction in percent of the original. *Rel Speed* - relative speed compared to the standard library at the same level. *Smaller* - how many percent smaller the compressed output is compared to stdlib. Negative means the output was bigger. *Loss* means the loss (or gain) in compression as a percentage difference of the input.

The `gzstd` (standard library gzip) and `gzkp` (this package gzip) use only one CPU core. [`pgzip`](https://github.com/klauspost/pgzip) and [`bgzf`](https://github.com/biogo/hts/tree/master/bgzf) use all 4 cores. [`zstd`](https://github.com/DataDog/zstd) uses one core, and is a beast (but not Go, yet).

## Overall differences.

There appears to be a roughly 5-10% speed advantage over the standard library when comparing at similar compression levels.

The biggest difference you will see is the result of [re-balancing](https://blog.klauspost.com/rebalancing-deflate-compression-levels/) the compression levels. I wanted my library to give a smoother transition between the compression levels than the standard library.

This package attempts to provide a smoother transition, where "1" takes a lot of shortcuts, "5" is the reasonable trade-off and "9" is the "give me the best compression", and the values in between give something reasonable in between. The standard library has big differences in levels 1-4, while levels 5-9 have no significant gains - often spending a lot more time than can be justified by the achieved compression.

There are links to all the test data in the [spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing) in the top left field on each tab.

## Web Content

This test set aims to emulate typical use in a web server. The test set is 4GB of data in 53k files, and is a mixture of (mostly) HTML, JS, CSS.

Since levels 1 and 9 are close to being the same code, they are quite close. But looking at the levels in-between, the differences are quite big.

Looking at level 6, this package is 88% faster, but will output about 6% more data. For a web server, this means you can serve 88% more data, but have to pay for 6% more bandwidth. You can draw your own conclusions on what would be the most expensive for your case.

## Object files

This test is for typical data files stored on a server. In this case it is a collection of Go precompiled objects. They are very compressible.

The picture is similar to the web content, but with small differences since this is very compressible. Levels 2-3 offer good speed, but sacrifice quite a bit of compression.

The standard library seems suboptimal on levels 3 and 4 - offering both worse compression and speed than levels 6 & 7 of this package respectively.

## Highly Compressible File

This is a JSON file with very high redundancy. The reduction starts at 95% on level 1, so in real life terms we are dealing with something like a highly redundant stream of data, etc.

It is definitely visible that we are dealing with specialized content here, so the results are very scattered. This package does not do very well at levels 1-4, but picks up significantly at level 5, with levels 7 and 8 offering great speed for the achieved compression.

So if you know your content is extremely compressible you might want to go slightly higher than the defaults. The standard library has a huge gap between levels 3 and 4 in terms of speed (2.75x slowdown), so it offers little "middle ground".

## Medium-High Compressible

This is a pretty common test corpus: [enwik9](http://mattmahoney.net/dc/textdata.html). It contains the first 10^9 bytes of the English Wikipedia dump on Mar. 3, 2006. This is a very good test of typical text based compression and more data heavy streams.

We see a similar picture here as in "Web Content". On equal levels some compression is sacrificed for more speed. Level 5 seems to be the best trade-off between speed and size, beating stdlib level 3 in both.

## Medium Compressible

I will combine two test sets, one [10GB file set](http://mattmahoney.net/dc/10gb.html) and a VM disk image (~8GB). Both contain different data types and represent a typical backup scenario.

The most notable thing is how quickly the standard library drops to very low compression speeds around level 5-6 without any big gains in compression. Since this type of data is fairly common, this does not seem like good behavior.

## Un-compressible Content

This is mainly a test of how good the algorithms are at detecting un-compressible input. The standard library only offers this feature with very conservative settings at level 1. Obviously there is no reason for the algorithms to try to compress input that cannot be compressed. The only downside is that it might skip some compressible data on false detections.
## Huffman only compression

This compression library adds a special compression level, named `HuffmanOnly`, which allows near linear time compression. This is done by completely disabling matching of previous data, and only reducing the number of bits used to represent each character.

This means that often used characters, like 'e' and ' ' (space) in text, use the fewest bits to represent, and rare characters like '¤' take more bits to represent. For more information see [wikipedia](https://en.wikipedia.org/wiki/Huffman_coding) or this nice [video](https://youtu.be/ZdooBTdW5bM).

Since this type of compression has much less variance, the compression speed is mostly unaffected by the input data, and is usually more than *180MB/s* for a single core.

The downside is that the compression ratio is usually considerably worse than even the fastest conventional compression. The compression ratio can never be better than 8:1 (12.5%).

The linear time compression can be used as a "better than nothing" mode, where you cannot risk the encoder slowing down on some content. For comparison, the size of the "Twain" text is *233460 bytes* (+29% vs. level 1) and encode speed is 144MB/s (4.5x level 1). So in this case you trade a 30% size increase for a 4 times speedup.

For more information see my blog post on [Fast Linear Time Compression](http://blog.klauspost.com/constant-time-gzipzip-compression/).

This is implemented on Go 1.7 as "Huffman Only" mode, though not exposed for gzip.
# Other packages
2 vendor/github.com/klauspost/compress/fse/decompress.go generated vendored
@@ -15,7 +15,7 @@ const (
// It is possible, but by no way guaranteed that corrupt data will
// return an error.
// It is up to the caller to verify integrity of the returned data.
// Use a predefined Scrach to set maximum acceptable output size.
// Use a predefined Scratch to set maximum acceptable output size.
func Decompress(b []byte, s *Scratch) ([]byte, error) {
	s, err := s.prepare(b)
	if err != nil {
25 vendor/github.com/klauspost/compress/huff0/bitreader.go generated vendored
@@ -6,10 +6,11 @@
package huff0

import (
	"encoding/binary"
	"errors"
	"fmt"
	"io"

	"github.com/klauspost/compress/internal/le"
)

// bitReader reads a bitstream in reverse.
@@ -46,7 +47,7 @@ func (b *bitReaderBytes) init(in []byte) error {
	return nil
}

// peekBitsFast requires that at least one bit is requested every time.
// peekByteFast requires that at least one byte is requested every time.
// There are no checks if the buffer is filled.
func (b *bitReaderBytes) peekByteFast() uint8 {
	got := uint8(b.value >> 56)
@@ -66,8 +67,7 @@ func (b *bitReaderBytes) fillFast() {
	}

	// 2 bounds checks.
	v := b.in[b.off-4 : b.off]
	low := (uint32(v[0])) | (uint32(v[1]) << 8) | (uint32(v[2]) << 16) | (uint32(v[3]) << 24)
	low := le.Load32(b.in, b.off-4)
	b.value |= uint64(low) << (b.bitsRead - 32)
	b.bitsRead -= 32
	b.off -= 4
@@ -76,7 +76,7 @@ func (b *bitReaderBytes) fillFast() {
// fillFastStart() assumes the bitReaderBytes is empty and there is at least 8 bytes to read.
func (b *bitReaderBytes) fillFastStart() {
	// Do single re-slice to avoid bounds checks.
	b.value = binary.LittleEndian.Uint64(b.in[b.off-8:])
	b.value = le.Load64(b.in, b.off-8)
	b.bitsRead = 0
	b.off -= 8
}
@@ -86,9 +86,8 @@ func (b *bitReaderBytes) fill() {
	if b.bitsRead < 32 {
		return
	}
	if b.off > 4 {
		v := b.in[b.off-4 : b.off]
		low := (uint32(v[0])) | (uint32(v[1]) << 8) | (uint32(v[2]) << 16) | (uint32(v[3]) << 24)
	if b.off >= 4 {
		low := le.Load32(b.in, b.off-4)
		b.value |= uint64(low) << (b.bitsRead - 32)
		b.bitsRead -= 32
		b.off -= 4
@@ -175,9 +174,7 @@ func (b *bitReaderShifted) fillFast() {
		return
	}

	// 2 bounds checks.
	v := b.in[b.off-4 : b.off]
	low := (uint32(v[0])) | (uint32(v[1]) << 8) | (uint32(v[2]) << 16) | (uint32(v[3]) << 24)
	low := le.Load32(b.in, b.off-4)
	b.value |= uint64(low) << ((b.bitsRead - 32) & 63)
	b.bitsRead -= 32
	b.off -= 4
@@ -185,8 +182,7 @@ func (b *bitReaderShifted) fillFast() {

// fillFastStart() assumes the bitReaderShifted is empty and there is at least 8 bytes to read.
func (b *bitReaderShifted) fillFastStart() {
	// Do single re-slice to avoid bounds checks.
	b.value = binary.LittleEndian.Uint64(b.in[b.off-8:])
	b.value = le.Load64(b.in, b.off-8)
	b.bitsRead = 0
	b.off -= 8
}
@@ -197,8 +193,7 @@ func (b *bitReaderShifted) fill() {
		return
	}
	if b.off > 4 {
		v := b.in[b.off-4 : b.off]
		low := (uint32(v[0])) | (uint32(v[1]) << 8) | (uint32(v[2]) << 16) | (uint32(v[3]) << 24)
		low := le.Load32(b.in, b.off-4)
		b.value |= uint64(low) << ((b.bitsRead - 32) & 63)
		b.bitsRead -= 32
		b.off -= 4
4 vendor/github.com/klauspost/compress/huff0/decompress.go generated vendored
@@ -1136,7 +1136,7 @@ func (s *Scratch) matches(ct cTable, w io.Writer) {
		errs++
	}
	if errs > 0 {
		fmt.Fprintf(w, "%d errros in base, stopping\n", errs)
		fmt.Fprintf(w, "%d errors in base, stopping\n", errs)
		continue
	}
	// Ensure that all combinations are covered.
@@ -1152,7 +1152,7 @@ func (s *Scratch) matches(ct cTable, w io.Writer) {
		errs++
	}
	if errs > 20 {
		fmt.Fprintf(w, "%d errros, stopping\n", errs)
		fmt.Fprintf(w, "%d errors, stopping\n", errs)
		break
	}
}
5 vendor/github.com/klauspost/compress/internal/le/le.go generated vendored Normal file
@@ -0,0 +1,5 @@
package le

type Indexer interface {
	int | int8 | int16 | int32 | int64 | uint | uint8 | uint16 | uint32 | uint64
}
42 vendor/github.com/klauspost/compress/internal/le/unsafe_disabled.go generated vendored Normal file
@@ -0,0 +1,42 @@
//go:build !(amd64 || arm64 || ppc64le || riscv64) || nounsafe || purego || appengine

package le

import (
	"encoding/binary"
)

// Load8 will load from b at index i.
func Load8[I Indexer](b []byte, i I) byte {
	return b[i]
}

// Load16 will load from b at index i.
func Load16[I Indexer](b []byte, i I) uint16 {
	return binary.LittleEndian.Uint16(b[i:])
}

// Load32 will load from b at index i.
func Load32[I Indexer](b []byte, i I) uint32 {
	return binary.LittleEndian.Uint32(b[i:])
}

// Load64 will load from b at index i.
func Load64[I Indexer](b []byte, i I) uint64 {
	return binary.LittleEndian.Uint64(b[i:])
}

// Store16 will store v at b.
func Store16(b []byte, v uint16) {
	binary.LittleEndian.PutUint16(b, v)
}

// Store32 will store v at b.
func Store32(b []byte, v uint32) {
	binary.LittleEndian.PutUint32(b, v)
}

// Store64 will store v at b.
func Store64(b []byte, v uint64) {
	binary.LittleEndian.PutUint64(b, v)
}
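The pure-Go fallback `Load32` above is just a bounds-checked little-endian read. A small stdlib-only sketch (using a local `load32` helper, not the vendored package) showing it is equivalent to the manual byte assembly this release replaces in the bit readers:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// load32 mirrors the fallback Load32: a bounds-checked
// little-endian 4-byte read starting at index i.
func load32(b []byte, i int) uint32 {
	return binary.LittleEndian.Uint32(b[i:])
}

func main() {
	b := []byte{0x01, 0x02, 0x03, 0x04, 0x05}
	// Manual byte assembly, as in the code this refactor removes.
	manual := uint32(b[1]) | uint32(b[2])<<8 | uint32(b[3])<<16 | uint32(b[4])<<24
	fmt.Println(load32(b, 1) == manual) // prints true
}
```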
55 vendor/github.com/klauspost/compress/internal/le/unsafe_enabled.go generated vendored Normal file
@@ -0,0 +1,55 @@
// We enable 64 bit LE platforms:

//go:build (amd64 || arm64 || ppc64le || riscv64) && !nounsafe && !purego && !appengine

package le

import (
	"unsafe"
)

// Load8 will load from b at index i.
func Load8[I Indexer](b []byte, i I) byte {
	//return binary.LittleEndian.Uint16(b[i:])
	//return *(*uint16)(unsafe.Pointer(&b[i]))
	return *(*byte)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

// Load16 will load from b at index i.
func Load16[I Indexer](b []byte, i I) uint16 {
	//return binary.LittleEndian.Uint16(b[i:])
	//return *(*uint16)(unsafe.Pointer(&b[i]))
	return *(*uint16)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

// Load32 will load from b at index i.
func Load32[I Indexer](b []byte, i I) uint32 {
	//return binary.LittleEndian.Uint32(b[i:])
	//return *(*uint32)(unsafe.Pointer(&b[i]))
	return *(*uint32)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

// Load64 will load from b at index i.
func Load64[I Indexer](b []byte, i I) uint64 {
	//return binary.LittleEndian.Uint64(b[i:])
	//return *(*uint64)(unsafe.Pointer(&b[i]))
	return *(*uint64)(unsafe.Add(unsafe.Pointer(unsafe.SliceData(b)), i))
}

// Store16 will store v at b.
func Store16(b []byte, v uint16) {
	//binary.LittleEndian.PutUint16(b, v)
	*(*uint16)(unsafe.Pointer(unsafe.SliceData(b))) = v
}

// Store32 will store v at b.
func Store32(b []byte, v uint32) {
	//binary.LittleEndian.PutUint32(b, v)
	*(*uint32)(unsafe.Pointer(unsafe.SliceData(b))) = v
}

// Store64 will store v at b.
func Store64(b []byte, v uint64) {
	//binary.LittleEndian.PutUint64(b, v)
	*(*uint64)(unsafe.Pointer(unsafe.SliceData(b))) = v
}
3 vendor/github.com/klauspost/compress/s2sx.mod generated vendored
@@ -1,4 +1,3 @@
module github.com/klauspost/compress

go 1.19
go 1.22
2 vendor/github.com/klauspost/compress/zstd/README.md generated vendored
@@ -6,7 +6,7 @@ A high performance compression algorithm is implemented. For now focused on spee

This package provides [compression](#Compressor) to and [decompression](#Decompressor) of Zstandard content.

This package is pure Go and without use of "unsafe".
This package is pure Go. Use `noasm` and `nounsafe` to disable relevant features.

The `zstd` package is provided as open source software using a Go standard license.
37 vendor/github.com/klauspost/compress/zstd/bitreader.go generated vendored
@@ -5,11 +5,12 @@
package zstd

import (
	"encoding/binary"
	"errors"
	"fmt"
	"io"
	"math/bits"

	"github.com/klauspost/compress/internal/le"
)

// bitReader reads a bitstream in reverse.
@@ -18,6 +19,7 @@ import (
type bitReader struct {
	in       []byte
	value    uint64 // Maybe use [16]byte, but shifting is awkward.
	cursor   int    // offset where next read should end
	bitsRead uint8
}

@@ -32,6 +34,7 @@ func (b *bitReader) init(in []byte) error {
	if v == 0 {
		return errors.New("corrupt stream, did not find end of stream")
	}
	b.cursor = len(in)
	b.bitsRead = 64
	b.value = 0
	if len(in) >= 8 {
@@ -67,18 +70,15 @@ func (b *bitReader) fillFast() {
	if b.bitsRead < 32 {
		return
	}
	v := b.in[len(b.in)-4:]
	b.in = b.in[:len(b.in)-4]
	low := (uint32(v[0])) | (uint32(v[1]) << 8) | (uint32(v[2]) << 16) | (uint32(v[3]) << 24)
	b.value = (b.value << 32) | uint64(low)
	b.cursor -= 4
	b.value = (b.value << 32) | uint64(le.Load32(b.in, b.cursor))
	b.bitsRead -= 32
}

// fillFastStart() assumes the bitreader is empty and there is at least 8 bytes to read.
func (b *bitReader) fillFastStart() {
	v := b.in[len(b.in)-8:]
	b.in = b.in[:len(b.in)-8]
	b.value = binary.LittleEndian.Uint64(v)
	b.cursor -= 8
	b.value = le.Load64(b.in, b.cursor)
	b.bitsRead = 0
}

@@ -87,25 +87,23 @@ func (b *bitReader) fill() {
	if b.bitsRead < 32 {
		return
	}
	if len(b.in) >= 4 {
		v := b.in[len(b.in)-4:]
		b.in = b.in[:len(b.in)-4]
		low := (uint32(v[0])) | (uint32(v[1]) << 8) | (uint32(v[2]) << 16) | (uint32(v[3]) << 24)
		b.value = (b.value << 32) | uint64(low)
	if b.cursor >= 4 {
		b.cursor -= 4
		b.value = (b.value << 32) | uint64(le.Load32(b.in, b.cursor))
		b.bitsRead -= 32
		return
	}

	b.bitsRead -= uint8(8 * len(b.in))
	for len(b.in) > 0 {
		b.value = (b.value << 8) | uint64(b.in[len(b.in)-1])
		b.in = b.in[:len(b.in)-1]
	b.bitsRead -= uint8(8 * b.cursor)
	for b.cursor > 0 {
		b.cursor -= 1
		b.value = (b.value << 8) | uint64(b.in[b.cursor])
	}
}

// finished returns true if all bits have been read from the bit stream.
func (b *bitReader) finished() bool {
	return len(b.in) == 0 && b.bitsRead >= 64
	return b.cursor == 0 && b.bitsRead >= 64
}

// overread returns true if more bits have been requested than is on the stream.
@@ -115,13 +113,14 @@ func (b *bitReader) overread() bool {

// remain returns the number of bits remaining.
func (b *bitReader) remain() uint {
	return 8*uint(len(b.in)) + 64 - uint(b.bitsRead)
	return 8*uint(b.cursor) + 64 - uint(b.bitsRead)
}

// close the bitstream and returns an error if out-of-buffer reads occurred.
func (b *bitReader) close() error {
	// Release reference.
	b.in = nil
	b.cursor = 0
	if !b.finished() {
		return fmt.Errorf("%d extra bits on block, should be 0", b.remain())
	}
23 vendor/github.com/klauspost/compress/zstd/blockdec.go generated vendored
@@ -5,14 +5,10 @@
package zstd

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"hash/crc32"
	"io"
	"os"
	"path/filepath"
	"sync"

	"github.com/klauspost/compress/huff0"
@@ -598,7 +594,9 @@ func (b *blockDec) prepareSequences(in []byte, hist *history) (err error) {
			printf("RLE set to 0x%x, code: %v", symb, v)
		}
	case compModeFSE:
		println("Reading table for", tableIndex(i))
		if debugDecoder {
			println("Reading table for", tableIndex(i))
		}
		if seq.fse == nil || seq.fse.preDefined {
			seq.fse = fseDecoderPool.Get().(*fseDecoder)
		}
@@ -646,21 +644,6 @@ func (b *blockDec) prepareSequences(in []byte, hist *history) (err error) {
		println("initializing sequences:", err)
		return err
	}
	// Extract blocks...
	if false && hist.dict == nil {
		fatalErr := func(err error) {
			if err != nil {
				panic(err)
			}
		}
		fn := fmt.Sprintf("n-%d-lits-%d-prev-%d-%d-%d-win-%d.blk", hist.decoders.nSeqs, len(hist.decoders.literals), hist.recentOffsets[0], hist.recentOffsets[1], hist.recentOffsets[2], hist.windowSize)
		var buf bytes.Buffer
		fatalErr(binary.Write(&buf, binary.LittleEndian, hist.decoders.litLengths.fse))
		fatalErr(binary.Write(&buf, binary.LittleEndian, hist.decoders.matchLengths.fse))
		fatalErr(binary.Write(&buf, binary.LittleEndian, hist.decoders.offsets.fse))
		buf.Write(in)
		os.WriteFile(filepath.Join("testdata", "seqs", fn), buf.Bytes(), os.ModePerm)
	}

	return nil
}
27 vendor/github.com/klauspost/compress/zstd/blockenc.go generated vendored
@@ -9,6 +9,7 @@ import (
	"fmt"
	"math"
	"math/bits"
	"slices"

	"github.com/klauspost/compress/huff0"
)
@@ -457,16 +458,7 @@ func fuzzFseEncoder(data []byte) int {
		// All 0
		return 0
	}
	maxCount := func(a []uint32) int {
		var max uint32
		for _, v := range a {
			if v > max {
				max = v
			}
		}
		return int(max)
	}
	cnt := maxCount(hist[:maxSym])
	cnt := int(slices.Max(hist[:maxSym]))
	if cnt == len(data) {
		// RLE
		return 0
@@ -884,15 +876,6 @@ func (b *blockEnc) genCodes() {
			}
		}
	}
	maxCount := func(a []uint32) int {
		var max uint32
		for _, v := range a {
			if v > max {
				max = v
			}
		}
		return int(max)
	}
	if debugAsserts && mlMax > maxMatchLengthSymbol {
		panic(fmt.Errorf("mlMax > maxMatchLengthSymbol (%d)", mlMax))
	}
@@ -903,7 +886,7 @@ func (b *blockEnc) genCodes() {
		panic(fmt.Errorf("llMax > maxLiteralLengthSymbol (%d)", llMax))
	}

	b.coders.mlEnc.HistogramFinished(mlMax, maxCount(mlH[:mlMax+1]))
	b.coders.ofEnc.HistogramFinished(ofMax, maxCount(ofH[:ofMax+1]))
	b.coders.llEnc.HistogramFinished(llMax, maxCount(llH[:llMax+1]))
	b.coders.mlEnc.HistogramFinished(mlMax, int(slices.Max(mlH[:mlMax+1])))
	b.coders.ofEnc.HistogramFinished(ofMax, int(slices.Max(ofH[:ofMax+1])))
	b.coders.llEnc.HistogramFinished(llMax, int(slices.Max(llH[:llMax+1])))
}
3 vendor/github.com/klauspost/compress/zstd/decoder.go generated vendored
@@ -123,7 +123,7 @@ func NewReader(r io.Reader, opts ...DOption) (*Decoder, error) {
}

// Read bytes from the decompressed stream into p.
// Returns the number of bytes written and any error that occurred.
// Returns the number of bytes read and any error that occurred.
// When the stream is done, io.EOF will be returned.
func (d *Decoder) Read(p []byte) (int, error) {
	var n int
@@ -323,6 +323,7 @@ func (d *Decoder) DecodeAll(input, dst []byte) ([]byte, error) {
		frame.bBuf = nil
		if frame.history.decoders.br != nil {
			frame.history.decoders.br.in = nil
			frame.history.decoders.br.cursor = 0
		}
		d.decoders <- block
	}()
2 vendor/github.com/klauspost/compress/zstd/enc_base.go generated vendored
@@ -116,7 +116,7 @@ func (e *fastBase) matchlen(s, t int32, src []byte) int32 {
		panic(err)
	}
	if t < 0 {
		err := fmt.Sprintf("s (%d) < 0", s)
		err := fmt.Sprintf("t (%d) < 0", t)
		panic(err)
	}
	if s-t > e.maxMatchOff {
32 vendor/github.com/klauspost/compress/zstd/enc_better.go generated vendored
@@ -179,9 +179,9 @@ encodeLoop:
		if repIndex >= 0 && load3232(src, repIndex) == uint32(cv>>(repOff*8)) {
			// Consider history as well.
			var seq seq
			lenght := 4 + e.matchlen(s+4+repOff, repIndex+4, src)
			length := 4 + e.matchlen(s+4+repOff, repIndex+4, src)

			seq.matchLen = uint32(lenght - zstdMinMatch)
			seq.matchLen = uint32(length - zstdMinMatch)

			// We might be able to match backwards.
			// Extend as long as we can.
@@ -210,12 +210,12 @@ encodeLoop:

			// Index match start+1 (long) -> s - 1
			index0 := s + repOff
			s += lenght + repOff
			s += length + repOff

			nextEmit = s
			if s >= sLimit {
				if debugEncoder {
					println("repeat ended", s, lenght)
					println("repeat ended", s, length)
				}
				break encodeLoop
@@ -241,9 +241,9 @@ encodeLoop:
		if false && repIndex >= 0 && load6432(src, repIndex) == load6432(src, s+repOff) {
			// Consider history as well.
			var seq seq
			lenght := 8 + e.matchlen(s+8+repOff2, repIndex+8, src)
			length := 8 + e.matchlen(s+8+repOff2, repIndex+8, src)

			seq.matchLen = uint32(lenght - zstdMinMatch)
			seq.matchLen = uint32(length - zstdMinMatch)

			// We might be able to match backwards.
			// Extend as long as we can.
@@ -270,11 +270,11 @@ encodeLoop:
			}
			blk.sequences = append(blk.sequences, seq)

			s += lenght + repOff2
			s += length + repOff2
			nextEmit = s
			if s >= sLimit {
				if debugEncoder {
					println("repeat ended", s, lenght)
					println("repeat ended", s, length)
				}
				break encodeLoop
@@ -708,9 +708,9 @@ encodeLoop:
		if repIndex >= 0 && load3232(src, repIndex) == uint32(cv>>(repOff*8)) {
			// Consider history as well.
			var seq seq
			lenght := 4 + e.matchlen(s+4+repOff, repIndex+4, src)
			length := 4 + e.matchlen(s+4+repOff, repIndex+4, src)

			seq.matchLen = uint32(lenght - zstdMinMatch)
			seq.matchLen = uint32(length - zstdMinMatch)

			// We might be able to match backwards.
			// Extend as long as we can.
@@ -738,12 +738,12 @@ encodeLoop:
			blk.sequences = append(blk.sequences, seq)

			// Index match start+1 (long) -> s - 1
			s += lenght + repOff
			s += length + repOff

			nextEmit = s
			if s >= sLimit {
				if debugEncoder {
					println("repeat ended", s, lenght)
					println("repeat ended", s, length)
				}
				break encodeLoop
@@ -772,9 +772,9 @@ encodeLoop:
		if false && repIndex >= 0 && load6432(src, repIndex) == load6432(src, s+repOff) {
			// Consider history as well.
			var seq seq
			lenght := 8 + e.matchlen(s+8+repOff2, repIndex+8, src)
			length := 8 + e.matchlen(s+8+repOff2, repIndex+8, src)

			seq.matchLen = uint32(lenght - zstdMinMatch)
			seq.matchLen = uint32(length - zstdMinMatch)

			// We might be able to match backwards.
			// Extend as long as we can.
@@ -801,11 +801,11 @@ encodeLoop:
			}
			blk.sequences = append(blk.sequences, seq)

			s += lenght + repOff2
			s += length + repOff2
			nextEmit = s
			if s >= sLimit {
				if debugEncoder {
					println("repeat ended", s, lenght)
					println("repeat ended", s, length)
				}
				break encodeLoop
16
vendor/github.com/klauspost/compress/zstd/enc_dfast.go
generated
vendored
@@ -138,9 +138,9 @@ encodeLoop:
 		if repIndex >= 0 && load3232(src, repIndex) == uint32(cv>>(repOff*8)) {
 			// Consider history as well.
 			var seq seq
-			lenght := 4 + e.matchlen(s+4+repOff, repIndex+4, src)
+			length := 4 + e.matchlen(s+4+repOff, repIndex+4, src)

-			seq.matchLen = uint32(lenght - zstdMinMatch)
+			seq.matchLen = uint32(length - zstdMinMatch)

 			// We might be able to match backwards.
 			// Extend as long as we can.
@@ -166,11 +166,11 @@ encodeLoop:
 				println("repeat sequence", seq, "next s:", s)
 			}
 			blk.sequences = append(blk.sequences, seq)
-			s += lenght + repOff
+			s += length + repOff
 			nextEmit = s
 			if s >= sLimit {
 				if debugEncoder {
-					println("repeat ended", s, lenght)
+					println("repeat ended", s, length)

 				}
 				break encodeLoop
@@ -798,9 +798,9 @@ encodeLoop:
 		if repIndex >= 0 && load3232(src, repIndex) == uint32(cv>>(repOff*8)) {
 			// Consider history as well.
 			var seq seq
-			lenght := 4 + e.matchlen(s+4+repOff, repIndex+4, src)
+			length := 4 + e.matchlen(s+4+repOff, repIndex+4, src)

-			seq.matchLen = uint32(lenght - zstdMinMatch)
+			seq.matchLen = uint32(length - zstdMinMatch)

 			// We might be able to match backwards.
 			// Extend as long as we can.
@@ -826,11 +826,11 @@ encodeLoop:
 				println("repeat sequence", seq, "next s:", s)
 			}
 			blk.sequences = append(blk.sequences, seq)
-			s += lenght + repOff
+			s += length + repOff
 			nextEmit = s
 			if s >= sLimit {
 				if debugEncoder {
-					println("repeat ended", s, lenght)
+					println("repeat ended", s, length)

 				}
 				break encodeLoop
45
vendor/github.com/klauspost/compress/zstd/encoder.go
generated
vendored
@@ -6,6 +6,7 @@ package zstd

 import (
 	"crypto/rand"
+	"errors"
 	"fmt"
 	"io"
 	"math"
@@ -149,6 +150,9 @@ func (e *Encoder) ResetContentSize(w io.Writer, size int64) {
 // and write CRC if requested.
 func (e *Encoder) Write(p []byte) (n int, err error) {
 	s := &e.state
+	if s.eofWritten {
+		return 0, ErrEncoderClosed
+	}
 	for len(p) > 0 {
 		if len(p)+len(s.filling) < e.o.blockSize {
 			if e.o.crc {
@@ -202,7 +206,7 @@ func (e *Encoder) nextBlock(final bool) error {
 		return nil
 	}
 	if final && len(s.filling) > 0 {
-		s.current = e.EncodeAll(s.filling, s.current[:0])
+		s.current = e.encodeAll(s.encoder, s.filling, s.current[:0])
 		var n2 int
 		n2, s.err = s.w.Write(s.current)
 		if s.err != nil {
@@ -288,6 +292,9 @@ func (e *Encoder) nextBlock(final bool) error {
 	s.filling, s.current, s.previous = s.previous[:0], s.filling, s.current
 	s.nInput += int64(len(s.current))
 	s.wg.Add(1)
+	if final {
+		s.eofWritten = true
+	}
 	go func(src []byte) {
 		if debugEncoder {
 			println("Adding block,", len(src), "bytes, final:", final)
@@ -303,9 +310,6 @@ func (e *Encoder) nextBlock(final bool) error {
 		blk := enc.Block()
 		enc.Encode(blk, src)
 		blk.last = final
-		if final {
-			s.eofWritten = true
-		}
 		// Wait for pending writes.
 		s.wWg.Wait()
 		if s.writeErr != nil {
@@ -401,12 +405,20 @@ func (e *Encoder) Flush() error {
 	if len(s.filling) > 0 {
 		err := e.nextBlock(false)
 		if err != nil {
+			// Ignore Flush after Close.
+			if errors.Is(s.err, ErrEncoderClosed) {
+				return nil
+			}
 			return err
 		}
 	}
 	s.wg.Wait()
 	s.wWg.Wait()
 	if s.err != nil {
+		// Ignore Flush after Close.
+		if errors.Is(s.err, ErrEncoderClosed) {
+			return nil
+		}
 		return s.err
 	}
 	return s.writeErr
@@ -422,6 +434,9 @@ func (e *Encoder) Close() error {
 	}
 	err := e.nextBlock(true)
 	if err != nil {
+		if errors.Is(s.err, ErrEncoderClosed) {
+			return nil
+		}
 		return err
 	}
 	if s.frameContentSize > 0 {
@@ -459,6 +474,11 @@ func (e *Encoder) Close() error {
 		}
 		_, s.err = s.w.Write(frame)
 	}
+	if s.err == nil {
+		s.err = ErrEncoderClosed
+		return nil
+	}
+
 	return s.err
 }
@@ -469,6 +489,15 @@ func (e *Encoder) Close() error {
 // Data compressed with EncodeAll can be decoded with the Decoder,
 // using either a stream or DecodeAll.
 func (e *Encoder) EncodeAll(src, dst []byte) []byte {
+	e.init.Do(e.initialize)
+	enc := <-e.encoders
+	defer func() {
+		e.encoders <- enc
+	}()
+	return e.encodeAll(enc, src, dst)
+}
+
+func (e *Encoder) encodeAll(enc encoder, src, dst []byte) []byte {
 	if len(src) == 0 {
 		if e.o.fullZero {
 			// Add frame header.
@@ -491,13 +520,7 @@ func (e *Encoder) EncodeAll(src, dst []byte) []byte {
 		}
 		return dst
 	}
-	e.init.Do(e.initialize)
-	enc := <-e.encoders
-	defer func() {
-		// Release encoder reference to last block.
-		// If a non-single block is needed the encoder will reset again.
-		e.encoders <- enc
-	}()

 	// Use single segments when above minimum window and below window size.
 	single := len(src) <= e.o.windowSize && len(src) > MinWindowSize
 	if e.o.single != nil {
4
vendor/github.com/klauspost/compress/zstd/framedec.go
generated
vendored
@@ -146,7 +146,9 @@ func (d *frameDec) reset(br byteBuffer) error {
 			}
 			return err
 		}
-		printf("raw: %x, mantissa: %d, exponent: %d\n", wd, wd&7, wd>>3)
+		if debugDecoder {
+			printf("raw: %x, mantissa: %d, exponent: %d\n", wd, wd&7, wd>>3)
+		}
 		windowLog := 10 + (wd >> 3)
 		windowBase := uint64(1) << windowLog
 		windowAdd := (windowBase / 8) * uint64(wd&0x7)
11
vendor/github.com/klauspost/compress/zstd/matchlen_generic.go
generated
vendored
@@ -7,20 +7,25 @@
 package zstd

 import (
-	"encoding/binary"
 	"math/bits"
+
+	"github.com/klauspost/compress/internal/le"
 )

 // matchLen returns the maximum common prefix length of a and b.
 // a must be the shortest of the two.
 func matchLen(a, b []byte) (n int) {
-	for ; len(a) >= 8 && len(b) >= 8; a, b = a[8:], b[8:] {
-		diff := binary.LittleEndian.Uint64(a) ^ binary.LittleEndian.Uint64(b)
+	left := len(a)
+	for left >= 8 {
+		diff := le.Load64(a, n) ^ le.Load64(b, n)
 		if diff != 0 {
 			return n + bits.TrailingZeros64(diff)>>3
 		}
 		n += 8
+		left -= 8
 	}
+	a = a[n:]
+	b = b[n:]

 	for i := range a {
 		if a[i] != b[i] {
2
vendor/github.com/klauspost/compress/zstd/seqdec.go
generated
vendored
@@ -245,7 +245,7 @@ func (s *sequenceDecs) decodeSync(hist []byte) error {
 			return io.ErrUnexpectedEOF
 		}
 		var ll, mo, ml int
-		if len(br.in) > 4+((maxOffsetBits+16+16)>>3) {
+		if br.cursor > 4+((maxOffsetBits+16+16)>>3) {
 			// inlined function:
 			// ll, mo, ml = s.nextFast(br, llState, mlState, ofState)
4
vendor/github.com/klauspost/compress/zstd/seqdec_amd64.go
generated
vendored
@@ -146,7 +146,7 @@ func (s *sequenceDecs) decodeSyncSimple(hist []byte) (bool, error) {
 		return true, fmt.Errorf("output bigger than max block size (%d)", maxBlockSize)

 	default:
-		return true, fmt.Errorf("sequenceDecs_decode returned erronous code %d", errCode)
+		return true, fmt.Errorf("sequenceDecs_decode returned erroneous code %d", errCode)
 	}

 	s.seqSize += ctx.litRemain
@@ -292,7 +292,7 @@ func (s *sequenceDecs) decode(seqs []seqVals) error {
 			return io.ErrUnexpectedEOF
 		}

-		return fmt.Errorf("sequenceDecs_decode_amd64 returned erronous code %d", errCode)
+		return fmt.Errorf("sequenceDecs_decode_amd64 returned erroneous code %d", errCode)
 	}

 	if ctx.litRemain < 0 {
72
vendor/github.com/klauspost/compress/zstd/seqdec_amd64.s
generated
vendored
@@ -7,9 +7,9 @@
 TEXT ·sequenceDecs_decode_amd64(SB), $8-32
 	MOVQ br+8(FP), CX
 	MOVQ 24(CX), DX
-	MOVBQZX 32(CX), BX
+	MOVBQZX 40(CX), BX
 	MOVQ (CX), AX
-	MOVQ 8(CX), SI
+	MOVQ 32(CX), SI
 	ADDQ SI, AX
 	MOVQ AX, (SP)
 	MOVQ ctx+16(FP), AX
@@ -299,8 +299,8 @@ sequenceDecs_decode_amd64_match_len_ofs_ok:
 	MOVQ R13, 160(AX)
 	MOVQ br+8(FP), AX
 	MOVQ DX, 24(AX)
-	MOVB BL, 32(AX)
-	MOVQ SI, 8(AX)
+	MOVB BL, 40(AX)
+	MOVQ SI, 32(AX)

 	// Return success
 	MOVQ $0x00000000, ret+24(FP)
@@ -335,9 +335,9 @@ error_overread:
 TEXT ·sequenceDecs_decode_56_amd64(SB), $8-32
 	MOVQ br+8(FP), CX
 	MOVQ 24(CX), DX
-	MOVBQZX 32(CX), BX
+	MOVBQZX 40(CX), BX
 	MOVQ (CX), AX
-	MOVQ 8(CX), SI
+	MOVQ 32(CX), SI
 	ADDQ SI, AX
 	MOVQ AX, (SP)
 	MOVQ ctx+16(FP), AX
@@ -598,8 +598,8 @@ sequenceDecs_decode_56_amd64_match_len_ofs_ok:
 	MOVQ R13, 160(AX)
 	MOVQ br+8(FP), AX
 	MOVQ DX, 24(AX)
-	MOVB BL, 32(AX)
-	MOVQ SI, 8(AX)
+	MOVB BL, 40(AX)
+	MOVQ SI, 32(AX)

 	// Return success
 	MOVQ $0x00000000, ret+24(FP)
@@ -634,9 +634,9 @@ error_overread:
 TEXT ·sequenceDecs_decode_bmi2(SB), $8-32
 	MOVQ br+8(FP), BX
 	MOVQ 24(BX), AX
-	MOVBQZX 32(BX), DX
+	MOVBQZX 40(BX), DX
 	MOVQ (BX), CX
-	MOVQ 8(BX), BX
+	MOVQ 32(BX), BX
 	ADDQ BX, CX
 	MOVQ CX, (SP)
 	MOVQ ctx+16(FP), CX
@@ -884,8 +884,8 @@ sequenceDecs_decode_bmi2_match_len_ofs_ok:
 	MOVQ R12, 160(CX)
 	MOVQ br+8(FP), CX
 	MOVQ AX, 24(CX)
-	MOVB DL, 32(CX)
-	MOVQ BX, 8(CX)
+	MOVB DL, 40(CX)
+	MOVQ BX, 32(CX)

 	// Return success
 	MOVQ $0x00000000, ret+24(FP)
@@ -920,9 +920,9 @@ error_overread:
 TEXT ·sequenceDecs_decode_56_bmi2(SB), $8-32
 	MOVQ br+8(FP), BX
 	MOVQ 24(BX), AX
-	MOVBQZX 32(BX), DX
+	MOVBQZX 40(BX), DX
 	MOVQ (BX), CX
-	MOVQ 8(BX), BX
+	MOVQ 32(BX), BX
 	ADDQ BX, CX
 	MOVQ CX, (SP)
 	MOVQ ctx+16(FP), CX
@@ -1141,8 +1141,8 @@ sequenceDecs_decode_56_bmi2_match_len_ofs_ok:
 	MOVQ R12, 160(CX)
 	MOVQ br+8(FP), CX
 	MOVQ AX, 24(CX)
-	MOVB DL, 32(CX)
-	MOVQ BX, 8(CX)
+	MOVB DL, 40(CX)
+	MOVQ BX, 32(CX)

 	// Return success
 	MOVQ $0x00000000, ret+24(FP)
@@ -1787,9 +1787,9 @@ empty_seqs:
 TEXT ·sequenceDecs_decodeSync_amd64(SB), $64-32
 	MOVQ br+8(FP), CX
 	MOVQ 24(CX), DX
-	MOVBQZX 32(CX), BX
+	MOVBQZX 40(CX), BX
 	MOVQ (CX), AX
-	MOVQ 8(CX), SI
+	MOVQ 32(CX), SI
 	ADDQ SI, AX
 	MOVQ AX, (SP)
 	MOVQ ctx+16(FP), AX
@@ -1814,7 +1814,7 @@ TEXT ·sequenceDecs_decodeSync_amd64(SB), $64-32
 	MOVQ 40(SP), AX
 	ADDQ AX, 48(SP)

-	// Calculate poiter to s.out[cap(s.out)] (a past-end pointer)
+	// Calculate pointer to s.out[cap(s.out)] (a past-end pointer)
 	ADDQ R10, 32(SP)

 	// outBase += outPosition
@@ -2281,8 +2281,8 @@ handle_loop:
 loop_finished:
 	MOVQ br+8(FP), AX
 	MOVQ DX, 24(AX)
-	MOVB BL, 32(AX)
-	MOVQ SI, 8(AX)
+	MOVB BL, 40(AX)
+	MOVQ SI, 32(AX)

 	// Update the context
 	MOVQ ctx+16(FP), AX
@@ -2349,9 +2349,9 @@ error_not_enough_space:
 TEXT ·sequenceDecs_decodeSync_bmi2(SB), $64-32
 	MOVQ br+8(FP), BX
 	MOVQ 24(BX), AX
-	MOVBQZX 32(BX), DX
+	MOVBQZX 40(BX), DX
 	MOVQ (BX), CX
-	MOVQ 8(BX), BX
+	MOVQ 32(BX), BX
 	ADDQ BX, CX
 	MOVQ CX, (SP)
 	MOVQ ctx+16(FP), CX
@@ -2376,7 +2376,7 @@ TEXT ·sequenceDecs_decodeSync_bmi2(SB), $64-32
 	MOVQ 40(SP), CX
 	ADDQ CX, 48(SP)

-	// Calculate poiter to s.out[cap(s.out)] (a past-end pointer)
+	// Calculate pointer to s.out[cap(s.out)] (a past-end pointer)
 	ADDQ R9, 32(SP)

 	// outBase += outPosition
@@ -2801,8 +2801,8 @@ handle_loop:
 loop_finished:
 	MOVQ br+8(FP), CX
 	MOVQ AX, 24(CX)
-	MOVB DL, 32(CX)
-	MOVQ BX, 8(CX)
+	MOVB DL, 40(CX)
+	MOVQ BX, 32(CX)

 	// Update the context
 	MOVQ ctx+16(FP), AX
@@ -2869,9 +2869,9 @@ error_not_enough_space:
 TEXT ·sequenceDecs_decodeSync_safe_amd64(SB), $64-32
 	MOVQ br+8(FP), CX
 	MOVQ 24(CX), DX
-	MOVBQZX 32(CX), BX
+	MOVBQZX 40(CX), BX
 	MOVQ (CX), AX
-	MOVQ 8(CX), SI
+	MOVQ 32(CX), SI
 	ADDQ SI, AX
 	MOVQ AX, (SP)
 	MOVQ ctx+16(FP), AX
@@ -2896,7 +2896,7 @@ TEXT ·sequenceDecs_decodeSync_safe_amd64(SB), $64-32
 	MOVQ 40(SP), AX
 	ADDQ AX, 48(SP)

-	// Calculate poiter to s.out[cap(s.out)] (a past-end pointer)
+	// Calculate pointer to s.out[cap(s.out)] (a past-end pointer)
 	ADDQ R10, 32(SP)

 	// outBase += outPosition
@@ -3465,8 +3465,8 @@ handle_loop:
 loop_finished:
 	MOVQ br+8(FP), AX
 	MOVQ DX, 24(AX)
-	MOVB BL, 32(AX)
-	MOVQ SI, 8(AX)
+	MOVB BL, 40(AX)
+	MOVQ SI, 32(AX)

 	// Update the context
 	MOVQ ctx+16(FP), AX
@@ -3533,9 +3533,9 @@ error_not_enough_space:
 TEXT ·sequenceDecs_decodeSync_safe_bmi2(SB), $64-32
 	MOVQ br+8(FP), BX
 	MOVQ 24(BX), AX
-	MOVBQZX 32(BX), DX
+	MOVBQZX 40(BX), DX
 	MOVQ (BX), CX
-	MOVQ 8(BX), BX
+	MOVQ 32(BX), BX
 	ADDQ BX, CX
 	MOVQ CX, (SP)
 	MOVQ ctx+16(FP), CX
@@ -3560,7 +3560,7 @@ TEXT ·sequenceDecs_decodeSync_safe_bmi2(SB), $64-32
 	MOVQ 40(SP), CX
 	ADDQ CX, 48(SP)

-	// Calculate poiter to s.out[cap(s.out)] (a past-end pointer)
+	// Calculate pointer to s.out[cap(s.out)] (a past-end pointer)
 	ADDQ R9, 32(SP)

 	// outBase += outPosition
@@ -4087,8 +4087,8 @@ handle_loop:
 loop_finished:
 	MOVQ br+8(FP), CX
 	MOVQ AX, 24(CX)
-	MOVB DL, 32(CX)
-	MOVQ BX, 8(CX)
+	MOVB DL, 40(CX)
+	MOVQ BX, 32(CX)

 	// Update the context
 	MOVQ ctx+16(FP), AX
2
vendor/github.com/klauspost/compress/zstd/seqdec_generic.go
generated
vendored
@@ -29,7 +29,7 @@ func (s *sequenceDecs) decode(seqs []seqVals) error {
 	}
 	for i := range seqs {
 		var ll, mo, ml int
-		if len(br.in) > 4+((maxOffsetBits+16+16)>>3) {
+		if br.cursor > 4+((maxOffsetBits+16+16)>>3) {
 			// inlined function:
 			// ll, mo, ml = s.nextFast(br, llState, mlState, ofState)
2
vendor/github.com/klauspost/compress/zstd/seqenc.go
generated
vendored
@@ -69,7 +69,6 @@ var llBitsTable = [maxLLCode + 1]byte{
 func llCode(litLength uint32) uint8 {
 	const llDeltaCode = 19
 	if litLength <= 63 {
-		// Compiler insists on bounds check (Go 1.12)
 		return llCodeTable[litLength&63]
 	}
 	return uint8(highBit(litLength)) + llDeltaCode
@@ -102,7 +101,6 @@ var mlBitsTable = [maxMLCode + 1]byte{
 func mlCode(mlBase uint32) uint8 {
 	const mlDeltaCode = 36
 	if mlBase <= 127 {
-		// Compiler insists on bounds check (Go 1.12)
 		return mlCodeTable[mlBase&127]
 	}
 	return uint8(highBit(mlBase)) + mlDeltaCode
4
vendor/github.com/klauspost/compress/zstd/snappy.go
generated
vendored
@@ -197,7 +197,7 @@ func (r *SnappyConverter) Convert(in io.Reader, w io.Writer) (int64, error) {

 		n, r.err = w.Write(r.block.output)
 		if r.err != nil {
-			return written, err
+			return written, r.err
 		}
 		written += int64(n)
 		continue
@@ -239,7 +239,7 @@ func (r *SnappyConverter) Convert(in io.Reader, w io.Writer) (int64, error) {
 		}
 		n, r.err = w.Write(r.block.output)
 		if r.err != nil {
-			return written, err
+			return written, r.err
 		}
 		written += int64(n)
 		continue
11
vendor/github.com/klauspost/compress/zstd/zstd.go
generated
vendored
@@ -5,10 +5,11 @@ package zstd

 import (
 	"bytes"
-	"encoding/binary"
 	"errors"
 	"log"
 	"math"
+
+	"github.com/klauspost/compress/internal/le"
 )

 // enable debug printing
@@ -88,6 +89,10 @@ var (
 	// Close has been called.
 	ErrDecoderClosed = errors.New("decoder used after Close")

+	// ErrEncoderClosed will be returned if the Encoder was used after
+	// Close has been called.
+	ErrEncoderClosed = errors.New("encoder used after Close")
+
 	// ErrDecoderNilInput is returned when a nil Reader was provided
 	// and an operation other than Reset/DecodeAll/Close was attempted.
 	ErrDecoderNilInput = errors.New("nil input provided as reader")
@@ -106,11 +111,11 @@ func printf(format string, a ...interface{}) {
 }

 func load3232(b []byte, i int32) uint32 {
-	return binary.LittleEndian.Uint32(b[:len(b):len(b)][i:])
+	return le.Load32(b, i)
 }

 func load6432(b []byte, i int32) uint64 {
-	return binary.LittleEndian.Uint64(b[:len(b):len(b)][i:])
+	return le.Load64(b, i)
 }

 type byter interface {
962
vendor/github.com/open-policy-agent/opa/ast/annotations.go
generated
vendored
@@ -5,973 +5,33 @@
 package ast

 import (
-	"encoding/json"
-	"fmt"
-	"net/url"
-	"sort"
-	"strings"
-
-	astJSON "github.com/open-policy-agent/opa/ast/json"
-	"github.com/open-policy-agent/opa/internal/deepcopy"
-	"github.com/open-policy-agent/opa/util"
-)
-
-const (
-	annotationScopePackage     = "package"
-	annotationScopeImport      = "import"
-	annotationScopeRule        = "rule"
-	annotationScopeDocument    = "document"
-	annotationScopeSubpackages = "subpackages"
+	v1 "github.com/open-policy-agent/opa/v1/ast"
 )

 type (
 	// Annotations represents metadata attached to other AST nodes such as rules.
-	Annotations struct {
-		Scope            string                       `json:"scope"`
-		Title            string                       `json:"title,omitempty"`
-		Entrypoint       bool                         `json:"entrypoint,omitempty"`
-		Description      string                       `json:"description,omitempty"`
-		Organizations    []string                     `json:"organizations,omitempty"`
-		RelatedResources []*RelatedResourceAnnotation `json:"related_resources,omitempty"`
-		Authors          []*AuthorAnnotation          `json:"authors,omitempty"`
-		Schemas          []*SchemaAnnotation          `json:"schemas,omitempty"`
-		Custom           map[string]interface{}       `json:"custom,omitempty"`
-		Location         *Location                    `json:"location,omitempty"`
-
-		comments    []*Comment
-		node        Node
-		jsonOptions astJSON.Options
-	}
+	Annotations = v1.Annotations

 	// SchemaAnnotation contains a schema declaration for the document identified by the path.
-	SchemaAnnotation struct {
-		Path       Ref          `json:"path"`
-		Schema     Ref          `json:"schema,omitempty"`
-		Definition *interface{} `json:"definition,omitempty"`
-	}
+	SchemaAnnotation = v1.SchemaAnnotation

-	AuthorAnnotation struct {
-		Name  string `json:"name"`
-		Email string `json:"email,omitempty"`
-	}
+	AuthorAnnotation = v1.AuthorAnnotation

-	RelatedResourceAnnotation struct {
-		Ref         url.URL `json:"ref"`
-		Description string  `json:"description,omitempty"`
-	}
+	RelatedResourceAnnotation = v1.RelatedResourceAnnotation

-	AnnotationSet struct {
-		byRule    map[*Rule][]*Annotations
-		byPackage map[int]*Annotations
-		byPath    *annotationTreeNode
-		modules   []*Module // Modules this set was constructed from
-	}
+	AnnotationSet = v1.AnnotationSet

-	annotationTreeNode struct {
-		Value    *Annotations
-		Children map[Value]*annotationTreeNode // we assume key elements are hashable (vars and strings only!)
-	}
+	AnnotationsRef = v1.AnnotationsRef

-	AnnotationsRef struct {
-		Path        Ref          `json:"path"` // The path of the node the annotations are applied to
-		Annotations *Annotations `json:"annotations,omitempty"`
-		Location    *Location    `json:"location,omitempty"` // The location of the node the annotations are applied to
-
-		jsonOptions astJSON.Options
-
-		node Node // The node the annotations are applied to
-	}
+	AnnotationsRefSet = v1.AnnotationsRefSet

-	AnnotationsRefSet []*AnnotationsRef
-
-	FlatAnnotationsRefSet AnnotationsRefSet
+	FlatAnnotationsRefSet = v1.FlatAnnotationsRefSet
 )
-
-func (a *Annotations) String() string {
-	bs, _ := a.MarshalJSON()
-	return string(bs)
-}
-
-// Loc returns the location of this annotation.
-func (a *Annotations) Loc() *Location {
-	return a.Location
-}
-
-// SetLoc updates the location of this annotation.
-func (a *Annotations) SetLoc(l *Location) {
-	a.Location = l
-}
-
-// EndLoc returns the location of this annotation's last comment line.
-func (a *Annotations) EndLoc() *Location {
-	count := len(a.comments)
-	if count == 0 {
-		return a.Location
-	}
-	return a.comments[count-1].Location
-}
-
-// Compare returns an integer indicating if a is less than, equal to, or greater
-// than other.
-func (a *Annotations) Compare(other *Annotations) int {
-
-	if a == nil && other == nil {
-		return 0
-	}
-
-	if a == nil {
-		return -1
-	}
-
-	if other == nil {
-		return 1
-	}
-
-	if cmp := scopeCompare(a.Scope, other.Scope); cmp != 0 {
-		return cmp
-	}
-
-	if cmp := strings.Compare(a.Title, other.Title); cmp != 0 {
-		return cmp
-	}
-
-	if cmp := strings.Compare(a.Description, other.Description); cmp != 0 {
-		return cmp
-	}
-
-	if cmp := compareStringLists(a.Organizations, other.Organizations); cmp != 0 {
-		return cmp
-	}
-
-	if cmp := compareRelatedResources(a.RelatedResources, other.RelatedResources); cmp != 0 {
-		return cmp
-	}
-
-	if cmp := compareAuthors(a.Authors, other.Authors); cmp != 0 {
-		return cmp
-	}
-
-	if cmp := compareSchemas(a.Schemas, other.Schemas); cmp != 0 {
-		return cmp
-	}
-
-	if a.Entrypoint != other.Entrypoint {
-		if a.Entrypoint {
-			return 1
-		}
-		return -1
-	}
-
-	if cmp := util.Compare(a.Custom, other.Custom); cmp != 0 {
-		return cmp
-	}
-
-	return 0
-}
-
-// GetTargetPath returns the path of the node these Annotations are applied to (the target)
-func (a *Annotations) GetTargetPath() Ref {
-	switch n := a.node.(type) {
-	case *Package:
-		return n.Path
-	case *Rule:
-		return n.Ref().GroundPrefix()
-	default:
-		return nil
-	}
-}
-
-func (a *Annotations) setJSONOptions(opts astJSON.Options) {
-	a.jsonOptions = opts
-	if a.Location != nil {
-		a.Location.JSONOptions = opts
-	}
-}
-
-func (a *Annotations) MarshalJSON() ([]byte, error) {
-	if a == nil {
-		return []byte(`{"scope":""}`), nil
-	}
-
-	data := map[string]interface{}{
-		"scope": a.Scope,
-	}
-
-	if a.Title != "" {
-		data["title"] = a.Title
-	}
-
-	if a.Description != "" {
-		data["description"] = a.Description
-	}
-
-	if a.Entrypoint {
-		data["entrypoint"] = a.Entrypoint
-	}
-
-	if len(a.Organizations) > 0 {
-		data["organizations"] = a.Organizations
-	}
-
-	if len(a.RelatedResources) > 0 {
-		data["related_resources"] = a.RelatedResources
-	}
-
-	if len(a.Authors) > 0 {
-		data["authors"] = a.Authors
-	}
-
-	if len(a.Schemas) > 0 {
-		data["schemas"] = a.Schemas
-	}
-
-	if len(a.Custom) > 0 {
-		data["custom"] = a.Custom
-	}
-
-	if a.jsonOptions.MarshalOptions.IncludeLocation.Annotations {
-		if a.Location != nil {
-			data["location"] = a.Location
-		}
-	}
-
-	return json.Marshal(data)
-}
-
-func NewAnnotationsRef(a *Annotations) *AnnotationsRef {
-	var loc *Location
-	if a.node != nil {
-		loc = a.node.Loc()
-	}
-
-	return &AnnotationsRef{
-		Location:    loc,
-		Path:        a.GetTargetPath(),
-		Annotations: a,
-		node:        a.node,
-		jsonOptions: a.jsonOptions,
-	}
-}
-
-func (ar *AnnotationsRef) GetPackage() *Package {
-	switch n := ar.node.(type) {
-	case *Package:
-		return n
-	case *Rule:
-		return n.Module.Package
-	default:
-		return nil
-	}
-}
-
-func (ar *AnnotationsRef) GetRule() *Rule {
-	switch n := ar.node.(type) {
-	case *Rule:
-		return n
-	default:
-		return nil
-	}
-}
-
-func (ar *AnnotationsRef) MarshalJSON() ([]byte, error) {
-	data := map[string]interface{}{
-		"path": ar.Path,
-	}
-
-	if ar.Annotations != nil {
-		data["annotations"] = ar.Annotations
-	}
-
-	if ar.jsonOptions.MarshalOptions.IncludeLocation.AnnotationsRef {
-		if ar.Location != nil {
-			data["location"] = ar.Location
-		}
-	}
-
-	return json.Marshal(data)
-}
-
-func scopeCompare(s1, s2 string) int {
-
-	o1 := scopeOrder(s1)
-	o2 := scopeOrder(s2)
-
-	if o2 < o1 {
-		return 1
-	} else if o2 > o1 {
-		return -1
-	}
-
-	if s1 < s2 {
-		return -1
-	} else if s2 < s1 {
-		return 1
-	}
-
-	return 0
-}
-
-func scopeOrder(s string) int {
-	switch s {
-	case annotationScopeRule:
-		return 1
-	}
-	return 0
-}
-
-func compareAuthors(a, b []*AuthorAnnotation) int {
-	if len(a) > len(b) {
-		return 1
-	} else if len(a) < len(b) {
-		return -1
-	}
-
-	for i := 0; i < len(a); i++ {
-		if cmp := a[i].Compare(b[i]); cmp != 0 {
-			return cmp
-		}
-	}
-
-	return 0
-}
-
-func compareRelatedResources(a, b []*RelatedResourceAnnotation) int {
-	if len(a) > len(b) {
-		return 1
-	} else if len(a) < len(b) {
-		return -1
-	}
-
-	for i := 0; i < len(a); i++ {
-		if cmp := strings.Compare(a[i].String(), b[i].String()); cmp != 0 {
-			return cmp
-		}
-	}
-
-	return 0
-}
-
-func compareSchemas(a, b []*SchemaAnnotation) int {
-	maxLen := len(a)
-	if len(b) < maxLen {
-		maxLen = len(b)
-	}
-
-	for i := 0; i < maxLen; i++ {
-		if cmp := a[i].Compare(b[i]); cmp != 0 {
-			return cmp
-		}
-	}
-
-	if len(a) > len(b) {
-		return 1
-	} else if len(a) < len(b) {
-		return -1
-	}
-
-	return 0
-}
-
-func compareStringLists(a, b []string) int {
-	if len(a) > len(b) {
-		return 1
-	} else if len(a) < len(b) {
-		return -1
-	}
-
-	for i := 0; i < len(a); i++ {
-		if cmp := strings.Compare(a[i], b[i]); cmp != 0 {
-			return cmp
-		}
-	}
-
-	return 0
-}
-
-// Copy returns a deep copy of s.
-func (a *Annotations) Copy(node Node) *Annotations {
-	cpy := *a
-
-	cpy.Organizations = make([]string, len(a.Organizations))
-	copy(cpy.Organizations, a.Organizations)
-
-	cpy.RelatedResources = make([]*RelatedResourceAnnotation, len(a.RelatedResources))
-	for i := range a.RelatedResources {
-		cpy.RelatedResources[i] = a.RelatedResources[i].Copy()
-	}
-
-	cpy.Authors = make([]*AuthorAnnotation, len(a.Authors))
-	for i := range a.Authors {
-		cpy.Authors[i] = a.Authors[i].Copy()
-	}
-
-	cpy.Schemas = make([]*SchemaAnnotation, len(a.Schemas))
-	for i := range a.Schemas {
-		cpy.Schemas[i] = a.Schemas[i].Copy()
-	}
-
-	cpy.Custom = deepcopy.Map(a.Custom)
-
-	cpy.node = node
-
-	return &cpy
-}
-
-// toObject constructs an AST Object from the annotation.
-func (a *Annotations) toObject() (*Object, *Error) {
-	obj := NewObject()
-
-	if a == nil {
-		return &obj, nil
-	}
-
-	if len(a.Scope) > 0 {
-		obj.Insert(StringTerm("scope"), StringTerm(a.Scope))
-	}
-
-	if len(a.Title) > 0 {
-		obj.Insert(StringTerm("title"), StringTerm(a.Title))
-	}
-
-	if a.Entrypoint {
-		obj.Insert(StringTerm("entrypoint"), BooleanTerm(true))
-	}
-
-	if len(a.Description) > 0 {
-		obj.Insert(StringTerm("description"), StringTerm(a.Description))
-	}
-
-	if len(a.Organizations) > 0 {
-		orgs := make([]*Term, 0, len(a.Organizations))
-		for _, org := range a.Organizations {
-			orgs = append(orgs, StringTerm(org))
-		}
-		obj.Insert(StringTerm("organizations"), ArrayTerm(orgs...))
-	}
-
-	if len(a.RelatedResources) > 0 {
-		rrs := make([]*Term, 0, len(a.RelatedResources))
-		for _, rr := range a.RelatedResources {
-			rrObj := NewObject(Item(StringTerm("ref"), StringTerm(rr.Ref.String())))
-			if len(rr.Description) > 0 {
-				rrObj.Insert(StringTerm("description"), StringTerm(rr.Description))
-			}
-			rrs = append(rrs, NewTerm(rrObj))
-		}
-		obj.Insert(StringTerm("related_resources"), ArrayTerm(rrs...))
-	}
-
-	if len(a.Authors) > 0 {
-		as := make([]*Term, 0, len(a.Authors))
-		for _, author := range a.Authors {
-			aObj := NewObject()
-			if len(author.Name) > 0 {
-				aObj.Insert(StringTerm("name"), StringTerm(author.Name))
-			}
-			if len(author.Email) > 0 {
-				aObj.Insert(StringTerm("email"), StringTerm(author.Email))
-			}
-			as = append(as, NewTerm(aObj))
-		}
-		obj.Insert(StringTerm("authors"), ArrayTerm(as...))
-	}
-
-	if len(a.Schemas) > 0 {
-		ss := make([]*Term, 0, len(a.Schemas))
-		for _, s := range a.Schemas {
-			sObj := NewObject()
-			if len(s.Path) > 0 {
-				sObj.Insert(StringTerm("path"), NewTerm(s.Path.toArray()))
-			}
-			if len(s.Schema) > 0 {
|
||||
sObj.Insert(StringTerm("schema"), NewTerm(s.Schema.toArray()))
|
||||
}
|
||||
if s.Definition != nil {
|
||||
def, err := InterfaceToValue(s.Definition)
|
||||
if err != nil {
|
||||
return nil, NewError(CompileErr, a.Location, "invalid definition in schema annotation: %s", err.Error())
|
||||
}
|
||||
sObj.Insert(StringTerm("definition"), NewTerm(def))
|
||||
}
|
||||
ss = append(ss, NewTerm(sObj))
|
||||
}
|
||||
obj.Insert(StringTerm("schemas"), ArrayTerm(ss...))
|
||||
}
|
||||
|
||||
if len(a.Custom) > 0 {
|
||||
c, err := InterfaceToValue(a.Custom)
|
||||
if err != nil {
|
||||
return nil, NewError(CompileErr, a.Location, "invalid custom annotation %s", err.Error())
|
||||
}
|
||||
obj.Insert(StringTerm("custom"), NewTerm(c))
|
||||
}
|
||||
|
||||
return &obj, nil
|
||||
}
|
||||
|
||||
func attachRuleAnnotations(mod *Module) {
|
||||
// make a copy of the annotations
|
||||
cpy := make([]*Annotations, len(mod.Annotations))
|
||||
for i, a := range mod.Annotations {
|
||||
cpy[i] = a.Copy(a.node)
|
||||
}
|
||||
|
||||
for _, rule := range mod.Rules {
|
||||
var j int
|
||||
var found bool
|
||||
for i, a := range cpy {
|
||||
if rule.Ref().GroundPrefix().Equal(a.GetTargetPath()) {
|
||||
if a.Scope == annotationScopeDocument {
|
||||
rule.Annotations = append(rule.Annotations, a)
|
||||
} else if a.Scope == annotationScopeRule && rule.Loc().Row > a.Location.Row {
|
||||
j = i
|
||||
found = true
|
||||
rule.Annotations = append(rule.Annotations, a)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if found && j < len(cpy) {
|
||||
cpy = append(cpy[:j], cpy[j+1:]...)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func attachAnnotationsNodes(mod *Module) Errors {
|
||||
var errs Errors
|
||||
|
||||
// Find first non-annotation statement following each annotation and attach
|
||||
// the annotation to that statement.
|
||||
for _, a := range mod.Annotations {
|
||||
for _, stmt := range mod.stmts {
|
||||
_, ok := stmt.(*Annotations)
|
||||
if !ok {
|
||||
if stmt.Loc().Row > a.Location.Row {
|
||||
a.node = stmt
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if a.Scope == "" {
|
||||
switch a.node.(type) {
|
||||
case *Rule:
|
||||
if a.Entrypoint {
|
||||
a.Scope = annotationScopeDocument
|
||||
} else {
|
||||
a.Scope = annotationScopeRule
|
||||
}
|
||||
case *Package:
|
||||
a.Scope = annotationScopePackage
|
||||
case *Import:
|
||||
a.Scope = annotationScopeImport
|
||||
}
|
||||
}
|
||||
|
||||
if err := validateAnnotationScopeAttachment(a); err != nil {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
|
||||
if err := validateAnnotationEntrypointAttachment(a); err != nil {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
}
|
||||
|
||||
return errs
|
||||
}
|
||||
|
||||
func validateAnnotationScopeAttachment(a *Annotations) *Error {
|
||||
|
||||
switch a.Scope {
|
||||
case annotationScopeRule, annotationScopeDocument:
|
||||
if _, ok := a.node.(*Rule); ok {
|
||||
return nil
|
||||
}
|
||||
return newScopeAttachmentErr(a, "rule")
|
||||
case annotationScopePackage, annotationScopeSubpackages:
|
||||
if _, ok := a.node.(*Package); ok {
|
||||
return nil
|
||||
}
|
||||
return newScopeAttachmentErr(a, "package")
|
||||
}
|
||||
|
||||
return NewError(ParseErr, a.Loc(), "invalid annotation scope '%v'. Use one of '%s', '%s', '%s', or '%s'",
|
||||
a.Scope, annotationScopeRule, annotationScopeDocument, annotationScopePackage, annotationScopeSubpackages)
|
||||
}
|
||||
|
||||
func validateAnnotationEntrypointAttachment(a *Annotations) *Error {
|
||||
if a.Entrypoint && !(a.Scope == annotationScopeDocument || a.Scope == annotationScopePackage) {
|
||||
return NewError(
|
||||
ParseErr, a.Loc(), "annotation entrypoint applied to non-document or package scope '%v'", a.Scope)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Copy returns a deep copy of a.
|
||||
func (a *AuthorAnnotation) Copy() *AuthorAnnotation {
|
||||
cpy := *a
|
||||
return &cpy
|
||||
}
|
||||
|
||||
// Compare returns an integer indicating if s is less than, equal to, or greater
|
||||
// than other.
|
||||
func (a *AuthorAnnotation) Compare(other *AuthorAnnotation) int {
|
||||
if cmp := strings.Compare(a.Name, other.Name); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
|
||||
if cmp := strings.Compare(a.Email, other.Email); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
func (a *AuthorAnnotation) String() string {
|
||||
if len(a.Email) == 0 {
|
||||
return a.Name
|
||||
} else if len(a.Name) == 0 {
|
||||
return fmt.Sprintf("<%s>", a.Email)
|
||||
}
|
||||
return fmt.Sprintf("%s <%s>", a.Name, a.Email)
|
||||
}
|
||||
|
||||
// Copy returns a deep copy of rr.
|
||||
func (rr *RelatedResourceAnnotation) Copy() *RelatedResourceAnnotation {
|
||||
cpy := *rr
|
||||
return &cpy
|
||||
}
|
||||
|
||||
// Compare returns an integer indicating if s is less than, equal to, or greater
|
||||
// than other.
|
||||
func (rr *RelatedResourceAnnotation) Compare(other *RelatedResourceAnnotation) int {
|
||||
if cmp := strings.Compare(rr.Description, other.Description); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
|
||||
if cmp := strings.Compare(rr.Ref.String(), other.Ref.String()); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
func (rr *RelatedResourceAnnotation) String() string {
|
||||
bs, _ := json.Marshal(rr)
|
||||
return string(bs)
|
||||
}
|
||||
|
||||
func (rr *RelatedResourceAnnotation) MarshalJSON() ([]byte, error) {
|
||||
d := map[string]interface{}{
|
||||
"ref": rr.Ref.String(),
|
||||
}
|
||||
|
||||
if len(rr.Description) > 0 {
|
||||
d["description"] = rr.Description
|
||||
}
|
||||
|
||||
return json.Marshal(d)
|
||||
}
|
||||
|
||||
// Copy returns a deep copy of s.
|
||||
func (s *SchemaAnnotation) Copy() *SchemaAnnotation {
|
||||
cpy := *s
|
||||
return &cpy
|
||||
}
|
||||
|
||||
// Compare returns an integer indicating if s is less than, equal to, or greater
|
||||
// than other.
|
||||
func (s *SchemaAnnotation) Compare(other *SchemaAnnotation) int {
|
||||
|
||||
if cmp := s.Path.Compare(other.Path); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
|
||||
if cmp := s.Schema.Compare(other.Schema); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
|
||||
if s.Definition != nil && other.Definition == nil {
|
||||
return -1
|
||||
} else if s.Definition == nil && other.Definition != nil {
|
||||
return 1
|
||||
} else if s.Definition != nil && other.Definition != nil {
|
||||
return util.Compare(*s.Definition, *other.Definition)
|
||||
}
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
func (s *SchemaAnnotation) String() string {
|
||||
bs, _ := json.Marshal(s)
|
||||
return string(bs)
|
||||
}
|
||||
|
||||
func newAnnotationSet() *AnnotationSet {
|
||||
return &AnnotationSet{
|
||||
byRule: map[*Rule][]*Annotations{},
|
||||
byPackage: map[int]*Annotations{},
|
||||
byPath: newAnnotationTree(),
|
||||
}
|
||||
return v1.NewAnnotationsRef(a)
|
||||
}
|
||||
|
||||
func BuildAnnotationSet(modules []*Module) (*AnnotationSet, Errors) {
|
||||
as := newAnnotationSet()
|
||||
var errs Errors
|
||||
for _, m := range modules {
|
||||
for _, a := range m.Annotations {
|
||||
if err := as.add(a); err != nil {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(errs) > 0 {
|
||||
return nil, errs
|
||||
}
|
||||
as.modules = modules
|
||||
return as, nil
|
||||
}
|
||||
|
||||
// NOTE(philipc): During copy propagation, the underlying Nodes can be
|
||||
// stripped away from the annotations, leading to nil deref panics. We
|
||||
// silently ignore these cases for now, as a workaround.
|
||||
func (as *AnnotationSet) add(a *Annotations) *Error {
|
||||
switch a.Scope {
|
||||
case annotationScopeRule:
|
||||
if rule, ok := a.node.(*Rule); ok {
|
||||
as.byRule[rule] = append(as.byRule[rule], a)
|
||||
}
|
||||
case annotationScopePackage:
|
||||
if pkg, ok := a.node.(*Package); ok {
|
||||
hash := pkg.Path.Hash()
|
||||
if exist, ok := as.byPackage[hash]; ok {
|
||||
return errAnnotationRedeclared(a, exist.Location)
|
||||
}
|
||||
as.byPackage[hash] = a
|
||||
}
|
||||
case annotationScopeDocument:
|
||||
if rule, ok := a.node.(*Rule); ok {
|
||||
path := rule.Ref().GroundPrefix()
|
||||
x := as.byPath.get(path)
|
||||
if x != nil {
|
||||
return errAnnotationRedeclared(a, x.Value.Location)
|
||||
}
|
||||
as.byPath.insert(path, a)
|
||||
}
|
||||
case annotationScopeSubpackages:
|
||||
if pkg, ok := a.node.(*Package); ok {
|
||||
x := as.byPath.get(pkg.Path)
|
||||
if x != nil && x.Value != nil {
|
||||
return errAnnotationRedeclared(a, x.Value.Location)
|
||||
}
|
||||
as.byPath.insert(pkg.Path, a)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (as *AnnotationSet) GetRuleScope(r *Rule) []*Annotations {
|
||||
if as == nil {
|
||||
return nil
|
||||
}
|
||||
return as.byRule[r]
|
||||
}
|
||||
|
||||
func (as *AnnotationSet) GetSubpackagesScope(path Ref) []*Annotations {
|
||||
if as == nil {
|
||||
return nil
|
||||
}
|
||||
return as.byPath.ancestors(path)
|
||||
}
|
||||
|
||||
func (as *AnnotationSet) GetDocumentScope(path Ref) *Annotations {
|
||||
if as == nil {
|
||||
return nil
|
||||
}
|
||||
if node := as.byPath.get(path); node != nil {
|
||||
return node.Value
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (as *AnnotationSet) GetPackageScope(pkg *Package) *Annotations {
|
||||
if as == nil {
|
||||
return nil
|
||||
}
|
||||
return as.byPackage[pkg.Path.Hash()]
|
||||
}
|
||||
|
||||
// Flatten returns a flattened list view of this AnnotationSet.
|
||||
// The returned slice is sorted, first by the annotations' target path, then by their target location
|
||||
func (as *AnnotationSet) Flatten() FlatAnnotationsRefSet {
|
||||
// This preallocation often won't be optimal, but it's superior to starting with a nil slice.
|
||||
refs := make([]*AnnotationsRef, 0, len(as.byPath.Children)+len(as.byRule)+len(as.byPackage))
|
||||
|
||||
refs = as.byPath.flatten(refs)
|
||||
|
||||
for _, a := range as.byPackage {
|
||||
refs = append(refs, NewAnnotationsRef(a))
|
||||
}
|
||||
|
||||
for _, as := range as.byRule {
|
||||
for _, a := range as {
|
||||
refs = append(refs, NewAnnotationsRef(a))
|
||||
}
|
||||
}
|
||||
|
||||
// Sort by path, then annotation location, for stable output
|
||||
sort.SliceStable(refs, func(i, j int) bool {
|
||||
return refs[i].Compare(refs[j]) < 0
|
||||
})
|
||||
|
||||
return refs
|
||||
}
|
||||
|
||||
// Chain returns the chain of annotations leading up to the given rule.
|
||||
// The returned slice is ordered as follows
|
||||
// 0. Entries for the given rule, ordered from the METADATA block declared immediately above the rule, to the block declared farthest away (always at least one entry)
|
||||
// 1. The 'document' scope entry, if any
|
||||
// 2. The 'package' scope entry, if any
|
||||
// 3. Entries for the 'subpackages' scope, if any; ordered from the closest package path to the fartest. E.g.: 'do.re.mi', 'do.re', 'do'
|
||||
// The returned slice is guaranteed to always contain at least one entry, corresponding to the given rule.
|
||||
func (as *AnnotationSet) Chain(rule *Rule) AnnotationsRefSet {
|
||||
var refs []*AnnotationsRef
|
||||
|
||||
ruleAnnots := as.GetRuleScope(rule)
|
||||
|
||||
if len(ruleAnnots) >= 1 {
|
||||
for _, a := range ruleAnnots {
|
||||
refs = append(refs, NewAnnotationsRef(a))
|
||||
}
|
||||
} else {
|
||||
// Make sure there is always a leading entry representing the passed rule, even if it has no annotations
|
||||
refs = append(refs, &AnnotationsRef{
|
||||
Location: rule.Location,
|
||||
Path: rule.Ref().GroundPrefix(),
|
||||
node: rule,
|
||||
})
|
||||
}
|
||||
|
||||
if len(refs) > 1 {
|
||||
// Sort by annotation location; chain must start with annotations declared closest to rule, then going outward
|
||||
sort.SliceStable(refs, func(i, j int) bool {
|
||||
return refs[i].Annotations.Location.Compare(refs[j].Annotations.Location) > 0
|
||||
})
|
||||
}
|
||||
|
||||
docAnnots := as.GetDocumentScope(rule.Ref().GroundPrefix())
|
||||
if docAnnots != nil {
|
||||
refs = append(refs, NewAnnotationsRef(docAnnots))
|
||||
}
|
||||
|
||||
pkg := rule.Module.Package
|
||||
pkgAnnots := as.GetPackageScope(pkg)
|
||||
if pkgAnnots != nil {
|
||||
refs = append(refs, NewAnnotationsRef(pkgAnnots))
|
||||
}
|
||||
|
||||
subPkgAnnots := as.GetSubpackagesScope(pkg.Path)
|
||||
// We need to reverse the order, as subPkgAnnots ordering will start at the root,
|
||||
// whereas we want to end at the root.
|
||||
for i := len(subPkgAnnots) - 1; i >= 0; i-- {
|
||||
refs = append(refs, NewAnnotationsRef(subPkgAnnots[i]))
|
||||
}
|
||||
|
||||
return refs
|
||||
}
|
||||
|
||||
func (ars FlatAnnotationsRefSet) Insert(ar *AnnotationsRef) FlatAnnotationsRefSet {
|
||||
result := make(FlatAnnotationsRefSet, 0, len(ars)+1)
|
||||
|
||||
// insertion sort, first by path, then location
|
||||
for i, current := range ars {
|
||||
if ar.Compare(current) < 0 {
|
||||
result = append(result, ar)
|
||||
result = append(result, ars[i:]...)
|
||||
break
|
||||
}
|
||||
result = append(result, current)
|
||||
}
|
||||
|
||||
if len(result) < len(ars)+1 {
|
||||
result = append(result, ar)
|
||||
}
|
||||
|
||||
return result
|
||||
}
func newAnnotationTree() *annotationTreeNode {
	return &annotationTreeNode{
		Value:    nil,
		Children: map[Value]*annotationTreeNode{},
	}
}

func (t *annotationTreeNode) insert(path Ref, value *Annotations) {
	node := t
	for _, k := range path {
		child, ok := node.Children[k.Value]
		if !ok {
			child = newAnnotationTree()
			node.Children[k.Value] = child
		}
		node = child
	}
	node.Value = value
}

func (t *annotationTreeNode) get(path Ref) *annotationTreeNode {
	node := t
	for _, k := range path {
		if node == nil {
			return nil
		}
		child, ok := node.Children[k.Value]
		if !ok {
			return nil
		}
		node = child
	}
	return node
}

// ancestors returns a slice of annotations in ascending order, starting with the root of ref; e.g.: 'root', 'root.foo', 'root.foo.bar'.
func (t *annotationTreeNode) ancestors(path Ref) (result []*Annotations) {
	node := t
	for _, k := range path {
		if node == nil {
			return result
		}
		child, ok := node.Children[k.Value]
		if !ok {
			return result
		}
		if child.Value != nil {
			result = append(result, child.Value)
		}
		node = child
	}
	return result
}
func (t *annotationTreeNode) flatten(refs []*AnnotationsRef) []*AnnotationsRef {
	if a := t.Value; a != nil {
		refs = append(refs, NewAnnotationsRef(a))
	}
	for _, c := range t.Children {
		refs = c.flatten(refs)
	}
	return refs
}

func (ar *AnnotationsRef) Compare(other *AnnotationsRef) int {
	if c := ar.Path.Compare(other.Path); c != 0 {
		return c
	}

	if c := ar.Annotations.Location.Compare(other.Annotations.Location); c != 0 {
		return c
	}

	return ar.Annotations.Compare(other.Annotations)
	return v1.BuildAnnotationSet(modules)
}
3153	vendor/github.com/open-policy-agent/opa/ast/builtins.go (generated, vendored)
File diff suppressed because it is too large
200	vendor/github.com/open-policy-agent/opa/ast/capabilities.go (generated, vendored)
@@ -5,228 +5,54 @@
package ast

import (
	"bytes"
	_ "embed"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"sort"
	"strings"

	caps "github.com/open-policy-agent/opa/capabilities"
	"github.com/open-policy-agent/opa/internal/semver"
	"github.com/open-policy-agent/opa/internal/wasm/sdk/opa/capabilities"
	"github.com/open-policy-agent/opa/util"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// VersionIndex contains an index from built-in function name, language feature,
// and future rego keyword to version number. During the build, this is used to
// create an index of the minimum version required for the built-in/feature/kw.
type VersionIndex struct {
	Builtins map[string]semver.Version `json:"builtins"`
	Features map[string]semver.Version `json:"features"`
	Keywords map[string]semver.Version `json:"keywords"`
}

// NOTE(tsandall): this file is generated by internal/cmd/genversionindex/main.go
// and run as part of go:generate. We generate the version index as part of the
// build process because it's relatively expensive to build (it takes ~500ms on
// my machine) and never changes.
//
//go:embed version_index.json
var versionIndexBs []byte

var minVersionIndex = func() VersionIndex {
	var vi VersionIndex
	err := json.Unmarshal(versionIndexBs, &vi)
	if err != nil {
		panic(err)
	}
	return vi
}()

type VersionIndex = v1.VersionIndex

// In the compiler, we used this to check that we're OK working with ref heads.
// If this isn't present, we'll fail. This is to ensure that older versions of
// OPA can work with policies that we're compiling -- if they don't know ref
// heads, they wouldn't be able to parse them.
const FeatureRefHeadStringPrefixes = "rule_head_ref_string_prefixes"
const FeatureRefHeads = "rule_head_refs"
const FeatureRegoV1Import = "rego_v1_import"

const FeatureRefHeadStringPrefixes = v1.FeatureRefHeadStringPrefixes
const FeatureRefHeads = v1.FeatureRefHeads
const FeatureRegoV1 = v1.FeatureRegoV1
const FeatureRegoV1Import = v1.FeatureRegoV1Import

// Capabilities defines a structure containing data that describes the capabilities
// or features supported by a particular version of OPA.
type Capabilities struct {
	Builtins        []*Builtin       `json:"builtins,omitempty"`
	FutureKeywords  []string         `json:"future_keywords,omitempty"`
	WasmABIVersions []WasmABIVersion `json:"wasm_abi_versions,omitempty"`

	// Features is a bit of a mixed bag for checking that an older version of OPA
	// is able to do what needs to be done.
	// TODO(sr): find better words ^^
	Features []string `json:"features,omitempty"`

	// allow_net is an array of hostnames or IP addresses, that an OPA instance is
	// allowed to connect to.
	// If omitted, ANY host can be connected to. If empty, NO host can be connected to.
	// As of now, this only controls fetching remote refs for using JSON Schemas in
	// the type checker.
	// TODO(sr): support ports to further restrict connection peers
	// TODO(sr): support restricting `http.send` using the same mechanism (see https://github.com/open-policy-agent/opa/issues/3665)
	AllowNet []string `json:"allow_net,omitempty"`
}

type Capabilities = v1.Capabilities

// WasmABIVersion captures the Wasm ABI version. Its `Minor` version is indicating
// backwards-compatible changes.
type WasmABIVersion struct {
	Version int `json:"version"`
	Minor   int `json:"minor_version"`
}

type WasmABIVersion = v1.WasmABIVersion

// CapabilitiesForThisVersion returns the capabilities of this version of OPA.
func CapabilitiesForThisVersion() *Capabilities {
	f := &Capabilities{}

	for _, vers := range capabilities.ABIVersions() {
		f.WasmABIVersions = append(f.WasmABIVersions, WasmABIVersion{Version: vers[0], Minor: vers[1]})
	}

	f.Builtins = make([]*Builtin, len(Builtins))
	copy(f.Builtins, Builtins)
	sort.Slice(f.Builtins, func(i, j int) bool {
		return f.Builtins[i].Name < f.Builtins[j].Name
	})

	for kw := range futureKeywords {
		f.FutureKeywords = append(f.FutureKeywords, kw)
	}
	sort.Strings(f.FutureKeywords)

	f.Features = []string{
		FeatureRefHeadStringPrefixes,
		FeatureRefHeads,
		FeatureRegoV1Import,
	}

	return f
	return v1.CapabilitiesForThisVersion(v1.CapabilitiesRegoVersion(DefaultRegoVersion))
}

// LoadCapabilitiesJSON loads a JSON serialized capabilities structure from the reader r.
func LoadCapabilitiesJSON(r io.Reader) (*Capabilities, error) {
	d := util.NewJSONDecoder(r)
	var c Capabilities
	return &c, d.Decode(&c)
	return v1.LoadCapabilitiesJSON(r)
}

// LoadCapabilitiesVersion loads a JSON serialized capabilities structure from the specific version.
func LoadCapabilitiesVersion(version string) (*Capabilities, error) {
	cvs, err := LoadCapabilitiesVersions()
	if err != nil {
		return nil, err
	}

	for _, cv := range cvs {
		if cv == version {
			cont, err := caps.FS.ReadFile(cv + ".json")
			if err != nil {
				return nil, err
			}

			return LoadCapabilitiesJSON(bytes.NewReader(cont))
		}

	}
	return nil, fmt.Errorf("no capabilities version found %v", version)
	return v1.LoadCapabilitiesVersion(version)
}

// LoadCapabilitiesFile loads a JSON serialized capabilities structure from a file.
func LoadCapabilitiesFile(file string) (*Capabilities, error) {
	fd, err := os.Open(file)
	if err != nil {
		return nil, err
	}
	defer fd.Close()
	return LoadCapabilitiesJSON(fd)
	return v1.LoadCapabilitiesFile(file)
}

// LoadCapabilitiesVersions loads all capabilities versions
func LoadCapabilitiesVersions() ([]string, error) {
	ents, err := caps.FS.ReadDir(".")
	if err != nil {
		return nil, err
	}

	capabilitiesVersions := make([]string, 0, len(ents))
	for _, ent := range ents {
		capabilitiesVersions = append(capabilitiesVersions, strings.Replace(ent.Name(), ".json", "", 1))
	}
	return capabilitiesVersions, nil
}

// MinimumCompatibleVersion returns the minimum compatible OPA version based on
// the built-ins, features, and keywords in c.
func (c *Capabilities) MinimumCompatibleVersion() (string, bool) {

	var maxVersion semver.Version

	// this is the oldest OPA release that includes capabilities
	if err := maxVersion.Set("0.17.0"); err != nil {
		panic("unreachable")
	}

	for _, bi := range c.Builtins {
		v, ok := minVersionIndex.Builtins[bi.Name]
		if !ok {
			return "", false
		}
		if v.Compare(maxVersion) > 0 {
			maxVersion = v
		}
	}

	for _, kw := range c.FutureKeywords {
		v, ok := minVersionIndex.Keywords[kw]
		if !ok {
			return "", false
		}
		if v.Compare(maxVersion) > 0 {
			maxVersion = v
		}
	}

	for _, feat := range c.Features {
		v, ok := minVersionIndex.Features[feat]
		if !ok {
			return "", false
		}
		if v.Compare(maxVersion) > 0 {
			maxVersion = v
		}
	}

	return maxVersion.String(), true
}

func (c *Capabilities) ContainsFeature(feature string) bool {
	for _, f := range c.Features {
		if f == feature {
			return true
		}
	}
	return false
}

// addBuiltinSorted inserts a built-in into c in sorted order. An existing built-in with the same name
// will be overwritten.
func (c *Capabilities) addBuiltinSorted(bi *Builtin) {
	i := sort.Search(len(c.Builtins), func(x int) bool {
		return c.Builtins[x].Name >= bi.Name
	})
	if i < len(c.Builtins) && bi.Name == c.Builtins[i].Name {
		c.Builtins[i] = bi
		return
	}
	c.Builtins = append(c.Builtins, nil)
	copy(c.Builtins[i+1:], c.Builtins[i:])
	c.Builtins[i] = bi
	return v1.LoadCapabilitiesVersions()
}
1307	vendor/github.com/open-policy-agent/opa/ast/check.go (generated, vendored)
File diff suppressed because it is too large
361	vendor/github.com/open-policy-agent/opa/ast/compare.go (generated, vendored)
@@ -5,9 +5,7 @@
package ast

import (
	"encoding/json"
	"fmt"
	"math/big"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)
// Compare returns an integer indicating whether two AST values are less than,
@@ -37,360 +35,5 @@ import (
// is empty.
// Other comparisons are consistent but not defined.
func Compare(a, b interface{}) int {

	if t, ok := a.(*Term); ok {
		if t == nil {
			a = nil
		} else {
			a = t.Value
		}
	}

	if t, ok := b.(*Term); ok {
		if t == nil {
			b = nil
		} else {
			b = t.Value
		}
	}

	if a == nil {
		if b == nil {
			return 0
		}
		return -1
	}
	if b == nil {
		return 1
	}

	sortA := sortOrder(a)
	sortB := sortOrder(b)

	if sortA < sortB {
		return -1
	} else if sortB < sortA {
		return 1
	}

	switch a := a.(type) {
	case Null:
		return 0
	case Boolean:
		b := b.(Boolean)
		if a.Equal(b) {
			return 0
		}
		if !a {
			return -1
		}
		return 1
	case Number:
		if ai, err := json.Number(a).Int64(); err == nil {
			if bi, err := json.Number(b.(Number)).Int64(); err == nil {
				if ai == bi {
					return 0
				}
				if ai < bi {
					return -1
				}
				return 1
			}
		}

		// We use big.Rat for comparing big numbers.
		// It replaces big.Float due to the following reason:
		// big.Float comes with a default precision of 64, and setting a
		// larger precision results in more memory being allocated
		// (regardless of the actual number we are parsing with SetString).
		//
		// Note: If we're so close to zero that big.Float says we are zero, do
		// *not* (big.Rat).SetString on the original string; it'll potentially
		// take very long.
		var bigA, bigB *big.Rat
		fa, ok := new(big.Float).SetString(string(a))
		if !ok {
			panic("illegal value")
		}
		if fa.IsInt() {
			if i, _ := fa.Int64(); i == 0 {
				bigA = new(big.Rat).SetInt64(0)
			}
		}
		if bigA == nil {
			bigA, ok = new(big.Rat).SetString(string(a))
			if !ok {
				panic("illegal value")
			}
		}

		fb, ok := new(big.Float).SetString(string(b.(Number)))
		if !ok {
			panic("illegal value")
		}
		if fb.IsInt() {
			if i, _ := fb.Int64(); i == 0 {
				bigB = new(big.Rat).SetInt64(0)
			}
		}
		if bigB == nil {
			bigB, ok = new(big.Rat).SetString(string(b.(Number)))
			if !ok {
				panic("illegal value")
			}
		}

		return bigA.Cmp(bigB)
	case String:
		b := b.(String)
		if a.Equal(b) {
			return 0
		}
		if a < b {
			return -1
		}
		return 1
	case Var:
		b := b.(Var)
		if a.Equal(b) {
			return 0
		}
		if a < b {
			return -1
		}
		return 1
	case Ref:
		b := b.(Ref)
		return termSliceCompare(a, b)
	case *Array:
		b := b.(*Array)
		return termSliceCompare(a.elems, b.elems)
	case *lazyObj:
		return Compare(a.force(), b)
	case *object:
		if x, ok := b.(*lazyObj); ok {
			b = x.force()
		}
		b := b.(*object)
		return a.Compare(b)
	case Set:
		b := b.(Set)
		return a.Compare(b)
	case *ArrayComprehension:
		b := b.(*ArrayComprehension)
		if cmp := Compare(a.Term, b.Term); cmp != 0 {
			return cmp
		}
		return Compare(a.Body, b.Body)
	case *ObjectComprehension:
		b := b.(*ObjectComprehension)
		if cmp := Compare(a.Key, b.Key); cmp != 0 {
			return cmp
		}
		if cmp := Compare(a.Value, b.Value); cmp != 0 {
			return cmp
		}
		return Compare(a.Body, b.Body)
	case *SetComprehension:
		b := b.(*SetComprehension)
		if cmp := Compare(a.Term, b.Term); cmp != 0 {
			return cmp
		}
		return Compare(a.Body, b.Body)
	case Call:
		b := b.(Call)
		return termSliceCompare(a, b)
	case *Expr:
		b := b.(*Expr)
		return a.Compare(b)
	case *SomeDecl:
		b := b.(*SomeDecl)
		return a.Compare(b)
	case *Every:
		b := b.(*Every)
		return a.Compare(b)
	case *With:
		b := b.(*With)
		return a.Compare(b)
	case Body:
		b := b.(Body)
		return a.Compare(b)
	case *Head:
		b := b.(*Head)
		return a.Compare(b)
	case *Rule:
		b := b.(*Rule)
		return a.Compare(b)
	case Args:
		b := b.(Args)
		return termSliceCompare(a, b)
	case *Import:
		b := b.(*Import)
		return a.Compare(b)
	case *Package:
		b := b.(*Package)
		return a.Compare(b)
	case *Annotations:
		b := b.(*Annotations)
		return a.Compare(b)
	case *Module:
		b := b.(*Module)
		return a.Compare(b)
	}
	panic(fmt.Sprintf("illegal value: %T", a))
}
|
||||
|
||||
type termSlice []*Term
|
||||
|
||||
func (s termSlice) Less(i, j int) bool { return Compare(s[i].Value, s[j].Value) < 0 }
|
||||
func (s termSlice) Swap(i, j int) { x := s[i]; s[i] = s[j]; s[j] = x }
|
||||
func (s termSlice) Len() int { return len(s) }
|
||||
|
||||
func sortOrder(x interface{}) int {
|
||||
switch x.(type) {
|
||||
case Null:
|
||||
return 0
|
||||
case Boolean:
|
||||
return 1
|
||||
case Number:
|
||||
return 2
|
||||
case String:
|
||||
return 3
|
||||
case Var:
|
||||
return 4
|
||||
case Ref:
|
||||
return 5
|
||||
case *Array:
|
||||
return 6
|
||||
case Object:
|
||||
return 7
|
||||
case Set:
|
||||
return 8
|
||||
case *ArrayComprehension:
|
||||
return 9
|
||||
case *ObjectComprehension:
|
||||
return 10
|
||||
case *SetComprehension:
|
||||
return 11
|
||||
case Call:
|
||||
return 12
|
||||
case Args:
|
||||
return 13
|
||||
case *Expr:
|
||||
return 100
|
||||
case *SomeDecl:
|
||||
return 101
|
||||
case *Every:
|
||||
return 102
|
||||
case *With:
|
||||
return 110
|
||||
case *Head:
|
||||
return 120
|
||||
case Body:
|
||||
return 200
|
||||
case *Rule:
|
||||
return 1000
|
||||
case *Import:
|
||||
return 1001
|
||||
case *Package:
|
||||
return 1002
|
||||
case *Annotations:
|
||||
return 1003
|
||||
case *Module:
|
||||
return 10000
|
||||
}
|
||||
panic(fmt.Sprintf("illegal value: %T", x))
|
||||
}
|
||||
|
||||
func importsCompare(a, b []*Import) int {
|
||||
minLen := len(a)
|
||||
if len(b) < minLen {
|
||||
minLen = len(b)
|
||||
}
|
||||
for i := 0; i < minLen; i++ {
|
||||
if cmp := a[i].Compare(b[i]); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
}
|
||||
if len(a) < len(b) {
|
||||
return -1
|
||||
}
|
||||
if len(b) < len(a) {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func annotationsCompare(a, b []*Annotations) int {
|
||||
minLen := len(a)
|
||||
if len(b) < minLen {
|
||||
minLen = len(b)
|
||||
}
|
||||
for i := 0; i < minLen; i++ {
|
||||
if cmp := a[i].Compare(b[i]); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
}
|
||||
if len(a) < len(b) {
|
||||
return -1
|
||||
}
|
||||
if len(b) < len(a) {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func rulesCompare(a, b []*Rule) int {
|
||||
minLen := len(a)
|
||||
if len(b) < minLen {
|
||||
minLen = len(b)
|
||||
}
|
||||
for i := 0; i < minLen; i++ {
|
||||
if cmp := a[i].Compare(b[i]); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
}
|
||||
if len(a) < len(b) {
|
||||
return -1
|
||||
}
|
||||
if len(b) < len(a) {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func termSliceCompare(a, b []*Term) int {
|
||||
minLen := len(a)
|
||||
if len(b) < minLen {
|
||||
minLen = len(b)
|
||||
}
|
||||
for i := 0; i < minLen; i++ {
|
||||
if cmp := Compare(a[i], b[i]); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
}
|
||||
if len(a) < len(b) {
|
||||
return -1
|
||||
} else if len(b) < len(a) {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func withSliceCompare(a, b []*With) int {
|
||||
minLen := len(a)
|
||||
if len(b) < minLen {
|
||||
minLen = len(b)
|
||||
}
|
||||
for i := 0; i < minLen; i++ {
|
||||
if cmp := Compare(a[i], b[i]); cmp != 0 {
|
||||
return cmp
|
||||
}
|
||||
}
|
||||
if len(a) < len(b) {
|
||||
return -1
|
||||
} else if len(b) < len(a) {
|
||||
return 1
|
||||
}
|
||||
return 0
|
||||
return v1.Compare(a, b)
|
||||
}
|
||||
|
||||
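The `Compare` switch above works in two stages: `sortOrder` ranks each AST type so values of different types order by type first, and only values of the same type fall through to a type-specific comparison. A minimal, self-contained sketch of that pattern, using hypothetical `null`/`boolean`/`number` stand-ins rather than OPA's real AST types:

```go
package main

import "fmt"

// Hypothetical value types standing in for OPA's Null, Boolean, Number.
type value interface{ isValue() }

type null struct{}
type boolean bool
type number float64

func (null) isValue()    {}
func (boolean) isValue() {}
func (number) isValue()  {}

// sortOrder assigns each type a rank, mirroring the sortOrder switch
// in the diff: values of different types compare by rank alone.
func sortOrder(v value) int {
	switch v.(type) {
	case null:
		return 0
	case boolean:
		return 1
	case number:
		return 2
	}
	panic(fmt.Sprintf("illegal value: %T", v))
}

// compare orders two values: unequal types by rank, equal types by value.
func compare(a, b value) int {
	if sa, sb := sortOrder(a), sortOrder(b); sa != sb {
		if sa < sb {
			return -1
		}
		return 1
	}
	switch x := a.(type) {
	case null:
		return 0
	case boolean:
		y := b.(boolean)
		switch {
		case x == y:
			return 0
		case !bool(x): // false sorts before true
			return -1
		}
		return 1
	case number:
		y := b.(number)
		switch {
		case x < y:
			return -1
		case x > y:
			return 1
		}
		return 0
	}
	panic("unreachable")
}

func main() {
	fmt.Println(compare(null{}, number(1)))             // null sorts before numbers
	fmt.Println(compare(number(2), number(2)))          // equal
	fmt.Println(compare(boolean(true), boolean(false))) // true sorts after false
}
```

The fixed type ranks give a total order across heterogeneous values, which is what lets `termSlice.Less` sort mixed-type term slices deterministically.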
5811	vendor/github.com/open-policy-agent/opa/ast/compile.go (generated, vendored)
File diff suppressed because it is too large

34	vendor/github.com/open-policy-agent/opa/ast/compilehelper.go (generated, vendored)
@@ -4,41 +4,29 @@
package ast

import v1 "github.com/open-policy-agent/opa/v1/ast"

// CompileModules takes a set of Rego modules represented as strings and
// compiles them for evaluation. The keys of the map are used as filenames.
func CompileModules(modules map[string]string) (*Compiler, error) {
	return CompileModulesWithOpt(modules, CompileOpts{})
	return CompileModulesWithOpt(modules, CompileOpts{
		ParserOptions: ParserOptions{
			RegoVersion: DefaultRegoVersion,
		},
	})
}

// CompileOpts defines a set of options for the compiler.
type CompileOpts struct {
	EnablePrintStatements bool
	ParserOptions         ParserOptions
}
type CompileOpts = v1.CompileOpts

// CompileModulesWithOpt takes a set of Rego modules represented as strings and
// compiles them for evaluation. The keys of the map are used as filenames.
func CompileModulesWithOpt(modules map[string]string, opts CompileOpts) (*Compiler, error) {

	parsed := make(map[string]*Module, len(modules))

	for f, module := range modules {
		var pm *Module
		var err error
		if pm, err = ParseModuleWithOpts(f, module, opts.ParserOptions); err != nil {
			return nil, err
		}
		parsed[f] = pm
	if opts.ParserOptions.RegoVersion == RegoUndefined {
		opts.ParserOptions.RegoVersion = DefaultRegoVersion
	}

	compiler := NewCompiler().WithEnablePrintStatements(opts.EnablePrintStatements)
	compiler.Compile(parsed)

	if compiler.Failed() {
		return nil, compiler.Errors
	}

	return compiler, nil
	return v1.CompileModulesWithOpt(modules, opts)
}

// MustCompileModules compiles a set of Rego modules represented as strings. If
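The commit's recurring move is visible here: the v0 implementation of `CompileOpts` is deleted and replaced by `type CompileOpts = v1.CompileOpts`, a Go alias declaration, so existing v0 call sites keep compiling against the v1 definitions. A self-contained sketch of why an alias (rather than a new named type) makes that seamless; `v1CompileOpts` and `compileWithV1` are hypothetical stand-ins for the real v1 package:

```go
package main

import "fmt"

// Stand-in for the canonical v1 type (in the real commit it lives in
// github.com/open-policy-agent/opa/v1/ast).
type v1CompileOpts struct {
	EnablePrintStatements bool
}

// An alias declaration, not a type definition: CompileOpts and
// v1CompileOpts denote the *same* type, so no conversion is ever needed.
type CompileOpts = v1CompileOpts

// Hypothetical v1 API that only knows about the v1 type.
func compileWithV1(opts v1CompileOpts) string {
	if opts.EnablePrintStatements {
		return "compiled with print"
	}
	return "compiled"
}

func main() {
	// Old code keeps constructing the v0 name...
	old := CompileOpts{EnablePrintStatements: true}
	// ...and passes it to the v1 API without any conversion.
	fmt.Println(compileWithV1(old))
}
```

Had the v0 package used `type CompileOpts v1CompileOpts` (a definition) instead, every cross-package call would need an explicit conversion, breaking the backward-compatibility shim this upgrade relies on.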
42	vendor/github.com/open-policy-agent/opa/ast/conflicts.go (generated, vendored)
@@ -5,49 +5,11 @@
package ast

import (
	"strings"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// CheckPathConflicts returns a set of errors indicating paths that
// are in conflict with the result of the provided callable.
func CheckPathConflicts(c *Compiler, exists func([]string) (bool, error)) Errors {
	var errs Errors

	root := c.RuleTree.Child(DefaultRootDocument.Value)
	if root == nil {
		return nil
	}

	for _, node := range root.Children {
		errs = append(errs, checkDocumentConflicts(node, exists, nil)...)
	}

	return errs
}

func checkDocumentConflicts(node *TreeNode, exists func([]string) (bool, error), path []string) Errors {

	switch key := node.Key.(type) {
	case String:
		path = append(path, string(key))
	default: // other key types cannot conflict with data
		return nil
	}

	if len(node.Values) > 0 {
		s := strings.Join(path, "/")
		if ok, err := exists(path); err != nil {
			return Errors{NewError(CompileErr, node.Values[0].(*Rule).Loc(), "conflict check for data path %v: %v", s, err.Error())}
		} else if ok {
			return Errors{NewError(CompileErr, node.Values[0].(*Rule).Loc(), "conflicting rule for data path %v found", s)}
		}
	}

	var errs Errors

	for _, child := range node.Children {
		errs = append(errs, checkDocumentConflicts(child, exists, path)...)
	}

	return errs
	return v1.CheckPathConflicts(c, exists)
}
36	vendor/github.com/open-policy-agent/opa/ast/doc.go (generated, vendored)
@@ -1,36 +1,8 @@
// Copyright 2016 The OPA Authors. All rights reserved.
// Copyright 2024 The OPA Authors. All rights reserved.
// Use of this source code is governed by an Apache2
// license that can be found in the LICENSE file.

// Package ast declares Rego syntax tree types and also includes a parser and compiler for preparing policies for execution in the policy engine.
//
// Rego policies are defined using a relatively small set of types: modules, package and import declarations, rules, expressions, and terms. At their core, policies consist of rules that are defined by one or more expressions over documents available to the policy engine. The expressions are defined by intrinsic values (terms) such as strings, objects, variables, etc.
//
// Rego policies are typically defined in text files and then parsed and compiled by the policy engine at runtime. The parsing stage takes the text or string representation of the policy and converts it into an abstract syntax tree (AST) that consists of the types mentioned above. The AST is organized as follows:
//
//	Module
//	 |
//	 +--- Package (Reference)
//	 |
//	 +--- Imports
//	 |     |
//	 |     +--- Import (Term)
//	 |
//	 +--- Rules
//	       |
//	       +--- Rule
//	             |
//	             +--- Head
//	             |     |
//	             |     +--- Name (Variable)
//	             |     |
//	             |     +--- Key (Term)
//	             |     |
//	             |     +--- Value (Term)
//	             |
//	             +--- Body
//	                   |
//	                   +--- Expression (Term | Terms | Variable Declaration)
//
// At query time, the policy engine expects policies to have been compiled. The compilation stage takes one or more modules and compiles them into a format that the policy engine supports.
// Deprecated: This package is intended for older projects transitioning from OPA v0.x and will remain for the lifetime of OPA v1.x, but its use is not recommended.
// For newer features and behaviours, such as defaulting to the Rego v1 syntax, use the corresponding components in the [github.com/open-policy-agent/opa/v1] package instead.
// See https://www.openpolicyagent.org/docs/latest/v0-compatibility/ for more information.
package ast
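The package comment's tree diagram (Module holding a package reference, imports, and rules; each rule splitting into a head and a body of expressions) can be approximated by a few plain structs. This is a hypothetical, heavily simplified mirror for orientation only; the real `ast` types carry terms, locations, and many more fields:

```go
package main

import "fmt"

// Module mirrors the top of the diagram: package, imports, rules.
type Module struct {
	Package string
	Imports []string
	Rules   []Rule
}

// Rule splits into a head and a body, as in the doc comment.
type Rule struct {
	Head Head
	Body []string // expressions, flattened to strings for this sketch
}

// Head carries the name plus optional key and value terms.
type Head struct {
	Name  string
	Key   string
	Value string
}

func main() {
	m := Module{
		Package: "data.example",
		Imports: []string{"input"},
		Rules: []Rule{{
			Head: Head{Name: "allow", Value: "true"},
			Body: []string{`input.user == "alice"`},
		}},
	}
	fmt.Println(m.Package, len(m.Rules))
}
```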
518	vendor/github.com/open-policy-agent/opa/ast/env.go (generated, vendored)
@@ -5,522 +5,8 @@
package ast

import (
	"fmt"
	"strings"

	"github.com/open-policy-agent/opa/types"
	"github.com/open-policy-agent/opa/util"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// TypeEnv contains type info for static analysis such as type checking.
type TypeEnv struct {
	tree       *typeTreeNode
	next       *TypeEnv
	newChecker func() *typeChecker
}

// newTypeEnv returns an empty TypeEnv. The constructor is not exported because
// type environments should only be created by the type checker.
func newTypeEnv(f func() *typeChecker) *TypeEnv {
	return &TypeEnv{
		tree:       newTypeTree(),
		newChecker: f,
	}
}

// Get returns the type of x.
func (env *TypeEnv) Get(x interface{}) types.Type {

	if term, ok := x.(*Term); ok {
		x = term.Value
	}

	switch x := x.(type) {

	// Scalars.
	case Null:
		return types.NewNull()
	case Boolean:
		return types.NewBoolean()
	case Number:
		return types.NewNumber()
	case String:
		return types.NewString()

	// Composites.
	case *Array:
		static := make([]types.Type, x.Len())
		for i := range static {
			tpe := env.Get(x.Elem(i).Value)
			static[i] = tpe
		}

		var dynamic types.Type
		if len(static) == 0 {
			dynamic = types.A
		}

		return types.NewArray(static, dynamic)

	case *lazyObj:
		return env.Get(x.force())
	case *object:
		static := []*types.StaticProperty{}
		var dynamic *types.DynamicProperty

		x.Foreach(func(k, v *Term) {
			if IsConstant(k.Value) {
				kjson, err := JSON(k.Value)
				if err == nil {
					tpe := env.Get(v)
					static = append(static, types.NewStaticProperty(kjson, tpe))
					return
				}
			}
			// Can't handle it as a static property, fallback to dynamic
			typeK := env.Get(k.Value)
			typeV := env.Get(v.Value)
			dynamic = types.NewDynamicProperty(typeK, typeV)
		})

		if len(static) == 0 && dynamic == nil {
			dynamic = types.NewDynamicProperty(types.A, types.A)
		}

		return types.NewObject(static, dynamic)

	case Set:
		var tpe types.Type
		x.Foreach(func(elem *Term) {
			other := env.Get(elem.Value)
			tpe = types.Or(tpe, other)
		})
		if tpe == nil {
			tpe = types.A
		}
		return types.NewSet(tpe)

	// Comprehensions.
	case *ArrayComprehension:
		cpy, errs := env.newChecker().CheckBody(env, x.Body)
		if len(errs) == 0 {
			return types.NewArray(nil, cpy.Get(x.Term))
		}
		return nil
	case *ObjectComprehension:
		cpy, errs := env.newChecker().CheckBody(env, x.Body)
		if len(errs) == 0 {
			return types.NewObject(nil, types.NewDynamicProperty(cpy.Get(x.Key), cpy.Get(x.Value)))
		}
		return nil
	case *SetComprehension:
		cpy, errs := env.newChecker().CheckBody(env, x.Body)
		if len(errs) == 0 {
			return types.NewSet(cpy.Get(x.Term))
		}
		return nil

	// Refs.
	case Ref:
		return env.getRef(x)

	// Vars.
	case Var:
		if node := env.tree.Child(x); node != nil {
			return node.Value()
		}
		if env.next != nil {
			return env.next.Get(x)
		}
		return nil

	// Calls.
	case Call:
		return nil

	default:
		panic("unreachable")
	}
}

func (env *TypeEnv) getRef(ref Ref) types.Type {

	node := env.tree.Child(ref[0].Value)
	if node == nil {
		return env.getRefFallback(ref)
	}

	return env.getRefRec(node, ref, ref[1:])
}

func (env *TypeEnv) getRefFallback(ref Ref) types.Type {

	if env.next != nil {
		return env.next.Get(ref)
	}

	if RootDocumentNames.Contains(ref[0]) {
		return types.A
	}

	return nil
}

func (env *TypeEnv) getRefRec(node *typeTreeNode, ref, tail Ref) types.Type {
	if len(tail) == 0 {
		return env.getRefRecExtent(node)
	}

	if node.Leaf() {
		if node.children.Len() > 0 {
			if child := node.Child(tail[0].Value); child != nil {
				return env.getRefRec(child, ref, tail[1:])
			}
		}
		return selectRef(node.Value(), tail)
	}

	if !IsConstant(tail[0].Value) {
		return selectRef(env.getRefRecExtent(node), tail)
	}

	child := node.Child(tail[0].Value)
	if child == nil {
		return env.getRefFallback(ref)
	}

	return env.getRefRec(child, ref, tail[1:])
}

func (env *TypeEnv) getRefRecExtent(node *typeTreeNode) types.Type {

	if node.Leaf() {
		return node.Value()
	}

	children := []*types.StaticProperty{}

	node.Children().Iter(func(k, v util.T) bool {
		key := k.(Value)
		child := v.(*typeTreeNode)

		tpe := env.getRefRecExtent(child)

		// NOTE(sr): Converting to Golang-native types here is an extension of what we did
		// before -- only supporting strings. But since we cannot differentiate sets and arrays
		// that way, we could reconsider.
		switch key.(type) {
		case String, Number, Boolean: // skip anything else
			propKey, err := JSON(key)
			if err != nil {
				panic(fmt.Errorf("unreachable, ValueToInterface: %w", err))
			}
			children = append(children, types.NewStaticProperty(propKey, tpe))
		}
		return false
	})

	// TODO(tsandall): for now, these objects can have any dynamic properties
	// because we don't have schema for base docs. Once schemas are supported
	// we can improve this.
	return types.NewObject(children, types.NewDynamicProperty(types.S, types.A))
}

func (env *TypeEnv) wrap() *TypeEnv {
	cpy := *env
	cpy.next = env
	cpy.tree = newTypeTree()
	return &cpy
}

// typeTreeNode is used to store type information in a tree.
type typeTreeNode struct {
	key      Value
	value    types.Type
	children *util.HashMap
}

func newTypeTree() *typeTreeNode {
	return &typeTreeNode{
		key:      nil,
		value:    nil,
		children: util.NewHashMap(valueEq, valueHash),
	}
}

func (n *typeTreeNode) Child(key Value) *typeTreeNode {
	value, ok := n.children.Get(key)
	if !ok {
		return nil
	}
	return value.(*typeTreeNode)
}

func (n *typeTreeNode) Children() *util.HashMap {
	return n.children
}

func (n *typeTreeNode) Get(path Ref) types.Type {
	curr := n
	for _, term := range path {
		child, ok := curr.children.Get(term.Value)
		if !ok {
			return nil
		}
		curr = child.(*typeTreeNode)
	}
	return curr.Value()
}

func (n *typeTreeNode) Leaf() bool {
	return n.value != nil
}

func (n *typeTreeNode) PutOne(key Value, tpe types.Type) {
	c, ok := n.children.Get(key)

	var child *typeTreeNode
	if !ok {
		child = newTypeTree()
		child.key = key
		n.children.Put(key, child)
	} else {
		child = c.(*typeTreeNode)
	}

	child.value = tpe
}

func (n *typeTreeNode) Put(path Ref, tpe types.Type) {
	curr := n
	for _, term := range path {
		c, ok := curr.children.Get(term.Value)

		var child *typeTreeNode
		if !ok {
			child = newTypeTree()
			child.key = term.Value
			curr.children.Put(child.key, child)
		} else {
			child = c.(*typeTreeNode)
		}

		curr = child
	}
	curr.value = tpe
}

// Insert inserts tpe at path in the tree, but also merges the value into any types.Object present along that path.
// If a types.Object is inserted, any leafs already present further down the tree are merged into the inserted object.
// path must be ground.
func (n *typeTreeNode) Insert(path Ref, tpe types.Type, env *TypeEnv) {
	curr := n
	for i, term := range path {
		c, ok := curr.children.Get(term.Value)

		var child *typeTreeNode
		if !ok {
			child = newTypeTree()
			child.key = term.Value
			curr.children.Put(child.key, child)
		} else {
			child = c.(*typeTreeNode)

			if child.value != nil && i+1 < len(path) {
				// If child has an object value, merge the new value into it.
				if o, ok := child.value.(*types.Object); ok {
					var err error
					child.value, err = insertIntoObject(o, path[i+1:], tpe, env)
					if err != nil {
						panic(fmt.Errorf("unreachable, insertIntoObject: %w", err))
					}
				}
			}
		}

		curr = child
	}

	curr.value = mergeTypes(curr.value, tpe)

	if _, ok := tpe.(*types.Object); ok && curr.children.Len() > 0 {
		// merge all leafs into the inserted object
		leafs := curr.Leafs()
		for p, t := range leafs {
			var err error
			curr.value, err = insertIntoObject(curr.value.(*types.Object), *p, t, env)
			if err != nil {
				panic(fmt.Errorf("unreachable, insertIntoObject: %w", err))
			}
		}
	}
}

// mergeTypes merges the types of 'a' and 'b'. If both are sets, their 'of' types are joined with an types.Or.
// If both are objects, the key types of their dynamic properties are joined with types.Or:s, and their value types
// are recursively merged (using mergeTypes).
// If 'a' and 'b' are both objects, and at least one of them have static properties, they are joined
// with an types.Or, instead of being merged.
// If 'a' is an Any containing an Object, and 'b' is an Object (or vice versa); AND both objects have no
// static properties, they are merged.
// If 'a' and 'b' are different types, they are joined with an types.Or.
func mergeTypes(a, b types.Type) types.Type {
	if a == nil {
		return b
	}

	if b == nil {
		return a
	}

	switch a := a.(type) {
	case *types.Object:
		if bObj, ok := b.(*types.Object); ok && len(a.StaticProperties()) == 0 && len(bObj.StaticProperties()) == 0 {
			if len(a.StaticProperties()) > 0 || len(bObj.StaticProperties()) > 0 {
				return types.Or(a, bObj)
			}

			aDynProps := a.DynamicProperties()
			bDynProps := bObj.DynamicProperties()
			dynProps := types.NewDynamicProperty(
				types.Or(aDynProps.Key, bDynProps.Key),
				mergeTypes(aDynProps.Value, bDynProps.Value))
			return types.NewObject(nil, dynProps)
		} else if bAny, ok := b.(types.Any); ok && len(a.StaticProperties()) == 0 {
			// If a is an object type with no static components ...
			for _, t := range bAny {
				if tObj, ok := t.(*types.Object); ok && len(tObj.StaticProperties()) == 0 {
					// ... and b is a types.Any containing an object with no static components, we merge them.
					aDynProps := a.DynamicProperties()
					tDynProps := tObj.DynamicProperties()
					tDynProps.Key = types.Or(tDynProps.Key, aDynProps.Key)
					tDynProps.Value = types.Or(tDynProps.Value, aDynProps.Value)
					return bAny
				}
			}
		}
	case *types.Set:
		if bSet, ok := b.(*types.Set); ok {
			return types.NewSet(types.Or(a.Of(), bSet.Of()))
		}
	case types.Any:
		if _, ok := b.(types.Any); !ok {
			return mergeTypes(b, a)
		}
	}

	return types.Or(a, b)
}

func (n *typeTreeNode) String() string {
	b := strings.Builder{}

	if k := n.key; k != nil {
		b.WriteString(k.String())
	} else {
		b.WriteString("-")
	}

	if v := n.value; v != nil {
		b.WriteString(": ")
		b.WriteString(v.String())
	}

	n.children.Iter(func(_, v util.T) bool {
		if child, ok := v.(*typeTreeNode); ok {
			b.WriteString("\n\t+ ")
			s := child.String()
			s = strings.ReplaceAll(s, "\n", "\n\t")
			b.WriteString(s)
		}
		return false
	})

	return b.String()
}

func insertIntoObject(o *types.Object, path Ref, tpe types.Type, env *TypeEnv) (*types.Object, error) {
	if len(path) == 0 {
		return o, nil
	}

	key := env.Get(path[0].Value)

	if len(path) == 1 {
		var dynamicProps *types.DynamicProperty
		if dp := o.DynamicProperties(); dp != nil {
			dynamicProps = types.NewDynamicProperty(types.Or(o.DynamicProperties().Key, key), types.Or(o.DynamicProperties().Value, tpe))
		} else {
			dynamicProps = types.NewDynamicProperty(key, tpe)
		}
		return types.NewObject(o.StaticProperties(), dynamicProps), nil
	}

	child, err := insertIntoObject(types.NewObject(nil, nil), path[1:], tpe, env)
	if err != nil {
		return nil, err
	}

	var dynamicProps *types.DynamicProperty
	if dp := o.DynamicProperties(); dp != nil {
		dynamicProps = types.NewDynamicProperty(types.Or(o.DynamicProperties().Key, key), types.Or(o.DynamicProperties().Value, child))
	} else {
		dynamicProps = types.NewDynamicProperty(key, child)
	}
	return types.NewObject(o.StaticProperties(), dynamicProps), nil
}

func (n *typeTreeNode) Leafs() map[*Ref]types.Type {
	leafs := map[*Ref]types.Type{}
	n.children.Iter(func(_, v util.T) bool {
		collectLeafs(v.(*typeTreeNode), nil, leafs)
		return false
	})
	return leafs
}

func collectLeafs(n *typeTreeNode, path Ref, leafs map[*Ref]types.Type) {
	nPath := append(path, NewTerm(n.key))
	if n.Leaf() {
		leafs[&nPath] = n.Value()
		return
	}
	n.children.Iter(func(_, v util.T) bool {
		collectLeafs(v.(*typeTreeNode), nPath, leafs)
		return false
	})
}

func (n *typeTreeNode) Value() types.Type {
	return n.value
}

// selectConstant returns the attribute of the type referred to by the term. If
// the attribute type cannot be determined, nil is returned.
func selectConstant(tpe types.Type, term *Term) types.Type {
	x, err := JSON(term.Value)
	if err == nil {
		return types.Select(tpe, x)
	}
	return nil
}

// selectRef returns the type of the nested attribute referred to by ref. If
// the attribute type cannot be determined, nil is returned. If the ref
// contains vars or refs, then the returned type will be a union of the
// possible types.
func selectRef(tpe types.Type, ref Ref) types.Type {

	if tpe == nil || len(ref) == 0 {
		return tpe
	}

	head, tail := ref[0], ref[1:]

	switch head.Value.(type) {
	case Var, Ref, *Array, Object, Set:
		return selectRef(types.Values(tpe), tail)
	default:
		return selectRef(selectConstant(tpe, head), tail)
	}
}
type TypeEnv = v1.TypeEnv
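The core data structure deleted here, `typeTreeNode`, is a path-indexed tree: `Put` walks a ref segment by segment, creating intermediate nodes on demand, and `Get` walks the same segments to read the type back. A self-contained sketch of that shape, with string keys and string "types" in place of `ast.Value` and `types.Type` (both substitutions are simplifications, not the real API):

```go
package main

import "fmt"

// typeNode is a simplified stand-in for env.go's typeTreeNode: a tree
// keyed by path segments with a type recorded where a value was put.
type typeNode struct {
	value    string // stands in for types.Type; "" means no type here
	children map[string]*typeNode
}

func newTypeNode() *typeNode {
	return &typeNode{children: map[string]*typeNode{}}
}

// Put records a type at path, creating intermediate nodes on demand,
// like typeTreeNode.Put in the diff above.
func (n *typeNode) Put(path []string, tpe string) {
	curr := n
	for _, seg := range path {
		child, ok := curr.children[seg]
		if !ok {
			child = newTypeNode()
			curr.children[seg] = child
		}
		curr = child
	}
	curr.value = tpe
}

// Get walks path and returns the recorded type, or "" if absent,
// like typeTreeNode.Get.
func (n *typeNode) Get(path []string) string {
	curr := n
	for _, seg := range path {
		child, ok := curr.children[seg]
		if !ok {
			return ""
		}
		curr = child
	}
	return curr.value
}

func main() {
	root := newTypeNode()
	root.Put([]string{"input", "user"}, "string")
	fmt.Println(root.Get([]string{"input", "user"}))
	fmt.Println(root.Get([]string{"input", "missing"}) == "")
}
```

The real tree uses `util.HashMap` keyed on arbitrary AST values (hence `valueEq`/`valueHash`) and layers merging logic (`Insert`, `mergeTypes`) on top, but the Put/Get walk is the same skeleton.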
99	vendor/github.com/open-policy-agent/opa/ast/errors.go (generated, vendored)
@@ -5,119 +5,42 @@
package ast

import (
	"fmt"
	"sort"
	"strings"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// Errors represents a series of errors encountered during parsing, compiling,
// etc.
type Errors []*Error

func (e Errors) Error() string {

	if len(e) == 0 {
		return "no error(s)"
	}

	if len(e) == 1 {
		return fmt.Sprintf("1 error occurred: %v", e[0].Error())
	}

	s := make([]string, len(e))
	for i, err := range e {
		s[i] = err.Error()
	}

	return fmt.Sprintf("%d errors occurred:\n%s", len(e), strings.Join(s, "\n"))
}

// Sort sorts the error slice by location. If the locations are equal then the
// error message is compared.
func (e Errors) Sort() {
	sort.Slice(e, func(i, j int) bool {
		a := e[i]
		b := e[j]

		if cmp := a.Location.Compare(b.Location); cmp != 0 {
			return cmp < 0
		}

		return a.Error() < b.Error()
	})
}
type Errors = v1.Errors

const (
	// ParseErr indicates an unclassified parse error occurred.
	ParseErr = "rego_parse_error"
	ParseErr = v1.ParseErr

	// CompileErr indicates an unclassified compile error occurred.
	CompileErr = "rego_compile_error"
	CompileErr = v1.CompileErr

	// TypeErr indicates a type error was caught.
	TypeErr = "rego_type_error"
	TypeErr = v1.TypeErr

	// UnsafeVarErr indicates an unsafe variable was found during compilation.
	UnsafeVarErr = "rego_unsafe_var_error"
	UnsafeVarErr = v1.UnsafeVarErr

	// RecursionErr indicates recursion was found during compilation.
	RecursionErr = "rego_recursion_error"
	RecursionErr = v1.RecursionErr
)

// IsError returns true if err is an AST error with code.
func IsError(code string, err error) bool {
	if err, ok := err.(*Error); ok {
		return err.Code == code
	}
	return false
	return v1.IsError(code, err)
}

// ErrorDetails defines the interface for detailed error messages.
type ErrorDetails interface {
	Lines() []string
}
type ErrorDetails = v1.ErrorDetails

// Error represents a single error caught during parsing, compiling, etc.
type Error struct {
	Code     string       `json:"code"`
	Message  string       `json:"message"`
	Location *Location    `json:"location,omitempty"`
	Details  ErrorDetails `json:"details,omitempty"`
}

func (e *Error) Error() string {

	var prefix string

	if e.Location != nil {

		if len(e.Location.File) > 0 {
			prefix += e.Location.File + ":" + fmt.Sprint(e.Location.Row)
		} else {
			prefix += fmt.Sprint(e.Location.Row) + ":" + fmt.Sprint(e.Location.Col)
		}
	}

	msg := fmt.Sprintf("%v: %v", e.Code, e.Message)

	if len(prefix) > 0 {
		msg = prefix + ": " + msg
	}

	if e.Details != nil {
		for _, line := range e.Details.Lines() {
			msg += "\n\t" + line
		}
	}

	return msg
}
type Error = v1.Error

// NewError returns a new Error object.
func NewError(code string, loc *Location, f string, a ...interface{}) *Error {
	return &Error{
		Code:     code,
		Location: loc,
		Message:  fmt.Sprintf(f, a...),
	}
	return v1.NewError(code, loc, f, a...)
}
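The deleted v0 `Error.Error()` builds a location prefix (`file:row` when a file name is known, `row:col` otherwise) before the `code: message` pair. A self-contained sketch of that formatting, with a hypothetical `astError`/`location` pair standing in for `ast.Error` and `ast.Location`:

```go
package main

import "fmt"

// location is a hypothetical stand-in for ast.Location.
type location struct {
	File string
	Row  int
	Col  int
}

// astError is a hypothetical stand-in for ast.Error.
type astError struct {
	Code     string
	Message  string
	Location *location
}

// Error prefixes the message with file:row, or row:col when the file
// name is empty, following the formatting in the diff above.
func (e *astError) Error() string {
	var prefix string
	if e.Location != nil {
		if e.Location.File != "" {
			prefix = fmt.Sprintf("%s:%d", e.Location.File, e.Location.Row)
		} else {
			prefix = fmt.Sprintf("%d:%d", e.Location.Row, e.Location.Col)
		}
	}
	msg := fmt.Sprintf("%v: %v", e.Code, e.Message)
	if prefix != "" {
		msg = prefix + ": " + msg
	}
	return msg
}

func main() {
	err := &astError{
		Code:     "rego_parse_error",
		Message:  "unexpected eof token",
		Location: &location{File: "policy.rego", Row: 3},
	}
	fmt.Println(err.Error()) // policy.rego:3: rego_parse_error: unexpected eof token
}
```

The real type additionally appends indented `Details.Lines()` below the message; this sketch omits that to stay minimal.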
896	vendor/github.com/open-policy-agent/opa/ast/index.go (generated, vendored)
@@ -5,904 +5,16 @@
package ast

import (
	"fmt"
	"sort"
	"strings"

	"github.com/open-policy-agent/opa/util"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// RuleIndex defines the interface for rule indices.
type RuleIndex interface {

	// Build tries to construct an index for the given rules. If the index was
	// constructed, it returns true, otherwise false.
	Build(rules []*Rule) bool

	// Lookup searches the index for rules that will match the provided
	// resolver. If the resolver returns an error, it is returned via err.
	Lookup(resolver ValueResolver) (*IndexResult, error)

	// AllRules traverses the index and returns all rules that will match
	// the provided resolver without any optimizations (effectively with
	// indexing disabled). If the resolver returns an error, it is returned
	// via err.
	AllRules(resolver ValueResolver) (*IndexResult, error)
}
type RuleIndex v1.RuleIndex

// IndexResult contains the result of an index lookup.
type IndexResult struct {
	Kind           RuleKind
	Rules          []*Rule
	Else           map[*Rule][]*Rule
	Default        *Rule
	EarlyExit      bool
	OnlyGroundRefs bool
}
type IndexResult = v1.IndexResult

// NewIndexResult returns a new IndexResult object.
func NewIndexResult(kind RuleKind) *IndexResult {
	return &IndexResult{
		Kind: kind,
		Else: map[*Rule][]*Rule{},
	}
}

// Empty returns true if there are no rules to evaluate.
func (ir *IndexResult) Empty() bool {
	return len(ir.Rules) == 0 && ir.Default == nil
}

type baseDocEqIndex struct {
	skipIndexing   Set
	isVirtual      func(Ref) bool
	root           *trieNode
	defaultRule    *Rule
	kind           RuleKind
	onlyGroundRefs bool
}

func newBaseDocEqIndex(isVirtual func(Ref) bool) *baseDocEqIndex {
	return &baseDocEqIndex{
		skipIndexing:   NewSet(NewTerm(InternalPrint.Ref())),
		isVirtual:      isVirtual,
		root:           newTrieNodeImpl(),
		onlyGroundRefs: true,
	}
}

func (i *baseDocEqIndex) Build(rules []*Rule) bool {
	if len(rules) == 0 {
		return false
	}

	i.kind = rules[0].Head.RuleKind()
	indices := newrefindices(i.isVirtual)

	// build indices for each rule.
	for idx := range rules {
		WalkRules(rules[idx], func(rule *Rule) bool {
			if rule.Default {
				i.defaultRule = rule
				return false
			}
			if i.onlyGroundRefs {
				i.onlyGroundRefs = rule.Head.Reference.IsGround()
			}
			var skip bool
			for _, expr := range rule.Body {
				if op := expr.OperatorTerm(); op != nil && i.skipIndexing.Contains(op) {
					skip = true
					break
				}
			}
			if !skip {
				for _, expr := range rule.Body {
					indices.Update(rule, expr)
				}
			}
			return false
		})
	}

	// build trie out of indices.
	for idx := range rules {
		var prio int
		WalkRules(rules[idx], func(rule *Rule) bool {
			if rule.Default {
				return false
			}
			node := i.root
			if indices.Indexed(rule) {
				for _, ref := range indices.Sorted() {
					node = node.Insert(ref, indices.Value(rule, ref), indices.Mapper(rule, ref))
				}
			}
			// Insert rule into trie with (insertion order, priority order)
			// tuple. Retaining the insertion order allows us to return rules
			// in the order they were passed to this function.
			node.append([...]int{idx, prio}, rule)
			prio++
			return false
		})
	}
	return true
}

func (i *baseDocEqIndex) Lookup(resolver ValueResolver) (*IndexResult, error) {

	tr := newTrieTraversalResult()

	err := i.root.Traverse(resolver, tr)
	if err != nil {
		return nil, err
	}

	result := NewIndexResult(i.kind)
|
||||
result.Default = i.defaultRule
|
||||
result.OnlyGroundRefs = i.onlyGroundRefs
|
||||
result.Rules = make([]*Rule, 0, len(tr.ordering))
|
||||
|
||||
for _, pos := range tr.ordering {
|
||||
sort.Slice(tr.unordered[pos], func(i, j int) bool {
|
||||
return tr.unordered[pos][i].prio[1] < tr.unordered[pos][j].prio[1]
|
||||
})
|
||||
nodes := tr.unordered[pos]
|
||||
root := nodes[0].rule
|
||||
|
||||
result.Rules = append(result.Rules, root)
|
||||
if len(nodes) > 1 {
|
||||
result.Else[root] = make([]*Rule, len(nodes)-1)
|
||||
for i := 1; i < len(nodes); i++ {
|
||||
result.Else[root][i-1] = nodes[i].rule
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
result.EarlyExit = tr.values.Len() == 1 && tr.values.Slice()[0].IsGround()
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
func (i *baseDocEqIndex) AllRules(_ ValueResolver) (*IndexResult, error) {
|
||||
tr := newTrieTraversalResult()
|
||||
|
||||
// Walk over the rule trie and accumulate _all_ rules
|
||||
rw := &ruleWalker{result: tr}
|
||||
i.root.Do(rw)
|
||||
|
||||
result := NewIndexResult(i.kind)
|
||||
result.Default = i.defaultRule
|
||||
result.OnlyGroundRefs = i.onlyGroundRefs
|
||||
result.Rules = make([]*Rule, 0, len(tr.ordering))
|
||||
|
||||
for _, pos := range tr.ordering {
|
||||
sort.Slice(tr.unordered[pos], func(i, j int) bool {
|
||||
return tr.unordered[pos][i].prio[1] < tr.unordered[pos][j].prio[1]
|
||||
})
|
||||
nodes := tr.unordered[pos]
|
||||
root := nodes[0].rule
|
||||
result.Rules = append(result.Rules, root)
|
||||
if len(nodes) > 1 {
|
||||
result.Else[root] = make([]*Rule, len(nodes)-1)
|
||||
for i := 1; i < len(nodes); i++ {
|
||||
result.Else[root][i-1] = nodes[i].rule
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
result.EarlyExit = tr.values.Len() == 1 && tr.values.Slice()[0].IsGround()
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
type ruleWalker struct {
|
||||
result *trieTraversalResult
|
||||
}
|
||||
|
||||
func (r *ruleWalker) Do(x interface{}) trieWalker {
|
||||
tn := x.(*trieNode)
|
||||
r.result.Add(tn)
|
||||
return r
|
||||
}
|
||||
|
||||
type valueMapper struct {
|
||||
Key string
|
||||
MapValue func(Value) Value
|
||||
}
|
||||
|
||||
type refindex struct {
|
||||
Ref Ref
|
||||
Value Value
|
||||
Mapper *valueMapper
|
||||
}
|
||||
|
||||
type refindices struct {
|
||||
isVirtual func(Ref) bool
|
||||
rules map[*Rule][]*refindex
|
||||
frequency *util.HashMap
|
||||
sorted []Ref
|
||||
}
|
||||
|
||||
func newrefindices(isVirtual func(Ref) bool) *refindices {
|
||||
return &refindices{
|
||||
isVirtual: isVirtual,
|
||||
rules: map[*Rule][]*refindex{},
|
||||
frequency: util.NewHashMap(func(a, b util.T) bool {
|
||||
r1, r2 := a.(Ref), b.(Ref)
|
||||
return r1.Equal(r2)
|
||||
}, func(x util.T) int {
|
||||
return x.(Ref).Hash()
|
||||
}),
|
||||
}
|
||||
}
|
||||
|
||||
// Update attempts to update the refindices for the given expression in the
|
||||
// given rule. If the expression cannot be indexed the update does not affect
|
||||
// the indices.
|
||||
func (i *refindices) Update(rule *Rule, expr *Expr) {
|
||||
|
||||
if expr.Negated {
|
||||
return
|
||||
}
|
||||
|
||||
if len(expr.With) > 0 {
|
||||
// NOTE(tsandall): In the future, we may need to consider expressions
|
||||
// that have with statements applied to them.
|
||||
return
|
||||
}
|
||||
|
||||
op := expr.Operator()
|
||||
|
||||
switch {
|
||||
case op.Equal(Equality.Ref()):
|
||||
i.updateEq(rule, expr)
|
||||
|
||||
case op.Equal(Equal.Ref()) && len(expr.Operands()) == 2:
|
||||
// NOTE(tsandall): if equal() is called with more than two arguments the
|
||||
// output value is being captured in which case the indexer cannot
|
||||
// exclude the rule if the equal() call would return false (because the
|
||||
// false value must still be produced.)
|
||||
i.updateEq(rule, expr)
|
||||
|
||||
case op.Equal(GlobMatch.Ref()) && len(expr.Operands()) == 3:
|
||||
// NOTE(sr): Same as with equal() above -- 4 operands means the output
|
||||
// of `glob.match` is captured and the rule can thus not be excluded.
|
||||
i.updateGlobMatch(rule, expr)
|
||||
}
|
||||
}
|
||||
|
||||
// Sorted returns a sorted list of references that the indices were built from.
|
||||
// References that appear more frequently in the indexed rules are ordered
|
||||
// before less frequently appearing references.
|
||||
func (i *refindices) Sorted() []Ref {
|
||||
|
||||
if i.sorted == nil {
|
||||
counts := make([]int, 0, i.frequency.Len())
|
||||
i.sorted = make([]Ref, 0, i.frequency.Len())
|
||||
|
||||
i.frequency.Iter(func(k, v util.T) bool {
|
||||
counts = append(counts, v.(int))
|
||||
i.sorted = append(i.sorted, k.(Ref))
|
||||
return false
|
||||
})
|
||||
|
||||
sort.Slice(i.sorted, func(a, b int) bool {
|
||||
if counts[a] > counts[b] {
|
||||
return true
|
||||
} else if counts[b] > counts[a] {
|
||||
return false
|
||||
}
|
||||
return i.sorted[a][0].Loc().Compare(i.sorted[b][0].Loc()) < 0
|
||||
})
|
||||
}
|
||||
|
||||
return i.sorted
|
||||
}
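The Sorted method above orders candidate references so that references appearing in more indexed rules are visited first, with source location as a tie-break. A minimal standalone sketch of the same frequency ordering (the `refs`/`counts` names are hypothetical stand-ins, not OPA's actual types, and ties here keep input order instead of comparing locations):

```go
package main

import (
	"fmt"
	"sort"
)

// sortByFrequency returns refs ordered by descending occurrence count.
// More frequently indexed references come first, so the trie discriminates
// on the most selective references early.
func sortByFrequency(refs []string, counts map[string]int) []string {
	sorted := append([]string(nil), refs...)
	sort.SliceStable(sorted, func(a, b int) bool {
		return counts[sorted[a]] > counts[sorted[b]]
	})
	return sorted
}

func main() {
	counts := map[string]int{"input.user": 3, "input.method": 1, "input.path": 2}
	fmt.Println(sortByFrequency([]string{"input.method", "input.path", "input.user"}, counts))
	// → [input.user input.path input.method]
}
```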
|
||||
|
||||
func (i *refindices) Indexed(rule *Rule) bool {
|
||||
return len(i.rules[rule]) > 0
|
||||
}
|
||||
|
||||
func (i *refindices) Value(rule *Rule, ref Ref) Value {
|
||||
if index := i.index(rule, ref); index != nil {
|
||||
return index.Value
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (i *refindices) Mapper(rule *Rule, ref Ref) *valueMapper {
|
||||
if index := i.index(rule, ref); index != nil {
|
||||
return index.Mapper
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (i *refindices) updateEq(rule *Rule, expr *Expr) {
|
||||
a, b := expr.Operand(0), expr.Operand(1)
|
||||
args := rule.Head.Args
|
||||
if idx, ok := eqOperandsToRefAndValue(i.isVirtual, args, a, b); ok {
|
||||
i.insert(rule, idx)
|
||||
return
|
||||
}
|
||||
if idx, ok := eqOperandsToRefAndValue(i.isVirtual, args, b, a); ok {
|
||||
i.insert(rule, idx)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
func (i *refindices) updateGlobMatch(rule *Rule, expr *Expr) {
|
||||
args := rule.Head.Args
|
||||
|
||||
delim, ok := globDelimiterToString(expr.Operand(1))
|
||||
if !ok {
|
||||
return
|
||||
}
|
||||
|
||||
if arr := globPatternToArray(expr.Operand(0), delim); arr != nil {
|
||||
// The 3rd operand of glob.match is the value to match. We assume the
|
||||
// 3rd operand was a reference that has been rewritten and bound to a
|
||||
// variable earlier in the query OR a function argument variable.
|
||||
match := expr.Operand(2)
|
||||
if _, ok := match.Value.(Var); ok {
|
||||
var ref Ref
|
||||
for _, other := range i.rules[rule] {
|
||||
if _, ok := other.Value.(Var); ok && other.Value.Compare(match.Value) == 0 {
|
||||
ref = other.Ref
|
||||
}
|
||||
}
|
||||
if ref == nil {
|
||||
for j, arg := range args {
|
||||
if arg.Equal(match) {
|
||||
ref = Ref{FunctionArgRootDocument, IntNumberTerm(j)}
|
||||
}
|
||||
}
|
||||
}
|
||||
if ref != nil {
|
||||
i.insert(rule, &refindex{
|
||||
Ref: ref,
|
||||
Value: arr.Value,
|
||||
Mapper: &valueMapper{
|
||||
Key: delim,
|
||||
MapValue: func(v Value) Value {
|
||||
if s, ok := v.(String); ok {
|
||||
return stringSliceToArray(splitStringEscaped(string(s), delim))
|
||||
}
|
||||
return v
|
||||
},
|
||||
},
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (i *refindices) insert(rule *Rule, index *refindex) {
|
||||
|
||||
count, ok := i.frequency.Get(index.Ref)
|
||||
if !ok {
|
||||
count = 0
|
||||
}
|
||||
|
||||
i.frequency.Put(index.Ref, count.(int)+1)
|
||||
|
||||
for pos, other := range i.rules[rule] {
|
||||
if other.Ref.Equal(index.Ref) {
|
||||
i.rules[rule][pos] = index
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
i.rules[rule] = append(i.rules[rule], index)
|
||||
}
|
||||
|
||||
func (i *refindices) index(rule *Rule, ref Ref) *refindex {
|
||||
for _, index := range i.rules[rule] {
|
||||
if index.Ref.Equal(ref) {
|
||||
return index
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type trieWalker interface {
|
||||
Do(x interface{}) trieWalker
|
||||
}
|
||||
|
||||
type trieTraversalResult struct {
|
||||
unordered map[int][]*ruleNode
|
||||
ordering []int
|
||||
values Set
|
||||
}
|
||||
|
||||
func newTrieTraversalResult() *trieTraversalResult {
|
||||
return &trieTraversalResult{
|
||||
unordered: map[int][]*ruleNode{},
|
||||
values: NewSet(),
|
||||
}
|
||||
}
|
||||
|
||||
func (tr *trieTraversalResult) Add(t *trieNode) {
|
||||
for _, node := range t.rules {
|
||||
root := node.prio[0]
|
||||
nodes, ok := tr.unordered[root]
|
||||
if !ok {
|
||||
tr.ordering = append(tr.ordering, root)
|
||||
}
|
||||
tr.unordered[root] = append(nodes, node)
|
||||
}
|
||||
if t.values != nil {
|
||||
t.values.Foreach(func(v *Term) { tr.values.Add(v) })
|
||||
}
|
||||
}
|
||||
|
||||
type trieNode struct {
|
||||
ref Ref
|
||||
values Set
|
||||
mappers []*valueMapper
|
||||
next *trieNode
|
||||
any *trieNode
|
||||
undefined *trieNode
|
||||
scalars *util.HashMap
|
||||
array *trieNode
|
||||
rules []*ruleNode
|
||||
}
|
||||
|
||||
func (node *trieNode) String() string {
|
||||
var flags []string
|
||||
flags = append(flags, fmt.Sprintf("self:%p", node))
|
||||
if len(node.ref) > 0 {
|
||||
flags = append(flags, node.ref.String())
|
||||
}
|
||||
if node.next != nil {
|
||||
flags = append(flags, fmt.Sprintf("next:%p", node.next))
|
||||
}
|
||||
if node.any != nil {
|
||||
flags = append(flags, fmt.Sprintf("any:%p", node.any))
|
||||
}
|
||||
if node.undefined != nil {
|
||||
flags = append(flags, fmt.Sprintf("undefined:%p", node.undefined))
|
||||
}
|
||||
if node.array != nil {
|
||||
flags = append(flags, fmt.Sprintf("array:%p", node.array))
|
||||
}
|
||||
if node.scalars.Len() > 0 {
|
||||
buf := make([]string, 0, node.scalars.Len())
|
||||
node.scalars.Iter(func(k, v util.T) bool {
|
||||
key := k.(Value)
|
||||
val := v.(*trieNode)
|
||||
buf = append(buf, fmt.Sprintf("scalar(%v):%p", key, val))
|
||||
return false
|
||||
})
|
||||
sort.Strings(buf)
|
||||
flags = append(flags, strings.Join(buf, " "))
|
||||
}
|
||||
if len(node.rules) > 0 {
|
||||
flags = append(flags, fmt.Sprintf("%d rule(s)", len(node.rules)))
|
||||
}
|
||||
if len(node.mappers) > 0 {
|
||||
flags = append(flags, fmt.Sprintf("%d mapper(s)", len(node.mappers)))
|
||||
}
|
||||
if node.values != nil {
|
||||
if l := node.values.Len(); l > 0 {
|
||||
flags = append(flags, fmt.Sprintf("%d value(s)", l))
|
||||
}
|
||||
}
|
||||
return strings.Join(flags, " ")
|
||||
}
|
||||
|
||||
func (node *trieNode) append(prio [2]int, rule *Rule) {
|
||||
node.rules = append(node.rules, &ruleNode{prio, rule})
|
||||
|
||||
if node.values != nil && rule.Head.Value != nil {
|
||||
node.values.Add(rule.Head.Value)
|
||||
return
|
||||
}
|
||||
|
||||
if node.values == nil && rule.Head.DocKind() == CompleteDoc {
|
||||
node.values = NewSet(rule.Head.Value)
|
||||
}
|
||||
}
|
||||
|
||||
type ruleNode struct {
|
||||
prio [2]int
|
||||
rule *Rule
|
||||
}
|
||||
|
||||
func newTrieNodeImpl() *trieNode {
|
||||
return &trieNode{
|
||||
scalars: util.NewHashMap(valueEq, valueHash),
|
||||
}
|
||||
}
|
||||
|
||||
func (node *trieNode) Do(walker trieWalker) {
|
||||
next := walker.Do(node)
|
||||
if next == nil {
|
||||
return
|
||||
}
|
||||
if node.any != nil {
|
||||
node.any.Do(next)
|
||||
}
|
||||
if node.undefined != nil {
|
||||
node.undefined.Do(next)
|
||||
}
|
||||
|
||||
node.scalars.Iter(func(_, v util.T) bool {
|
||||
child := v.(*trieNode)
|
||||
child.Do(next)
|
||||
return false
|
||||
})
|
||||
|
||||
if node.array != nil {
|
||||
node.array.Do(next)
|
||||
}
|
||||
if node.next != nil {
|
||||
node.next.Do(next)
|
||||
}
|
||||
}
|
||||
|
||||
func (node *trieNode) Insert(ref Ref, value Value, mapper *valueMapper) *trieNode {
|
||||
|
||||
if node.next == nil {
|
||||
node.next = newTrieNodeImpl()
|
||||
node.next.ref = ref
|
||||
}
|
||||
|
||||
if mapper != nil {
|
||||
node.next.addMapper(mapper)
|
||||
}
|
||||
|
||||
return node.next.insertValue(value)
|
||||
}
|
||||
|
||||
func (node *trieNode) Traverse(resolver ValueResolver, tr *trieTraversalResult) error {
|
||||
|
||||
if node == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
tr.Add(node)
|
||||
|
||||
return node.next.traverse(resolver, tr)
|
||||
}
|
||||
|
||||
func (node *trieNode) addMapper(mapper *valueMapper) {
|
||||
for i := range node.mappers {
|
||||
if node.mappers[i].Key == mapper.Key {
|
||||
return
|
||||
}
|
||||
}
|
||||
node.mappers = append(node.mappers, mapper)
|
||||
}
|
||||
|
||||
func (node *trieNode) insertValue(value Value) *trieNode {
|
||||
|
||||
switch value := value.(type) {
|
||||
case nil:
|
||||
if node.undefined == nil {
|
||||
node.undefined = newTrieNodeImpl()
|
||||
}
|
||||
return node.undefined
|
||||
case Var:
|
||||
if node.any == nil {
|
||||
node.any = newTrieNodeImpl()
|
||||
}
|
||||
return node.any
|
||||
case Null, Boolean, Number, String:
|
||||
child, ok := node.scalars.Get(value)
|
||||
if !ok {
|
||||
child = newTrieNodeImpl()
|
||||
node.scalars.Put(value, child)
|
||||
}
|
||||
return child.(*trieNode)
|
||||
case *Array:
|
||||
if node.array == nil {
|
||||
node.array = newTrieNodeImpl()
|
||||
}
|
||||
return node.array.insertArray(value)
|
||||
}
|
||||
|
||||
panic("illegal value")
|
||||
}
|
||||
|
||||
func (node *trieNode) insertArray(arr *Array) *trieNode {
|
||||
|
||||
if arr.Len() == 0 {
|
||||
return node
|
||||
}
|
||||
|
||||
switch head := arr.Elem(0).Value.(type) {
|
||||
case Var:
|
||||
if node.any == nil {
|
||||
node.any = newTrieNodeImpl()
|
||||
}
|
||||
return node.any.insertArray(arr.Slice(1, -1))
|
||||
case Null, Boolean, Number, String:
|
||||
child, ok := node.scalars.Get(head)
|
||||
if !ok {
|
||||
child = newTrieNodeImpl()
|
||||
node.scalars.Put(head, child)
|
||||
}
|
||||
return child.(*trieNode).insertArray(arr.Slice(1, -1))
|
||||
}
|
||||
|
||||
panic("illegal value")
|
||||
}
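insertValue and insertArray above build one trie level per indexed reference: scalar children live in a hash map, while variables go into a wildcard `any` child that matches everything. A simplified sketch of that shape using plain maps (illustrative only; real trie nodes also handle undefined values, mappers, and nested arrays):

```go
package main

import "fmt"

type node struct {
	scalars map[string]*node // one child per scalar value, like trieNode.scalars
	any     *node            // wildcard child, like trieNode.any
	rules   []int            // rule ids stored at this depth
}

func newNode() *node { return &node{scalars: map[string]*node{}} }

// insert walks one trie level per value; "" stands in for a variable (wildcard).
func (n *node) insert(values []string, rule int) {
	if len(values) == 0 {
		n.rules = append(n.rules, rule)
		return
	}
	if values[0] == "" {
		if n.any == nil {
			n.any = newNode()
		}
		n.any.insert(values[1:], rule)
		return
	}
	child, ok := n.scalars[values[0]]
	if !ok {
		child = newNode()
		n.scalars[values[0]] = child
	}
	child.insert(values[1:], rule)
}

// lookup collects rules along both the wildcard and the scalar-match branches.
func (n *node) lookup(values []string) []int {
	if n == nil {
		return nil
	}
	if len(values) == 0 {
		return n.rules
	}
	var out []int
	out = append(out, n.any.lookup(values[1:])...)
	if child, ok := n.scalars[values[0]]; ok {
		out = append(out, child.lookup(values[1:])...)
	}
	return out
}

func main() {
	root := newNode()
	root.insert([]string{"GET", "/users"}, 0) // rule 0: method == "GET", path == "/users"
	root.insert([]string{"", "/users"}, 1)    // rule 1: any method, path == "/users"
	fmt.Println(root.lookup([]string{"GET", "/users"}))  // both rules match
	fmt.Println(root.lookup([]string{"POST", "/admin"})) // no rules match
}
```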
|
||||
|
||||
func (node *trieNode) traverse(resolver ValueResolver, tr *trieTraversalResult) error {
|
||||
|
||||
if node == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
v, err := resolver.Resolve(node.ref)
|
||||
if err != nil {
|
||||
if IsUnknownValueErr(err) {
|
||||
return node.traverseUnknown(resolver, tr)
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if node.undefined != nil {
|
||||
err = node.undefined.Traverse(resolver, tr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if v == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if node.any != nil {
|
||||
err = node.any.Traverse(resolver, tr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if err := node.traverseValue(resolver, tr, v); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for i := range node.mappers {
|
||||
if err := node.traverseValue(resolver, tr, node.mappers[i].MapValue(v)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (node *trieNode) traverseValue(resolver ValueResolver, tr *trieTraversalResult, value Value) error {
|
||||
|
||||
switch value := value.(type) {
|
||||
case *Array:
|
||||
if node.array == nil {
|
||||
return nil
|
||||
}
|
||||
return node.array.traverseArray(resolver, tr, value)
|
||||
|
||||
case Null, Boolean, Number, String:
|
||||
child, ok := node.scalars.Get(value)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return child.(*trieNode).Traverse(resolver, tr)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (node *trieNode) traverseArray(resolver ValueResolver, tr *trieTraversalResult, arr *Array) error {
|
||||
|
||||
if arr.Len() == 0 {
|
||||
return node.Traverse(resolver, tr)
|
||||
}
|
||||
|
||||
if node.any != nil {
|
||||
err := node.any.traverseArray(resolver, tr, arr.Slice(1, -1))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
head := arr.Elem(0).Value
|
||||
|
||||
if !IsScalar(head) {
|
||||
return nil
|
||||
}
|
||||
|
||||
child, ok := node.scalars.Get(head)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return child.(*trieNode).traverseArray(resolver, tr, arr.Slice(1, -1))
|
||||
}
|
||||
|
||||
func (node *trieNode) traverseUnknown(resolver ValueResolver, tr *trieTraversalResult) error {
|
||||
|
||||
if node == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if err := node.Traverse(resolver, tr); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := node.undefined.traverseUnknown(resolver, tr); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := node.any.traverseUnknown(resolver, tr); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := node.array.traverseUnknown(resolver, tr); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var iterErr error
|
||||
node.scalars.Iter(func(_, v util.T) bool {
|
||||
child := v.(*trieNode)
|
||||
if iterErr = child.traverseUnknown(resolver, tr); iterErr != nil {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
})
|
||||
|
||||
return iterErr
|
||||
}
|
||||
|
||||
// If term `a` is one of the function's operands, we store a Ref: `args[0]`
|
||||
// for the argument number. So for `f(x, y) { x = 10; y = 12 }`, we'll
|
||||
// bind `args[0]` and `args[1]` to this rule when called for (x=10) and
|
||||
// (y=12) respectively.
|
||||
func eqOperandsToRefAndValue(isVirtual func(Ref) bool, args []*Term, a, b *Term) (*refindex, bool) {
|
||||
switch v := a.Value.(type) {
|
||||
case Var:
|
||||
for i, arg := range args {
|
||||
if arg.Value.Compare(v) == 0 {
|
||||
if bval, ok := indexValue(b); ok {
|
||||
return &refindex{Ref: Ref{FunctionArgRootDocument, IntNumberTerm(i)}, Value: bval}, true
|
||||
}
|
||||
}
|
||||
}
|
||||
case Ref:
|
||||
if !RootDocumentNames.Contains(v[0]) {
|
||||
return nil, false
|
||||
}
|
||||
if isVirtual(v) {
|
||||
return nil, false
|
||||
}
|
||||
if v.IsNested() || !v.IsGround() {
|
||||
return nil, false
|
||||
}
|
||||
if bval, ok := indexValue(b); ok {
|
||||
return &refindex{Ref: v, Value: bval}, true
|
||||
}
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func indexValue(b *Term) (Value, bool) {
|
||||
switch b := b.Value.(type) {
|
||||
case Null, Boolean, Number, String, Var:
|
||||
return b, true
|
||||
case *Array:
|
||||
stop := false
|
||||
first := true
|
||||
vis := NewGenericVisitor(func(x interface{}) bool {
|
||||
if first {
|
||||
first = false
|
||||
return false
|
||||
}
|
||||
switch x.(type) {
|
||||
// No nested structures or values that require evaluation (other than var).
|
||||
case *Array, Object, Set, *ArrayComprehension, *ObjectComprehension, *SetComprehension, Ref:
|
||||
stop = true
|
||||
}
|
||||
return stop
|
||||
})
|
||||
vis.Walk(b)
|
||||
if !stop {
|
||||
return b, true
|
||||
}
|
||||
}
|
||||
|
||||
return nil, false
|
||||
}
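indexValue above admits scalars and variables directly, and arrays only when no element requires evaluation (no nested collections, comprehensions, or refs). A toy version of that acceptance check over plain interface{} values rather than AST terms (the `indexable` name is an illustration, not OPA API):

```go
package main

import "fmt"

// indexable reports whether a (toy) value could serve as an index key:
// scalars always can; arrays can only when every element is itself a scalar,
// mirroring the "no nested structures" visitor check above.
func indexable(v interface{}) bool {
	switch x := v.(type) {
	case nil, bool, float64, string:
		return true
	case []interface{}:
		for _, e := range x {
			switch e.(type) {
			case nil, bool, float64, string:
			default:
				return false // nested array/object: needs evaluation, not indexable
			}
		}
		return true
	}
	return false
}

func main() {
	fmt.Println(indexable("GET"))                              // scalar: true
	fmt.Println(indexable([]interface{}{"a", 1.0}))            // flat array: true
	fmt.Println(indexable([]interface{}{[]interface{}{"n"}})) // nested: false
}
```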
|
||||
|
||||
func globDelimiterToString(delim *Term) (string, bool) {
|
||||
|
||||
arr, ok := delim.Value.(*Array)
|
||||
if !ok {
|
||||
return "", false
|
||||
}
|
||||
|
||||
var result string
|
||||
|
||||
if arr.Len() == 0 {
|
||||
result = "."
|
||||
} else {
|
||||
for i := 0; i < arr.Len(); i++ {
|
||||
term := arr.Elem(i)
|
||||
s, ok := term.Value.(String)
|
||||
if !ok {
|
||||
return "", false
|
||||
}
|
||||
result += string(s)
|
||||
}
|
||||
}
|
||||
|
||||
return result, true
|
||||
}
|
||||
|
||||
func globPatternToArray(pattern *Term, delim string) *Term {
|
||||
|
||||
s, ok := pattern.Value.(String)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
|
||||
parts := splitStringEscaped(string(s), delim)
|
||||
arr := make([]*Term, len(parts))
|
||||
|
||||
for i := range parts {
|
||||
if parts[i] == "*" {
|
||||
arr[i] = VarTerm("$globwildcard")
|
||||
} else {
|
||||
var escaped bool
|
||||
for _, c := range parts[i] {
|
||||
if c == '\\' {
|
||||
escaped = !escaped
|
||||
continue
|
||||
}
|
||||
if !escaped {
|
||||
switch c {
|
||||
case '[', '?', '{', '*':
|
||||
// TODO(tsandall): super glob and character pattern
|
||||
// matching not supported yet.
|
||||
return nil
|
||||
}
|
||||
}
|
||||
escaped = false
|
||||
}
|
||||
arr[i] = StringTerm(parts[i])
|
||||
}
|
||||
}
|
||||
|
||||
return NewTerm(NewArray(arr...))
|
||||
}
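globPatternToArray above decomposes a glob pattern into per-segment terms: "*" segments become wildcard variables, while unsupported glob syntax ('[', '?', '{', embedded '*') makes the function bail out so the rule is simply not indexed. A rough standalone sketch of that decision (the `globToParts` helper is hypothetical, and it uses a plain split rather than the escape-aware one in the vendored code):

```go
package main

import (
	"fmt"
	"strings"
)

// globToParts splits pattern on delim, maps "*" segments to a wildcard
// marker, and reports false for glob syntax the indexer does not support.
func globToParts(pattern, delim string) ([]string, bool) {
	parts := strings.Split(pattern, delim) // sketch: ignores backslash escaping
	out := make([]string, len(parts))
	for i, p := range parts {
		if p == "*" {
			out[i] = "<any>"
			continue
		}
		if strings.ContainsAny(p, "[?{*") {
			return nil, false // super glob / character classes: not indexable
		}
		out[i] = p
	}
	return out, true
}

func main() {
	parts, ok := globToParts("a.*.c", ".")
	fmt.Println(parts, ok) // → [a <any> c] true
	_, ok = globToParts("a.[bc].d", ".")
	fmt.Println(ok) // → false
}
```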
|
||||
|
||||
// splits s on characters in delim except if delim characters have been escaped
// with reverse solidus.
func splitStringEscaped(s string, delim string) []string {

	var last, curr int
	var escaped bool
	var result []string

	for ; curr < len(s); curr++ {
		if s[curr] == '\\' || escaped {
			escaped = !escaped
			continue
		}
		if strings.ContainsRune(delim, rune(s[curr])) {
			result = append(result, s[last:curr])
			last = curr + 1
		}
	}

	result = append(result, s[last:])

	return result
}
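The splitter above keeps backslash-escaped delimiter characters inside a segment, which is what lets glob patterns like `a.b\.c` index on two parts instead of three. A standalone copy for experimentation (same logic, renamed to avoid clashing with the vendored function):

```go
package main

import (
	"fmt"
	"strings"
)

// splitEscaped splits s on any rune in delim, unless that rune is preceded
// by an unescaped reverse solidus; escape characters are left in place.
func splitEscaped(s, delim string) []string {
	var last, curr int
	var escaped bool
	var result []string
	for ; curr < len(s); curr++ {
		if s[curr] == '\\' || escaped {
			escaped = !escaped
			continue
		}
		if strings.ContainsRune(delim, rune(s[curr])) {
			result = append(result, s[last:curr])
			last = curr + 1
		}
	}
	return append(result, s[last:])
}

func main() {
	fmt.Println(splitEscaped(`a.b\.c`, ".")) // the escaped dot stays inside its segment
	fmt.Println(splitEscaped("x:y.z", ".:")) // multiple delimiter characters
}
```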
|
||||
|
||||
func stringSliceToArray(s []string) *Array {
	arr := make([]*Term, len(s))
	for i, v := range s {
		arr[i] = StringTerm(v)
	}
	return NewArray(arr...)
}
|
||||
|
||||
24
vendor/github.com/open-policy-agent/opa/ast/interning.go
generated
vendored
Normal file
@@ -0,0 +1,24 @@
// Copyright 2024 The OPA Authors. All rights reserved.
// Use of this source code is governed by an Apache2
// license that can be found in the LICENSE file.

package ast

import (
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

func InternedBooleanTerm(b bool) *Term {
	return v1.InternedBooleanTerm(b)
}

// InternedIntNumberTerm returns a term with the given integer value. The term is
// cached between -1 to 512, and for values outside of that range, this function
// is equivalent to ast.IntNumberTerm.
func InternedIntNumberTerm(i int) *Term {
	return v1.InternedIntNumberTerm(i)
}

func HasInternedIntNumberTerm(i int) bool {
	return v1.HasInternedIntNumberTerm(i)
}
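InternedIntNumberTerm caches terms for the documented -1 to 512 window so hot evaluation paths avoid re-allocating small integers. A minimal sketch of that interning pattern (the `intTerm` type and `internedInt` helper are hypothetical stand-ins for the real AST term type):

```go
package main

import "fmt"

type intTerm struct{ v int }

// interned pre-allocates one term per integer in -1..512, mirroring the
// documented cache window; index i holds the term for value i-1.
var interned = func() []*intTerm {
	terms := make([]*intTerm, 514)
	for i := range terms {
		terms[i] = &intTerm{v: i - 1}
	}
	return terms
}()

func internedInt(i int) *intTerm {
	if i >= -1 && i <= 512 {
		return interned[i+1] // cached: same pointer every call
	}
	return &intTerm{v: i} // outside the window: allocate, like ast.IntNumberTerm
}

func main() {
	fmt.Println(internedInt(5) == internedInt(5))     // same cached pointer: true
	fmt.Println(internedInt(999) == internedInt(999)) // fresh allocations: false
}
```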
|
||||
8
vendor/github.com/open-policy-agent/opa/ast/json/doc.go
generated
vendored
Normal file
@@ -0,0 +1,8 @@
// Copyright 2024 The OPA Authors. All rights reserved.
// Use of this source code is governed by an Apache2
// license that can be found in the LICENSE file.

// Deprecated: This package is intended for older projects transitioning from OPA v0.x and will remain for the lifetime of OPA v1.x, but its use is not recommended.
// For newer features and behaviours, such as defaulting to the Rego v1 syntax, use the corresponding components in the [github.com/open-policy-agent/opa/v1] package instead.
// See https://www.openpolicyagent.org/docs/latest/v0-compatibility/ for more information.
package json
31
vendor/github.com/open-policy-agent/opa/ast/json/json.go
generated
vendored
@@ -1,36 +1,15 @@
package json

import v1 "github.com/open-policy-agent/opa/v1/ast/json"

// Options defines the options for JSON operations,
// currently only marshaling can be configured
type Options struct {
	MarshalOptions MarshalOptions
}
type Options = v1.Options

// MarshalOptions defines the options for JSON marshaling,
// currently only toggling the marshaling of location information is supported
type MarshalOptions struct {
	// IncludeLocation toggles the marshaling of location information
	IncludeLocation NodeToggle
	// IncludeLocationText additionally/optionally includes the text of the location
	IncludeLocationText bool
	// ExcludeLocationFile additionally/optionally excludes the file of the location
	// Note that this is inverted (i.e. not "include" as the default needs to remain false)
	ExcludeLocationFile bool
}
type MarshalOptions = v1.MarshalOptions

// NodeToggle is a generic struct to allow the toggling of
// settings for different ast node types
type NodeToggle struct {
	Term           bool
	Package        bool
	Comment        bool
	Import         bool
	Rule           bool
	Head           bool
	Expr           bool
	SomeDecl       bool
	Every          bool
	With           bool
	Annotations    bool
	AnnotationsRef bool
}
type NodeToggle = v1.NodeToggle
121
vendor/github.com/open-policy-agent/opa/ast/map.go
generated
vendored
@@ -5,129 +5,14 @@
|
||||
package ast
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
|
||||
"github.com/open-policy-agent/opa/util"
|
||||
v1 "github.com/open-policy-agent/opa/v1/ast"
|
||||
)
|
||||
|
||||
// ValueMap represents a key/value map between AST term values. Any type of term
|
||||
// can be used as a key in the map.
|
||||
type ValueMap struct {
|
||||
hashMap *util.HashMap
|
||||
}
|
||||
type ValueMap = v1.ValueMap
|
||||
|
||||
// NewValueMap returns a new ValueMap.
|
||||
func NewValueMap() *ValueMap {
|
||||
vs := &ValueMap{
|
||||
hashMap: util.NewHashMap(valueEq, valueHash),
|
||||
}
|
||||
return vs
|
||||
}
|
||||
|
||||
// MarshalJSON provides a custom marshaller for the ValueMap which
|
||||
// will include the key, value, and value type.
|
||||
func (vs *ValueMap) MarshalJSON() ([]byte, error) {
|
||||
var tmp []map[string]interface{}
|
||||
vs.Iter(func(k Value, v Value) bool {
|
||||
tmp = append(tmp, map[string]interface{}{
|
||||
"name": k.String(),
|
||||
"type": TypeName(v),
|
||||
"value": v,
|
||||
})
|
||||
return false
|
||||
})
|
||||
return json.Marshal(tmp)
|
||||
}
|
||||
|
||||
// Copy returns a shallow copy of the ValueMap.
|
||||
func (vs *ValueMap) Copy() *ValueMap {
|
||||
if vs == nil {
|
||||
return nil
|
||||
}
|
||||
cpy := NewValueMap()
|
||||
cpy.hashMap = vs.hashMap.Copy()
|
||||
return cpy
|
||||
}
|
||||
|
||||
// Equal returns true if this ValueMap equals the other.
|
||||
func (vs *ValueMap) Equal(other *ValueMap) bool {
|
||||
if vs == nil {
|
||||
return other == nil || other.Len() == 0
|
||||
}
|
||||
if other == nil {
|
||||
return vs == nil || vs.Len() == 0
|
||||
}
|
||||
return vs.hashMap.Equal(other.hashMap)
|
||||
}
|
||||
|
||||
// Len returns the number of elements in the map.
|
||||
func (vs *ValueMap) Len() int {
|
||||
if vs == nil {
|
||||
return 0
|
||||
}
|
||||
return vs.hashMap.Len()
|
||||
}
|
||||
|
||||
// Get returns the value in the map for k.
|
||||
func (vs *ValueMap) Get(k Value) Value {
|
||||
if vs != nil {
|
||||
if v, ok := vs.hashMap.Get(k); ok {
|
||||
return v.(Value)
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Hash returns a hash code for this ValueMap.
|
||||
func (vs *ValueMap) Hash() int {
|
||||
if vs == nil {
|
||||
return 0
|
||||
}
|
||||
return vs.hashMap.Hash()
|
||||
}
|
||||
|
||||
// Iter calls the iter function for each key/value pair in the map. If the iter
|
||||
// function returns true, iteration stops.
|
||||
func (vs *ValueMap) Iter(iter func(Value, Value) bool) bool {
|
||||
if vs == nil {
|
||||
return false
|
||||
}
|
||||
return vs.hashMap.Iter(func(kt, vt util.T) bool {
|
||||
k := kt.(Value)
|
||||
v := vt.(Value)
|
||||
return iter(k, v)
|
||||
})
|
||||
}
|
||||
|
||||
// Put inserts a key k into the map with value v.
|
||||
func (vs *ValueMap) Put(k, v Value) {
|
||||
if vs == nil {
|
||||
panic("put on nil value map")
|
||||
}
|
||||
vs.hashMap.Put(k, v)
|
||||
}
|
||||
|
||||
// Delete removes a key k from the map.
|
||||
func (vs *ValueMap) Delete(k Value) {
|
||||
if vs == nil {
|
||||
return
|
||||
}
|
||||
vs.hashMap.Delete(k)
|
||||
}
|
||||
|
||||
func (vs *ValueMap) String() string {
|
||||
if vs == nil {
|
||||
return "{}"
|
||||
}
|
||||
return vs.hashMap.String()
|
||||
}
|
||||
|
||||
func valueHash(v util.T) int {
|
||||
return v.(Value).Hash()
|
||||
}
|
||||
|
||||
func valueEq(a, b util.T) bool {
	av := a.(Value)
	bv := b.(Value)
	return av.Compare(bv) == 0
}
|
||||
|
||||
11
vendor/github.com/open-policy-agent/opa/ast/marshal.go
generated
vendored
@@ -1,11 +0,0 @@
package ast

import (
	astJSON "github.com/open-policy-agent/opa/ast/json"
)

// customJSON is an interface that can be implemented by AST nodes that
// allows the parser to set options for JSON operations on that node.
type customJSON interface {
	setJSONOptions(astJSON.Options)
}
2712
vendor/github.com/open-policy-agent/opa/ast/parser.go
generated
vendored
File diff suppressed because it is too large
598
vendor/github.com/open-policy-agent/opa/ast/parser_ext.go
generated
vendored
@@ -1,24 +1,14 @@
// Copyright 2016 The OPA Authors. All rights reserved.
// Copyright 2024 The OPA Authors. All rights reserved.
// Use of this source code is governed by an Apache2
// license that can be found in the LICENSE file.

// This file contains extra functions for parsing Rego.
// Most of the parsing is handled by the code in parser.go,
// however, there are additional utilities that are
// helpful for dealing with Rego source inputs (e.g., REPL
// statements, source files, etc.)

package ast

import (
	"bytes"
	"errors"
	"fmt"
	"strings"
	"unicode"

	"github.com/open-policy-agent/opa/ast/internal/tokens"
	astJSON "github.com/open-policy-agent/opa/ast/json"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// MustParseBody returns a parsed body.
@@ -30,11 +20,7 @@ func MustParseBody(input string) Body {
// MustParseBodyWithOpts returns a parsed body.
// If an error occurs during parsing, panic.
func MustParseBodyWithOpts(input string, opts ParserOptions) Body {
	parsed, err := ParseBodyWithOpts(input, opts)
	if err != nil {
		panic(err)
	}
	return parsed
	return v1.MustParseBodyWithOpts(input, setDefaultRegoVersion(opts))
}

// MustParseExpr returns a parsed expression.
@@ -66,11 +52,7 @@ func MustParseModule(input string) *Module {
// MustParseModuleWithOpts returns a parsed module.
// If an error occurs during parsing, panic.
func MustParseModuleWithOpts(input string, opts ParserOptions) *Module {
	parsed, err := ParseModuleWithOpts("", input, opts)
	if err != nil {
		panic(err)
	}
	return parsed
	return v1.MustParseModuleWithOpts(input, setDefaultRegoVersion(opts))
}

// MustParsePackage returns a Package.
@@ -104,11 +86,7 @@ func MustParseStatement(input string) Statement {
}

func MustParseStatementWithOpts(input string, popts ParserOptions) Statement {
	parsed, err := ParseStatementWithOpts(input, popts)
	if err != nil {
		panic(err)
	}
	return parsed
	return v1.MustParseStatementWithOpts(input, setDefaultRegoVersion(popts))
}

// MustParseRef returns a parsed reference.
@@ -134,11 +112,7 @@ func MustParseRule(input string) *Rule {
// MustParseRuleWithOpts returns a parsed rule.
// If an error occurs during parsing, panic.
func MustParseRuleWithOpts(input string, opts ParserOptions) *Rule {
	parsed, err := ParseRuleWithOpts(input, opts)
	if err != nil {
		panic(err)
	}
	return parsed
	return v1.MustParseRuleWithOpts(input, setDefaultRegoVersion(opts))
}

// MustParseTerm returns a parsed term.
@@ -154,331 +128,59 @@ func MustParseTerm(input string) *Term {
// ParseRuleFromBody returns a rule if the body can be interpreted as a rule
// definition. Otherwise, an error is returned.
func ParseRuleFromBody(module *Module, body Body) (*Rule, error) {

	if len(body) != 1 {
		return nil, fmt.Errorf("multiple expressions cannot be used for rule head")
	}

	return ParseRuleFromExpr(module, body[0])
	return v1.ParseRuleFromBody(module, body)
}

// ParseRuleFromExpr returns a rule if the expression can be interpreted as a
// rule definition.
func ParseRuleFromExpr(module *Module, expr *Expr) (*Rule, error) {

	if len(expr.With) > 0 {
		return nil, fmt.Errorf("expressions using with keyword cannot be used for rule head")
	}

	if expr.Negated {
		return nil, fmt.Errorf("negated expressions cannot be used for rule head")
	}

	if _, ok := expr.Terms.(*SomeDecl); ok {
		return nil, errors.New("'some' declarations cannot be used for rule head")
	}

	if term, ok := expr.Terms.(*Term); ok {
		switch v := term.Value.(type) {
		case Ref:
			if len(v) > 2 { // 2+ dots
				return ParseCompleteDocRuleWithDotsFromTerm(module, term)
			}
			return ParsePartialSetDocRuleFromTerm(module, term)
		default:
			return nil, fmt.Errorf("%v cannot be used for rule name", TypeName(v))
		}
	}

	if _, ok := expr.Terms.([]*Term); !ok {
		// This is a defensive check in case other kinds of expression terms are
		// introduced in the future.
		return nil, errors.New("expression cannot be used for rule head")
	}

	if expr.IsEquality() {
		return parseCompleteRuleFromEq(module, expr)
	} else if expr.IsAssignment() {
		rule, err := parseCompleteRuleFromEq(module, expr)
		if err != nil {
			return nil, err
		}
		rule.Head.Assign = true
		return rule, nil
	}

	if _, ok := BuiltinMap[expr.Operator().String()]; ok {
		return nil, fmt.Errorf("rule name conflicts with built-in function")
	}

	return ParseRuleFromCallExpr(module, expr.Terms.([]*Term))
}

func parseCompleteRuleFromEq(module *Module, expr *Expr) (rule *Rule, err error) {

	// ensure the rule location is set to the expr location
	// the helper functions called below try to set the location based
	// on the terms they've been provided but that is not as accurate.
	defer func() {
		if rule != nil {
			rule.Location = expr.Location
			rule.Head.Location = expr.Location
		}
	}()

	lhs, rhs := expr.Operand(0), expr.Operand(1)
	if lhs == nil || rhs == nil {
		return nil, errors.New("assignment requires two operands")
	}

	rule, err = ParseRuleFromCallEqExpr(module, lhs, rhs)
	if err == nil {
		return rule, nil
	}

	rule, err = ParsePartialObjectDocRuleFromEqExpr(module, lhs, rhs)
	if err == nil {
		return rule, nil
	}

	return ParseCompleteDocRuleFromEqExpr(module, lhs, rhs)
	return v1.ParseRuleFromExpr(module, expr)
}

// ParseCompleteDocRuleFromAssignmentExpr returns a rule if the expression can
// be interpreted as a complete document definition declared with the assignment
// operator.
func ParseCompleteDocRuleFromAssignmentExpr(module *Module, lhs, rhs *Term) (*Rule, error) {

	rule, err := ParseCompleteDocRuleFromEqExpr(module, lhs, rhs)
	if err != nil {
		return nil, err
	}

	rule.Head.Assign = true

	return rule, nil
	return v1.ParseCompleteDocRuleFromAssignmentExpr(module, lhs, rhs)
}

// ParseCompleteDocRuleFromEqExpr returns a rule if the expression can be
// interpreted as a complete document definition.
func ParseCompleteDocRuleFromEqExpr(module *Module, lhs, rhs *Term) (*Rule, error) {
	var head *Head

	if v, ok := lhs.Value.(Var); ok {
		// Modify the code to add the location to the head ref
		// and set the head ref's jsonOptions.
		head = VarHead(v, lhs.Location, &lhs.jsonOptions)
	} else if r, ok := lhs.Value.(Ref); ok { // groundness ?
		if _, ok := r[0].Value.(Var); !ok {
			return nil, fmt.Errorf("invalid rule head: %v", r)
		}
		head = RefHead(r)
		if len(r) > 1 && !r[len(r)-1].IsGround() {
			return nil, fmt.Errorf("ref not ground")
		}
	} else {
		return nil, fmt.Errorf("%v cannot be used for rule name", TypeName(lhs.Value))
	}
	head.Value = rhs
	head.Location = lhs.Location
	head.setJSONOptions(lhs.jsonOptions)

	body := NewBody(NewExpr(BooleanTerm(true).SetLocation(rhs.Location)).SetLocation(rhs.Location))
	setJSONOptions(body, &rhs.jsonOptions)

	return &Rule{
		Location: lhs.Location,
		Head: head,
		Body: body,
		Module: module,
		jsonOptions: lhs.jsonOptions,
		generatedBody: true,
	}, nil
	return v1.ParseCompleteDocRuleFromEqExpr(module, lhs, rhs)
}

func ParseCompleteDocRuleWithDotsFromTerm(module *Module, term *Term) (*Rule, error) {
	ref, ok := term.Value.(Ref)
	if !ok {
		return nil, fmt.Errorf("%v cannot be used for rule name", TypeName(term.Value))
	}

	if _, ok := ref[0].Value.(Var); !ok {
		return nil, fmt.Errorf("invalid rule head: %v", ref)
	}
	head := RefHead(ref, BooleanTerm(true).SetLocation(term.Location))
	head.generatedValue = true
	head.Location = term.Location
	head.jsonOptions = term.jsonOptions

	body := NewBody(NewExpr(BooleanTerm(true).SetLocation(term.Location)).SetLocation(term.Location))
	setJSONOptions(body, &term.jsonOptions)

	return &Rule{
		Location: term.Location,
		Head: head,
		Body: body,
		Module: module,

		jsonOptions: term.jsonOptions,
	}, nil
	return v1.ParseCompleteDocRuleWithDotsFromTerm(module, term)
}

// ParsePartialObjectDocRuleFromEqExpr returns a rule if the expression can be
// interpreted as a partial object document definition.
func ParsePartialObjectDocRuleFromEqExpr(module *Module, lhs, rhs *Term) (*Rule, error) {
	ref, ok := lhs.Value.(Ref)
	if !ok {
		return nil, fmt.Errorf("%v cannot be used as rule name", TypeName(lhs.Value))
	}

	if _, ok := ref[0].Value.(Var); !ok {
		return nil, fmt.Errorf("invalid rule head: %v", ref)
	}

	head := RefHead(ref, rhs)
	if len(ref) == 2 { // backcompat for naked `foo.bar = "baz"` statements
		head.Name = ref[0].Value.(Var)
		head.Key = ref[1]
	}
	head.Location = rhs.Location
	head.jsonOptions = rhs.jsonOptions

	body := NewBody(NewExpr(BooleanTerm(true).SetLocation(rhs.Location)).SetLocation(rhs.Location))
	setJSONOptions(body, &rhs.jsonOptions)

	rule := &Rule{
		Location: rhs.Location,
		Head: head,
		Body: body,
		Module: module,
		jsonOptions: rhs.jsonOptions,
	}

	return rule, nil
	return v1.ParsePartialObjectDocRuleFromEqExpr(module, lhs, rhs)
}

// ParsePartialSetDocRuleFromTerm returns a rule if the term can be interpreted
// as a partial set document definition.
func ParsePartialSetDocRuleFromTerm(module *Module, term *Term) (*Rule, error) {

	ref, ok := term.Value.(Ref)
	if !ok || len(ref) == 1 {
		return nil, fmt.Errorf("%vs cannot be used for rule head", TypeName(term.Value))
	}
	if _, ok := ref[0].Value.(Var); !ok {
		return nil, fmt.Errorf("invalid rule head: %v", ref)
	}

	head := RefHead(ref)
	if len(ref) == 2 {
		v, ok := ref[0].Value.(Var)
		if !ok {
			return nil, fmt.Errorf("%vs cannot be used for rule head", TypeName(term.Value))
		}
		// Modify the code to add the location to the head ref
		// and set the head ref's jsonOptions.
		head = VarHead(v, ref[0].Location, &ref[0].jsonOptions)
		head.Key = ref[1]
	}
	head.Location = term.Location
	head.jsonOptions = term.jsonOptions

	body := NewBody(NewExpr(BooleanTerm(true).SetLocation(term.Location)).SetLocation(term.Location))
	setJSONOptions(body, &term.jsonOptions)

	rule := &Rule{
		Location: term.Location,
		Head: head,
		Body: body,
		Module: module,
		jsonOptions: term.jsonOptions,
	}

	return rule, nil
	return v1.ParsePartialSetDocRuleFromTerm(module, term)
}

// ParseRuleFromCallEqExpr returns a rule if the term can be interpreted as a
// function definition (e.g., f(x) = y => f(x) = y { true }).
func ParseRuleFromCallEqExpr(module *Module, lhs, rhs *Term) (*Rule, error) {

	call, ok := lhs.Value.(Call)
	if !ok {
		return nil, fmt.Errorf("must be call")
	}

	ref, ok := call[0].Value.(Ref)
	if !ok {
		return nil, fmt.Errorf("%vs cannot be used in function signature", TypeName(call[0].Value))
	}
	if _, ok := ref[0].Value.(Var); !ok {
		return nil, fmt.Errorf("invalid rule head: %v", ref)
	}

	head := RefHead(ref, rhs)
	head.Location = lhs.Location
	head.Args = Args(call[1:])
	head.jsonOptions = lhs.jsonOptions

	body := NewBody(NewExpr(BooleanTerm(true).SetLocation(rhs.Location)).SetLocation(rhs.Location))
	setJSONOptions(body, &rhs.jsonOptions)

	rule := &Rule{
		Location: lhs.Location,
		Head: head,
		Body: body,
		Module: module,
		jsonOptions: lhs.jsonOptions,
	}

	return rule, nil
	return v1.ParseRuleFromCallEqExpr(module, lhs, rhs)
}

// ParseRuleFromCallExpr returns a rule if the terms can be interpreted as a
// function returning true or some value (e.g., f(x) => f(x) = true { true }).
func ParseRuleFromCallExpr(module *Module, terms []*Term) (*Rule, error) {

	if len(terms) <= 1 {
		return nil, fmt.Errorf("rule argument list must take at least one argument")
	}

	loc := terms[0].Location
	ref := terms[0].Value.(Ref)
	if _, ok := ref[0].Value.(Var); !ok {
		return nil, fmt.Errorf("invalid rule head: %v", ref)
	}
	head := RefHead(ref, BooleanTerm(true).SetLocation(loc))
	head.Location = loc
	head.Args = terms[1:]
	head.jsonOptions = terms[0].jsonOptions

	body := NewBody(NewExpr(BooleanTerm(true).SetLocation(loc)).SetLocation(loc))
	setJSONOptions(body, &terms[0].jsonOptions)

	rule := &Rule{
		Location: loc,
		Head: head,
		Module: module,
		Body: body,
		jsonOptions: terms[0].jsonOptions,
	}
	return rule, nil
	return v1.ParseRuleFromCallExpr(module, terms)
}

// ParseImports returns a slice of Import objects.
func ParseImports(input string) ([]*Import, error) {
	stmts, _, err := ParseStatements("", input)
	if err != nil {
		return nil, err
	}
	result := []*Import{}
	for _, stmt := range stmts {
		if imp, ok := stmt.(*Import); ok {
			result = append(result, imp)
		} else {
			return nil, fmt.Errorf("expected import but got %T", stmt)
		}
	}
	return result, nil
	return v1.ParseImports(input)
}

// ParseModule returns a parsed Module object.
@@ -492,11 +194,7 @@ func ParseModule(filename, input string) (*Module, error) {
// For details on Module objects and their fields, see policy.go.
// Empty input will return nil, nil.
func ParseModuleWithOpts(filename, input string, popts ParserOptions) (*Module, error) {
	stmts, comments, err := ParseStatementsWithOpts(filename, input, popts)
	if err != nil {
		return nil, err
	}
	return parseModule(filename, stmts, comments, popts.RegoVersion)
	return v1.ParseModuleWithOpts(filename, input, setDefaultRegoVersion(popts))
}

// ParseBody returns exactly one body.
@@ -508,28 +206,7 @@ func ParseBody(input string) (Body, error) {
// ParseBodyWithOpts returns exactly one body. It does _not_ set SkipRules: true on its own,
// but respects whatever ParserOptions it's been given.
func ParseBodyWithOpts(input string, popts ParserOptions) (Body, error) {

	stmts, _, err := ParseStatementsWithOpts("", input, popts)
	if err != nil {
		return nil, err
	}

	result := Body{}

	for _, stmt := range stmts {
		switch stmt := stmt.(type) {
		case Body:
			for i := range stmt {
				result.Append(stmt[i])
			}
		case *Comment:
			// skip
		default:
			return nil, fmt.Errorf("expected body but got %T", stmt)
		}
	}

	return result, nil
	return v1.ParseBodyWithOpts(input, setDefaultRegoVersion(popts))
}

// ParseExpr returns exactly one expression.
@@ -548,15 +225,7 @@ func ParseExpr(input string) (*Expr, error) {
// ParsePackage returns exactly one Package.
// If multiple statements are parsed, an error is returned.
func ParsePackage(input string) (*Package, error) {
	stmt, err := ParseStatement(input)
	if err != nil {
		return nil, err
	}
	pkg, ok := stmt.(*Package)
	if !ok {
		return nil, fmt.Errorf("expected package but got %T", stmt)
	}
	return pkg, nil
	return v1.ParsePackage(input)
}

// ParseTerm returns exactly one term.
@@ -592,18 +261,7 @@ func ParseRef(input string) (Ref, error) {
// ParseRuleWithOpts returns exactly one rule.
// If multiple rules are parsed, an error is returned.
func ParseRuleWithOpts(input string, opts ParserOptions) (*Rule, error) {
	stmts, _, err := ParseStatementsWithOpts("", input, opts)
	if err != nil {
		return nil, err
	}
	if len(stmts) != 1 {
		return nil, fmt.Errorf("expected exactly one statement (rule), got %v = %T, %T", stmts, stmts[0], stmts[1])
	}
	rule, ok := stmts[0].(*Rule)
	if !ok {
		return nil, fmt.Errorf("expected rule but got %T", stmts[0])
	}
	return rule, nil
	return v1.ParseRuleWithOpts(input, setDefaultRegoVersion(opts))
}

// ParseRule returns exactly one rule.
@@ -622,20 +280,13 @@ func ParseStatement(input string) (Statement, error) {
		return nil, err
	}
	if len(stmts) != 1 {
		return nil, fmt.Errorf("expected exactly one statement")
		return nil, errors.New("expected exactly one statement")
	}
	return stmts[0], nil
}

func ParseStatementWithOpts(input string, popts ParserOptions) (Statement, error) {
	stmts, _, err := ParseStatementsWithOpts("", input, popts)
	if err != nil {
		return nil, err
	}
	if len(stmts) != 1 {
		return nil, fmt.Errorf("expected exactly one statement")
	}
	return stmts[0], nil
	return v1.ParseStatementWithOpts(input, setDefaultRegoVersion(popts))
}

// ParseStatements is deprecated. Use ParseStatementWithOpts instead.
@@ -646,204 +297,15 @@ func ParseStatements(filename, input string) ([]Statement, []*Comment, error) {
// ParseStatementsWithOpts returns a slice of parsed statements. This is the
// default return value from the parser.
func ParseStatementsWithOpts(filename, input string, popts ParserOptions) ([]Statement, []*Comment, error) {

	parser := NewParser().
		WithFilename(filename).
		WithReader(bytes.NewBufferString(input)).
		WithProcessAnnotation(popts.ProcessAnnotation).
		WithFutureKeywords(popts.FutureKeywords...).
		WithAllFutureKeywords(popts.AllFutureKeywords).
		WithCapabilities(popts.Capabilities).
		WithSkipRules(popts.SkipRules).
		WithJSONOptions(popts.JSONOptions).
		WithRegoVersion(popts.RegoVersion).
		withUnreleasedKeywords(popts.unreleasedKeywords)

	stmts, comments, errs := parser.Parse()

	if len(errs) > 0 {
		return nil, nil, errs
	}

	return stmts, comments, nil
}

func parseModule(filename string, stmts []Statement, comments []*Comment, regoCompatibilityMode RegoVersion) (*Module, error) {

	if len(stmts) == 0 {
		return nil, NewError(ParseErr, &Location{File: filename}, "empty module")
	}

	var errs Errors

	pkg, ok := stmts[0].(*Package)
	if !ok {
		loc := stmts[0].Loc()
		errs = append(errs, NewError(ParseErr, loc, "package expected"))
	}

	mod := &Module{
		Package: pkg,
		stmts: stmts,
	}

	// The comments slice only holds comments that were not their own statements.
	mod.Comments = append(mod.Comments, comments...)
	mod.regoVersion = regoCompatibilityMode

	for i, stmt := range stmts[1:] {
		switch stmt := stmt.(type) {
		case *Import:
			mod.Imports = append(mod.Imports, stmt)
			if mod.regoVersion == RegoV0 && Compare(stmt.Path.Value, RegoV1CompatibleRef) == 0 {
				mod.regoVersion = RegoV0CompatV1
			}
		case *Rule:
			setRuleModule(stmt, mod)
			mod.Rules = append(mod.Rules, stmt)
		case Body:
			rule, err := ParseRuleFromBody(mod, stmt)
			if err != nil {
				errs = append(errs, NewError(ParseErr, stmt[0].Location, err.Error()))
				continue
			}
			rule.generatedBody = true
			mod.Rules = append(mod.Rules, rule)

			// NOTE(tsandall): the statement should now be interpreted as a
			// rule so update the statement list. This is important for the
			// logic below that associates annotations with statements.
			stmts[i+1] = rule
		case *Package:
			errs = append(errs, NewError(ParseErr, stmt.Loc(), "unexpected package"))
		case *Annotations:
			mod.Annotations = append(mod.Annotations, stmt)
		case *Comment:
			// Ignore comments, they're handled above.
		default:
			panic("illegal value") // Indicates grammar is out-of-sync with code.
		}
	}

	if mod.regoVersion == RegoV0CompatV1 || mod.regoVersion == RegoV1 {
		for _, rule := range mod.Rules {
			for r := rule; r != nil; r = r.Else {
				errs = append(errs, CheckRegoV1(r)...)
			}
		}
	}

	if len(errs) > 0 {
		return nil, errs
	}

	errs = append(errs, attachAnnotationsNodes(mod)...)

	if len(errs) > 0 {
		return nil, errs
	}

	attachRuleAnnotations(mod)

	return mod, nil
}

func ruleDeclarationHasKeyword(rule *Rule, keyword tokens.Token) bool {
	for _, kw := range rule.Head.keywords {
		if kw == keyword {
			return true
		}
	}
	return false
}

func newScopeAttachmentErr(a *Annotations, want string) *Error {
	var have string
	if a.node != nil {
		have = fmt.Sprintf(" (have %v)", TypeName(a.node))
	}
	return NewError(ParseErr, a.Loc(), "annotation scope '%v' must be applied to %v%v", a.Scope, want, have)
}

func setRuleModule(rule *Rule, module *Module) {
	rule.Module = module
	if rule.Else != nil {
		setRuleModule(rule.Else, module)
	}
}

func setJSONOptions(x interface{}, jsonOptions *astJSON.Options) {
	vis := NewGenericVisitor(func(x interface{}) bool {
		if x, ok := x.(customJSON); ok {
			x.setJSONOptions(*jsonOptions)
		}
		return false
	})
	vis.Walk(x)
	return v1.ParseStatementsWithOpts(filename, input, setDefaultRegoVersion(popts))
}

// ParserErrorDetail holds additional details for parser errors.
type ParserErrorDetail struct {
	Line string `json:"line"`
	Idx int `json:"idx"`
}

func newParserErrorDetail(bs []byte, offset int) *ParserErrorDetail {

	// Find first non-space character at or before offset position.
	if offset >= len(bs) {
		offset = len(bs) - 1
	} else if offset < 0 {
		offset = 0
	}

	for offset > 0 && unicode.IsSpace(rune(bs[offset])) {
		offset--
	}

	// Find beginning of line containing offset.
	begin := offset

	for begin > 0 && !isNewLineChar(bs[begin]) {
		begin--
	}

	if isNewLineChar(bs[begin]) {
		begin++
	}

	// Find end of line containing offset.
	end := offset

	for end < len(bs) && !isNewLineChar(bs[end]) {
		end++
	}

	if begin > end {
		begin = end
	}

	// Extract line and compute index of offset byte in line.
	line := bs[begin:end]
	index := offset - begin

	return &ParserErrorDetail{
		Line: string(line),
		Idx: index,
	}
}

// Lines returns the pretty formatted line output for the error details.
func (d ParserErrorDetail) Lines() []string {
	line := strings.TrimLeft(d.Line, "\t") // remove leading tabs
	tabCount := len(d.Line) - len(line)
	indent := d.Idx - tabCount
	if indent < 0 {
		indent = 0
	}
	return []string{line, strings.Repeat(" ", indent) + "^"}
}

func isNewLineChar(b byte) bool {
	return b == '\r' || b == '\n'
type ParserErrorDetail = v1.ParserErrorDetail

func setDefaultRegoVersion(opts ParserOptions) ParserOptions {
	if opts.RegoVersion == RegoUndefined {
		opts.RegoVersion = DefaultRegoVersion
	}
	return opts
}

1972
vendor/github.com/open-policy-agent/opa/ast/policy.go
generated
vendored
File diff suppressed because it is too large
70
vendor/github.com/open-policy-agent/opa/ast/pretty.go
generated
vendored
@@ -5,78 +5,14 @@
package ast

import (
	"fmt"
	"io"
	"strings"

	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// Pretty writes a pretty representation of the AST rooted at x to w.
//
// This is function is intended for debug purposes when inspecting ASTs.
func Pretty(w io.Writer, x interface{}) {
	pp := &prettyPrinter{
		depth: -1,
		w: w,
	}
	NewBeforeAfterVisitor(pp.Before, pp.After).Walk(x)
}

type prettyPrinter struct {
	depth int
	w io.Writer
}

func (pp *prettyPrinter) Before(x interface{}) bool {
	switch x.(type) {
	case *Term:
	default:
		pp.depth++
	}

	switch x := x.(type) {
	case *Term:
		return false
	case Args:
		if len(x) == 0 {
			return false
		}
		pp.writeType(x)
	case *Expr:
		extras := []string{}
		if x.Negated {
			extras = append(extras, "negated")
		}
		extras = append(extras, fmt.Sprintf("index=%d", x.Index))
		pp.writeIndent("%v %v", TypeName(x), strings.Join(extras, " "))
	case Null, Boolean, Number, String, Var:
		pp.writeValue(x)
	default:
		pp.writeType(x)
	}
	return false
}

func (pp *prettyPrinter) After(x interface{}) {
	switch x.(type) {
	case *Term:
	default:
		pp.depth--
	}
}

func (pp *prettyPrinter) writeValue(x interface{}) {
	pp.writeIndent(fmt.Sprint(x))
}

func (pp *prettyPrinter) writeType(x interface{}) {
	pp.writeIndent(TypeName(x))
}

func (pp *prettyPrinter) writeIndent(f string, a ...interface{}) {
	pad := strings.Repeat(" ", pp.depth)
	pp.write(pad+f, a...)
}

func (pp *prettyPrinter) write(f string, a ...interface{}) {
	fmt.Fprintf(pp.w, f+"\n", a...)
	v1.Pretty(w, x)
}

52
vendor/github.com/open-policy-agent/opa/ast/schema.go
generated
vendored
@@ -5,59 +5,13 @@
package ast

import (
	"fmt"

	"github.com/open-policy-agent/opa/types"
	"github.com/open-policy-agent/opa/util"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// SchemaSet holds a map from a path to a schema.
type SchemaSet struct {
	m *util.HashMap
}
type SchemaSet = v1.SchemaSet

// NewSchemaSet returns an empty SchemaSet.
func NewSchemaSet() *SchemaSet {

	eqFunc := func(a, b util.T) bool {
		return a.(Ref).Equal(b.(Ref))
	}

	hashFunc := func(x util.T) int { return x.(Ref).Hash() }

	return &SchemaSet{
		m: util.NewHashMap(eqFunc, hashFunc),
	}
}

// Put inserts a raw schema into the set.
func (ss *SchemaSet) Put(path Ref, raw interface{}) {
	ss.m.Put(path, raw)
}

// Get returns the raw schema identified by the path.
func (ss *SchemaSet) Get(path Ref) interface{} {
	if ss == nil {
		return nil
	}
	x, ok := ss.m.Get(path)
	if !ok {
		return nil
	}
	return x
}

func loadSchema(raw interface{}, allowNet []string) (types.Type, error) {

	jsonSchema, err := compileSchema(raw, allowNet)
	if err != nil {
		return nil, err
	}

	tpe, err := newSchemaParser().parseSchema(jsonSchema.RootSchema)
	if err != nil {
		return nil, fmt.Errorf("type checking: %w", err)
	}

	return tpe, nil
	return v1.NewSchemaSet()
}

8
vendor/github.com/open-policy-agent/opa/ast/strings.go
generated
vendored
@@ -5,14 +5,10 @@
package ast

import (
	"reflect"
	"strings"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// TypeName returns a human readable name for the AST element type.
func TypeName(x interface{}) string {
	if _, ok := x.(*lazyObj); ok {
		return "object"
	}
	return strings.ToLower(reflect.Indirect(reflect.ValueOf(x)).Type().Name())
	return v1.TypeName(x)
}

3098
vendor/github.com/open-policy-agent/opa/ast/term.go
generated
vendored
File diff suppressed because it is too large
401
vendor/github.com/open-policy-agent/opa/ast/transform.go
generated
vendored
@@ -5,427 +5,42 @@
|
||||
package ast
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
v1 "github.com/open-policy-agent/opa/v1/ast"
|
||||
)
|
||||
|
||||
// Transformer defines the interface for transforming AST elements. If the
|
||||
// transformer returns nil and does not indicate an error, the AST element will
|
||||
// be set to nil and no transformations will be applied to children of the
|
||||
// element.
|
||||
type Transformer interface {
|
||||
Transform(interface{}) (interface{}, error)
|
||||
}
|
||||
type Transformer = v1.Transformer
|
||||
|
||||
// Transform iterates the AST and calls the Transform function on the
// Transformer t for x before recursing.
func Transform(t Transformer, x interface{}) (interface{}, error) {

	if term, ok := x.(*Term); ok {
		return Transform(t, term.Value)
	}

	y, err := t.Transform(x)
	if err != nil {
		return x, err
	}

	if y == nil {
		return nil, nil
	}

	var ok bool
	switch y := y.(type) {
	case *Module:
		p, err := Transform(t, y.Package)
		if err != nil {
			return nil, err
		}
		if y.Package, ok = p.(*Package); !ok {
			return nil, fmt.Errorf("illegal transform: %T != %T", y.Package, p)
		}
		for i := range y.Imports {
			imp, err := Transform(t, y.Imports[i])
			if err != nil {
				return nil, err
			}
			if y.Imports[i], ok = imp.(*Import); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y.Imports[i], imp)
			}
		}
		for i := range y.Rules {
			rule, err := Transform(t, y.Rules[i])
			if err != nil {
				return nil, err
			}
			if y.Rules[i], ok = rule.(*Rule); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y.Rules[i], rule)
			}
		}
		for i := range y.Annotations {
			a, err := Transform(t, y.Annotations[i])
			if err != nil {
				return nil, err
			}
			if y.Annotations[i], ok = a.(*Annotations); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y.Annotations[i], a)
			}
		}
		for i := range y.Comments {
			comment, err := Transform(t, y.Comments[i])
			if err != nil {
				return nil, err
			}
			if y.Comments[i], ok = comment.(*Comment); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y.Comments[i], comment)
			}
		}
		return y, nil
	case *Package:
		ref, err := Transform(t, y.Path)
		if err != nil {
			return nil, err
		}
		if y.Path, ok = ref.(Ref); !ok {
			return nil, fmt.Errorf("illegal transform: %T != %T", y.Path, ref)
		}
		return y, nil
	case *Import:
		y.Path, err = transformTerm(t, y.Path)
		if err != nil {
			return nil, err
		}
		if y.Alias, err = transformVar(t, y.Alias); err != nil {
			return nil, err
		}
		return y, nil
	case *Rule:
		if y.Head, err = transformHead(t, y.Head); err != nil {
			return nil, err
		}
		if y.Body, err = transformBody(t, y.Body); err != nil {
			return nil, err
		}
		if y.Else != nil {
			rule, err := Transform(t, y.Else)
			if err != nil {
				return nil, err
			}
			if y.Else, ok = rule.(*Rule); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y.Else, rule)
			}
		}
		return y, nil
	case *Head:
		if y.Reference, err = transformRef(t, y.Reference); err != nil {
			return nil, err
		}
		if y.Name, err = transformVar(t, y.Name); err != nil {
			return nil, err
		}
		if y.Args, err = transformArgs(t, y.Args); err != nil {
			return nil, err
		}
		if y.Key != nil {
			if y.Key, err = transformTerm(t, y.Key); err != nil {
				return nil, err
			}
		}
		if y.Value != nil {
			if y.Value, err = transformTerm(t, y.Value); err != nil {
				return nil, err
			}
		}
		return y, nil
	case Args:
		for i := range y {
			if y[i], err = transformTerm(t, y[i]); err != nil {
				return nil, err
			}
		}
		return y, nil
	case Body:
		for i, e := range y {
			e, err := Transform(t, e)
			if err != nil {
				return nil, err
			}
			if y[i], ok = e.(*Expr); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y[i], e)
			}
		}
		return y, nil
	case *Expr:
		switch ts := y.Terms.(type) {
		case *SomeDecl:
			decl, err := Transform(t, ts)
			if err != nil {
				return nil, err
			}
			if y.Terms, ok = decl.(*SomeDecl); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y, decl)
			}
			return y, nil
		case []*Term:
			for i := range ts {
				if ts[i], err = transformTerm(t, ts[i]); err != nil {
					return nil, err
				}
			}
		case *Term:
			if y.Terms, err = transformTerm(t, ts); err != nil {
				return nil, err
			}
		case *Every:
			if ts.Key != nil {
				ts.Key, err = transformTerm(t, ts.Key)
				if err != nil {
					return nil, err
				}
			}
			ts.Value, err = transformTerm(t, ts.Value)
			if err != nil {
				return nil, err
			}
			ts.Domain, err = transformTerm(t, ts.Domain)
			if err != nil {
				return nil, err
			}
			ts.Body, err = transformBody(t, ts.Body)
			if err != nil {
				return nil, err
			}
			y.Terms = ts
		}
		for i, w := range y.With {
			w, err := Transform(t, w)
			if err != nil {
				return nil, err
			}
			if y.With[i], ok = w.(*With); !ok {
				return nil, fmt.Errorf("illegal transform: %T != %T", y.With[i], w)
			}
		}
		return y, nil
	case *With:
		if y.Target, err = transformTerm(t, y.Target); err != nil {
			return nil, err
		}
		if y.Value, err = transformTerm(t, y.Value); err != nil {
			return nil, err
		}
		return y, nil
	case Ref:
		for i, term := range y {
			if y[i], err = transformTerm(t, term); err != nil {
				return nil, err
			}
		}
		return y, nil
	case *object:
		return y.Map(func(k, v *Term) (*Term, *Term, error) {
			k, err := transformTerm(t, k)
			if err != nil {
				return nil, nil, err
			}
			v, err = transformTerm(t, v)
			if err != nil {
				return nil, nil, err
			}
			return k, v, nil
		})
	case *Array:
		for i := 0; i < y.Len(); i++ {
			v, err := transformTerm(t, y.Elem(i))
			if err != nil {
				return nil, err
			}
			y.set(i, v)
		}
		return y, nil
	case Set:
		y, err = y.Map(func(term *Term) (*Term, error) {
			return transformTerm(t, term)
		})
		if err != nil {
			return nil, err
		}
		return y, nil
	case *ArrayComprehension:
		if y.Term, err = transformTerm(t, y.Term); err != nil {
			return nil, err
		}
		if y.Body, err = transformBody(t, y.Body); err != nil {
			return nil, err
		}
		return y, nil
	case *ObjectComprehension:
		if y.Key, err = transformTerm(t, y.Key); err != nil {
			return nil, err
		}
		if y.Value, err = transformTerm(t, y.Value); err != nil {
			return nil, err
		}
		if y.Body, err = transformBody(t, y.Body); err != nil {
			return nil, err
		}
		return y, nil
	case *SetComprehension:
		if y.Term, err = transformTerm(t, y.Term); err != nil {
			return nil, err
		}
		if y.Body, err = transformBody(t, y.Body); err != nil {
			return nil, err
		}
		return y, nil
	case Call:
		for i := range y {
			if y[i], err = transformTerm(t, y[i]); err != nil {
				return nil, err
			}
		}
		return y, nil
	default:
		return y, nil
	}
	return v1.Transform(t, x)
}

// TransformRefs calls the function f on all references under x.
func TransformRefs(x interface{}, f func(Ref) (Value, error)) (interface{}, error) {
	t := &GenericTransformer{func(x interface{}) (interface{}, error) {
		if r, ok := x.(Ref); ok {
			return f(r)
		}
		return x, nil
	}}
	return Transform(t, x)
	return v1.TransformRefs(x, f)
}

// TransformVars calls the function f on all vars under x.
func TransformVars(x interface{}, f func(Var) (Value, error)) (interface{}, error) {
	t := &GenericTransformer{func(x interface{}) (interface{}, error) {
		if v, ok := x.(Var); ok {
			return f(v)
		}
		return x, nil
	}}
	return Transform(t, x)
	return v1.TransformVars(x, f)
}

// TransformComprehensions calls the function f on all comprehensions under x.
func TransformComprehensions(x interface{}, f func(interface{}) (Value, error)) (interface{}, error) {
	t := &GenericTransformer{func(x interface{}) (interface{}, error) {
		switch x := x.(type) {
		case *ArrayComprehension:
			return f(x)
		case *SetComprehension:
			return f(x)
		case *ObjectComprehension:
			return f(x)
		}
		return x, nil
	}}
	return Transform(t, x)
	return v1.TransformComprehensions(x, f)
}

// GenericTransformer implements the Transformer interface to provide a utility
// to transform AST nodes using a closure.
type GenericTransformer struct {
	f func(interface{}) (interface{}, error)
}
type GenericTransformer = v1.GenericTransformer

// NewGenericTransformer returns a new GenericTransformer that will transform
// AST nodes using the function f.
func NewGenericTransformer(f func(x interface{}) (interface{}, error)) *GenericTransformer {
	return &GenericTransformer{
		f: f,
	}
}

// Transform calls the function f on the GenericTransformer.
func (t *GenericTransformer) Transform(x interface{}) (interface{}, error) {
	return t.f(x)
}

func transformHead(t Transformer, head *Head) (*Head, error) {
	y, err := Transform(t, head)
	if err != nil {
		return nil, err
	}
	h, ok := y.(*Head)
	if !ok {
		return nil, fmt.Errorf("illegal transform: %T != %T", head, y)
	}
	return h, nil
}

func transformArgs(t Transformer, args Args) (Args, error) {
	y, err := Transform(t, args)
	if err != nil {
		return nil, err
	}
	a, ok := y.(Args)
	if !ok {
		return nil, fmt.Errorf("illegal transform: %T != %T", args, y)
	}
	return a, nil
}

func transformBody(t Transformer, body Body) (Body, error) {
	y, err := Transform(t, body)
	if err != nil {
		return nil, err
	}
	r, ok := y.(Body)
	if !ok {
		return nil, fmt.Errorf("illegal transform: %T != %T", body, y)
	}
	return r, nil
}

func transformTerm(t Transformer, term *Term) (*Term, error) {
	v, err := transformValue(t, term.Value)
	if err != nil {
		return nil, err
	}
	r := &Term{
		Value:    v,
		Location: term.Location,
	}
	return r, nil
}

func transformValue(t Transformer, v Value) (Value, error) {
	v1, err := Transform(t, v)
	if err != nil {
		return nil, err
	}
	r, ok := v1.(Value)
	if !ok {
		return nil, fmt.Errorf("illegal transform: %T != %T", v, v1)
	}
	return r, nil
}

func transformVar(t Transformer, v Var) (Var, error) {
	v1, err := Transform(t, v)
	if err != nil {
		return "", err
	}
	r, ok := v1.(Var)
	if !ok {
		return "", fmt.Errorf("illegal transform: %T != %T", v, v1)
	}
	return r, nil
}

func transformRef(t Transformer, r Ref) (Ref, error) {
	r1, err := Transform(t, r)
	if err != nil {
		return nil, err
	}
	r2, ok := r1.(Ref)
	if !ok {
		return nil, fmt.Errorf("illegal transform: %T != %T", r, r2)
	}
	return r2, nil
	return v1.NewGenericTransformer(f)
}

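The removed `Transform` implementation above repeats one pattern everywhere: apply the transformer to a node first, then recurse into its children and type-assert each result back into the field it came from. A stdlib-only sketch of that pattern on a toy expression tree; the names `node`, `lit`, `add`, `transform`, and `fold` are illustrative, not part of OPA:

```go
package main

import "fmt"

// node is a toy AST: either a literal int or a binary addition.
type node interface{}

type lit struct{ n int }
type add struct{ l, r node }

// transform mirrors ast.Transform's shape: apply f to x first, then
// recurse into the children and reassemble the node from the results.
func transform(f func(node) (node, error), x node) (node, error) {
	y, err := f(x)
	if err != nil {
		return nil, err
	}
	if a, ok := y.(add); ok {
		l, err := transform(f, a.l)
		if err != nil {
			return nil, err
		}
		r, err := transform(f, a.r)
		if err != nil {
			return nil, err
		}
		return add{l, r}, nil
	}
	return y, nil
}

// fold rewrites an addition of two literals into a literal, the same way a
// GenericTransformer closure rewrites matching AST nodes in place.
func fold(x node) (node, error) {
	if a, ok := x.(add); ok {
		l, lok := a.l.(lit)
		r, rok := a.r.(lit)
		if lok && rok {
			return lit{l.n + r.n}, nil
		}
	}
	return x, nil
}

func main() {
	out, err := transform(fold, add{lit{1}, lit{2}})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // prints {3}
}
```

The type assertions in the real code (`p.(*Package)`, `rule.(*Rule)`, and so on) exist because the transformer closure returns `interface{}`; the `illegal transform` errors fire when a closure returns a value of the wrong type for the slot it is written back into.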
225
vendor/github.com/open-policy-agent/opa/ast/unify.go
generated
vendored
@@ -4,232 +4,11 @@

package ast

func isRefSafe(ref Ref, safe VarSet) bool {
	switch head := ref[0].Value.(type) {
	case Var:
		return safe.Contains(head)
	case Call:
		return isCallSafe(head, safe)
	default:
		for v := range ref[0].Vars() {
			if !safe.Contains(v) {
				return false
			}
		}
		return true
	}
}

func isCallSafe(call Call, safe VarSet) bool {
	vis := NewVarVisitor().WithParams(SafetyCheckVisitorParams)
	vis.Walk(call)
	unsafe := vis.Vars().Diff(safe)
	return len(unsafe) == 0
}
import v1 "github.com/open-policy-agent/opa/v1/ast"

// Unify returns a set of variables that will be unified when the equality expression defined by
// terms a and b is evaluated. The unifier assumes that variables in the VarSet safe are already
// unified.
func Unify(safe VarSet, a *Term, b *Term) VarSet {
	u := &unifier{
		safe:    safe,
		unified: VarSet{},
		unknown: map[Var]VarSet{},
	}
	u.unify(a, b)
	return u.unified
}

type unifier struct {
	safe    VarSet
	unified VarSet
	unknown map[Var]VarSet
}

func (u *unifier) isSafe(x Var) bool {
	return u.safe.Contains(x) || u.unified.Contains(x)
}

func (u *unifier) unify(a *Term, b *Term) {

	switch a := a.Value.(type) {

	case Var:
		switch b := b.Value.(type) {
		case Var:
			if u.isSafe(b) {
				u.markSafe(a)
			} else if u.isSafe(a) {
				u.markSafe(b)
			} else {
				u.markUnknown(a, b)
				u.markUnknown(b, a)
			}
		case *Array, Object:
			u.unifyAll(a, b)
		case Ref:
			if isRefSafe(b, u.safe) {
				u.markSafe(a)
			}
		case Call:
			if isCallSafe(b, u.safe) {
				u.markSafe(a)
			}
		default:
			u.markSafe(a)
		}

	case Ref:
		if isRefSafe(a, u.safe) {
			switch b := b.Value.(type) {
			case Var:
				u.markSafe(b)
			case *Array, Object:
				u.markAllSafe(b)
			}
		}

	case Call:
		if isCallSafe(a, u.safe) {
			switch b := b.Value.(type) {
			case Var:
				u.markSafe(b)
			case *Array, Object:
				u.markAllSafe(b)
			}
		}
	case *ArrayComprehension:
		switch b := b.Value.(type) {
		case Var:
			u.markSafe(b)
		case *Array:
			u.markAllSafe(b)
		}
	case *ObjectComprehension:
		switch b := b.Value.(type) {
		case Var:
			u.markSafe(b)
		case *object:
			u.markAllSafe(b)
		}
	case *SetComprehension:
		switch b := b.Value.(type) {
		case Var:
			u.markSafe(b)
		}

	case *Array:
		switch b := b.Value.(type) {
		case Var:
			u.unifyAll(b, a)
		case *ArrayComprehension, *ObjectComprehension, *SetComprehension:
			u.markAllSafe(a)
		case Ref:
			if isRefSafe(b, u.safe) {
				u.markAllSafe(a)
			}
		case Call:
			if isCallSafe(b, u.safe) {
				u.markAllSafe(a)
			}
		case *Array:
			if a.Len() == b.Len() {
				for i := 0; i < a.Len(); i++ {
					u.unify(a.Elem(i), b.Elem(i))
				}
			}
		}

	case *object:
		switch b := b.Value.(type) {
		case Var:
			u.unifyAll(b, a)
		case Ref:
			if isRefSafe(b, u.safe) {
				u.markAllSafe(a)
			}
		case Call:
			if isCallSafe(b, u.safe) {
				u.markAllSafe(a)
			}
		case *object:
			if a.Len() == b.Len() {
				_ = a.Iter(func(k, v *Term) error {
					if v2 := b.Get(k); v2 != nil {
						u.unify(v, v2)
					}
					return nil
				}) // impossible to return error
			}
		}

	default:
		switch b := b.Value.(type) {
		case Var:
			u.markSafe(b)
		}
	}
}

func (u *unifier) markAllSafe(x Value) {
	vis := u.varVisitor()
	vis.Walk(x)
	for v := range vis.Vars() {
		u.markSafe(v)
	}
}

func (u *unifier) markSafe(x Var) {
	u.unified.Add(x)

	// Add dependencies of 'x' to safe set
	vs := u.unknown[x]
	delete(u.unknown, x)
	for v := range vs {
		u.markSafe(v)
	}

	// Add dependants of 'x' to safe set if they have no more
	// dependencies.
	for v, deps := range u.unknown {
		if deps.Contains(x) {
			delete(deps, x)
			if len(deps) == 0 {
				u.markSafe(v)
			}
		}
	}
}

func (u *unifier) markUnknown(a, b Var) {
	if _, ok := u.unknown[a]; !ok {
		u.unknown[a] = NewVarSet()
	}
	u.unknown[a].Add(b)
}

func (u *unifier) unifyAll(a Var, b Value) {
	if u.isSafe(a) {
		u.markAllSafe(b)
	} else {
		vis := u.varVisitor()
		vis.Walk(b)
		unsafe := vis.Vars().Diff(u.safe).Diff(u.unified)
		if len(unsafe) == 0 {
			u.markSafe(a)
		} else {
			for v := range unsafe {
				u.markUnknown(a, v)
			}
		}
	}
}

func (u *unifier) varVisitor() *VarVisitor {
	return NewVarVisitor().WithParams(VarVisitorParams{
		SkipRefHead:    true,
		SkipObjectKeys: true,
		SkipClosures:   true,
	})
	return v1.Unify(safe, a, b)
}

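The removed `markSafe`/`markUnknown` pair above maintains a small dependency graph: a variable waits in `unknown` on the variables it was unified against, and when one of those becomes safe, every dependant whose last dependency just resolved becomes safe transitively. A stdlib-only sketch of that propagation; `markSafe`, the `deps` map shape, and the variable names are illustrative stand-ins for the unifier's fields:

```go
package main

import "fmt"

// markSafe mirrors the unifier's bookkeeping: deps maps a variable to the
// set of variables it still waits on. Marking v safe releases everything
// that was waiting only on v, recursively.
func markSafe(safe map[string]bool, deps map[string]map[string]bool, v string) {
	safe[v] = true
	// Variables v was waiting on are treated as resolved dependencies of v.
	vs := deps[v]
	delete(deps, v)
	for d := range vs {
		if !safe[d] {
			markSafe(safe, deps, d)
		}
	}
	// Dependants of v with no remaining dependencies become safe too.
	for w, wd := range deps {
		if wd[v] {
			delete(wd, v)
			if len(wd) == 0 && !safe[w] {
				markSafe(safe, deps, w)
			}
		}
	}
}

func main() {
	safe := map[string]bool{}
	// y waits on x, z waits on y (e.g. from the equalities y = x, z = y).
	deps := map[string]map[string]bool{
		"y": {"x": true},
		"z": {"y": true},
	}
	markSafe(safe, deps, "x") // grounding x makes y, then z, safe
	fmt.Println(safe["y"], safe["z"]) // prints true true
}
```

This is why `Unify` can return a full set of newly ground variables from a single equality: safety flows through the recorded unknowns rather than requiring repeated passes.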
89
vendor/github.com/open-policy-agent/opa/ast/varset.go
generated
vendored
@@ -5,96 +5,13 @@

package ast

import (
	"fmt"
	"sort"
	v1 "github.com/open-policy-agent/opa/v1/ast"
)

// VarSet represents a set of variables.
type VarSet map[Var]struct{}
type VarSet = v1.VarSet

// NewVarSet returns a new VarSet containing the specified variables.
func NewVarSet(vs ...Var) VarSet {
	s := VarSet{}
	for _, v := range vs {
		s.Add(v)
	}
	return s
}

// Add updates the set to include the variable "v".
func (s VarSet) Add(v Var) {
	s[v] = struct{}{}
}

// Contains returns true if the set contains the variable "v".
func (s VarSet) Contains(v Var) bool {
	_, ok := s[v]
	return ok
}

// Copy returns a shallow copy of the VarSet.
func (s VarSet) Copy() VarSet {
	cpy := VarSet{}
	for v := range s {
		cpy.Add(v)
	}
	return cpy
}

// Diff returns a VarSet containing variables in s that are not in vs.
func (s VarSet) Diff(vs VarSet) VarSet {
	r := VarSet{}
	for v := range s {
		if !vs.Contains(v) {
			r.Add(v)
		}
	}
	return r
}

// Equal returns true if s contains exactly the same elements as vs.
func (s VarSet) Equal(vs VarSet) bool {
	if len(s.Diff(vs)) > 0 {
		return false
	}
	return len(vs.Diff(s)) == 0
}

// Intersect returns a VarSet containing variables in s that are in vs.
func (s VarSet) Intersect(vs VarSet) VarSet {
	r := VarSet{}
	for v := range s {
		if vs.Contains(v) {
			r.Add(v)
		}
	}
	return r
}

// Sorted returns a sorted slice of vars from s.
func (s VarSet) Sorted() []Var {
	sorted := make([]Var, 0, len(s))
	for v := range s {
		sorted = append(sorted, v)
	}
	sort.Slice(sorted, func(i, j int) bool {
		return sorted[i].Compare(sorted[j]) < 0
	})
	return sorted
}

// Update merges the other VarSet into this VarSet.
func (s VarSet) Update(vs VarSet) {
	for v := range vs {
		s.Add(v)
	}
}

func (s VarSet) String() string {
	tmp := make([]string, 0, len(s))
	for v := range s {
		tmp = append(tmp, string(v))
	}
	sort.Strings(tmp)
	return fmt.Sprintf("%v", tmp)
	return v1.NewVarSet(vs...)
}

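The removed `VarSet` above is a plain map-backed set, and `Diff`/`Intersect`/`Update` are the usual set operations. A minimal generic sketch with the same semantics, stdlib only (assumes Go 1.18+ generics; the `set` type and method names are illustrative, not OPA API):

```go
package main

import "fmt"

// set is a map-backed set, the same representation VarSet uses.
type set[T comparable] map[T]struct{}

func (s set[T]) add(v T)           { s[v] = struct{}{} }
func (s set[T]) contains(v T) bool { _, ok := s[v]; return ok }

// diff returns elements of s not in vs, matching VarSet.Diff.
func (s set[T]) diff(vs set[T]) set[T] {
	r := set[T]{}
	for v := range s {
		if !vs.contains(v) {
			r.add(v)
		}
	}
	return r
}

// intersect returns elements of s also in vs, matching VarSet.Intersect.
func (s set[T]) intersect(vs set[T]) set[T] {
	r := set[T]{}
	for v := range s {
		if vs.contains(v) {
			r.add(v)
		}
	}
	return r
}

func main() {
	a := set[string]{"x": {}, "y": {}}
	b := set[string]{"y": {}, "z": {}}
	fmt.Println(len(a.diff(b)), len(a.intersect(b))) // prints 1 1
}
```

Note how `Equal` in the original is implemented as two `Diff` calls being empty; any set type with `diff` gets equality for free the same way.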
704
vendor/github.com/open-policy-agent/opa/ast/visit.go
generated
vendored
@@ -4,780 +4,120 @@

package ast

import v1 "github.com/open-policy-agent/opa/v1/ast"

// Visitor defines the interface for iterating AST elements. The Visit function
// can return a Visitor w which will be used to visit the children of the AST
// element v. If the Visit function returns nil, the children will not be
// visited.
// Deprecated: use GenericVisitor or another visitor implementation
type Visitor interface {
	Visit(v interface{}) (w Visitor)
}
type Visitor = v1.Visitor

// BeforeAndAfterVisitor wraps Visitor to provide hooks for being called before
// and after the AST has been visited.
// Deprecated: use GenericVisitor or another visitor implementation
type BeforeAndAfterVisitor interface {
	Visitor
	Before(x interface{})
	After(x interface{})
}
type BeforeAndAfterVisitor = v1.BeforeAndAfterVisitor

// Walk iterates the AST by calling the Visit function on the Visitor
// v for x before recursing.
// Deprecated: use GenericVisitor.Walk
func Walk(v Visitor, x interface{}) {
	if bav, ok := v.(BeforeAndAfterVisitor); !ok {
		walk(v, x)
	} else {
		bav.Before(x)
		defer bav.After(x)
		walk(bav, x)
	}
	v1.Walk(v, x)
}

// WalkBeforeAndAfter iterates the AST by calling the Visit function on the
// Visitor v for x before recursing.
// Deprecated: use GenericVisitor.Walk
func WalkBeforeAndAfter(v BeforeAndAfterVisitor, x interface{}) {
	Walk(v, x)
}

func walk(v Visitor, x interface{}) {
	w := v.Visit(x)
	if w == nil {
		return
	}
	switch x := x.(type) {
	case *Module:
		Walk(w, x.Package)
		for i := range x.Imports {
			Walk(w, x.Imports[i])
		}
		for i := range x.Rules {
			Walk(w, x.Rules[i])
		}
		for i := range x.Annotations {
			Walk(w, x.Annotations[i])
		}
		for i := range x.Comments {
			Walk(w, x.Comments[i])
		}
	case *Package:
		Walk(w, x.Path)
	case *Import:
		Walk(w, x.Path)
		Walk(w, x.Alias)
	case *Rule:
		Walk(w, x.Head)
		Walk(w, x.Body)
		if x.Else != nil {
			Walk(w, x.Else)
		}
	case *Head:
		Walk(w, x.Name)
		Walk(w, x.Args)
		if x.Key != nil {
			Walk(w, x.Key)
		}
		if x.Value != nil {
			Walk(w, x.Value)
		}
	case Body:
		for i := range x {
			Walk(w, x[i])
		}
	case Args:
		for i := range x {
			Walk(w, x[i])
		}
	case *Expr:
		switch ts := x.Terms.(type) {
		case *Term, *SomeDecl, *Every:
			Walk(w, ts)
		case []*Term:
			for i := range ts {
				Walk(w, ts[i])
			}
		}
		for i := range x.With {
			Walk(w, x.With[i])
		}
	case *With:
		Walk(w, x.Target)
		Walk(w, x.Value)
	case *Term:
		Walk(w, x.Value)
	case Ref:
		for i := range x {
			Walk(w, x[i])
		}
	case *object:
		x.Foreach(func(k, vv *Term) {
			Walk(w, k)
			Walk(w, vv)
		})
	case *Array:
		x.Foreach(func(t *Term) {
			Walk(w, t)
		})
	case Set:
		x.Foreach(func(t *Term) {
			Walk(w, t)
		})
	case *ArrayComprehension:
		Walk(w, x.Term)
		Walk(w, x.Body)
	case *ObjectComprehension:
		Walk(w, x.Key)
		Walk(w, x.Value)
		Walk(w, x.Body)
	case *SetComprehension:
		Walk(w, x.Term)
		Walk(w, x.Body)
	case Call:
		for i := range x {
			Walk(w, x[i])
		}
	case *Every:
		if x.Key != nil {
			Walk(w, x.Key)
		}
		Walk(w, x.Value)
		Walk(w, x.Domain)
		Walk(w, x.Body)
	case *SomeDecl:
		for i := range x.Symbols {
			Walk(w, x.Symbols[i])
		}
	}
	v1.WalkBeforeAndAfter(v, x)
}

// WalkVars calls the function f on all vars under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkVars(x interface{}, f func(Var) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if v, ok := x.(Var); ok {
			return f(v)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkVars(x, f)
}

// WalkClosures calls the function f on all closures under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkClosures(x interface{}, f func(interface{}) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		switch x := x.(type) {
		case *ArrayComprehension, *ObjectComprehension, *SetComprehension, *Every:
			return f(x)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkClosures(x, f)
}

// WalkRefs calls the function f on all references under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkRefs(x interface{}, f func(Ref) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if r, ok := x.(Ref); ok {
			return f(r)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkRefs(x, f)
}

// WalkTerms calls the function f on all terms under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkTerms(x interface{}, f func(*Term) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if term, ok := x.(*Term); ok {
			return f(term)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkTerms(x, f)
}

// WalkWiths calls the function f on all with modifiers under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkWiths(x interface{}, f func(*With) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if w, ok := x.(*With); ok {
			return f(w)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkWiths(x, f)
}

// WalkExprs calls the function f on all expressions under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkExprs(x interface{}, f func(*Expr) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if r, ok := x.(*Expr); ok {
			return f(r)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkExprs(x, f)
}

// WalkBodies calls the function f on all bodies under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkBodies(x interface{}, f func(Body) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if b, ok := x.(Body); ok {
			return f(b)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkBodies(x, f)
}

// WalkRules calls the function f on all rules under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkRules(x interface{}, f func(*Rule) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if r, ok := x.(*Rule); ok {
			stop := f(r)
			// NOTE(tsandall): since rules cannot be embedded inside of queries
			// we can stop early if there is no else block.
			if stop || r.Else == nil {
				return true
			}
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkRules(x, f)
}

// WalkNodes calls the function f on all nodes under x. If the function f
// returns true, AST nodes under the last node will not be visited.
func WalkNodes(x interface{}, f func(Node) bool) {
	vis := &GenericVisitor{func(x interface{}) bool {
		if n, ok := x.(Node); ok {
			return f(n)
		}
		return false
	}}
	vis.Walk(x)
	v1.WalkNodes(x, f)
}

// GenericVisitor provides a utility to walk over AST nodes using a
// closure. If the closure returns true, the visitor will not walk
// over AST nodes under x.
type GenericVisitor struct {
	f func(x interface{}) bool
}
type GenericVisitor = v1.GenericVisitor

// NewGenericVisitor returns a new GenericVisitor that will invoke the function
// f on AST nodes.
func NewGenericVisitor(f func(x interface{}) bool) *GenericVisitor {
	return &GenericVisitor{f}
}

// Walk iterates the AST by calling the function f on the
// GenericVisitor before recursing. Contrary to the generic Walk, this
// does not require allocating the visitor from heap.
func (vis *GenericVisitor) Walk(x interface{}) {
	if vis.f(x) {
		return
	}

	switch x := x.(type) {
	case *Module:
		vis.Walk(x.Package)
		for i := range x.Imports {
			vis.Walk(x.Imports[i])
		}
		for i := range x.Rules {
			vis.Walk(x.Rules[i])
		}
		for i := range x.Annotations {
			vis.Walk(x.Annotations[i])
		}
		for i := range x.Comments {
			vis.Walk(x.Comments[i])
		}
	case *Package:
		vis.Walk(x.Path)
	case *Import:
		vis.Walk(x.Path)
		vis.Walk(x.Alias)
	case *Rule:
		vis.Walk(x.Head)
		vis.Walk(x.Body)
		if x.Else != nil {
			vis.Walk(x.Else)
		}
	case *Head:
		vis.Walk(x.Name)
		vis.Walk(x.Args)
		if x.Key != nil {
			vis.Walk(x.Key)
		}
		if x.Value != nil {
			vis.Walk(x.Value)
		}
	case Body:
		for i := range x {
			vis.Walk(x[i])
		}
	case Args:
		for i := range x {
			vis.Walk(x[i])
		}
	case *Expr:
		switch ts := x.Terms.(type) {
		case *Term, *SomeDecl, *Every:
			vis.Walk(ts)
		case []*Term:
			for i := range ts {
				vis.Walk(ts[i])
			}
		}
		for i := range x.With {
			vis.Walk(x.With[i])
		}
	case *With:
		vis.Walk(x.Target)
		vis.Walk(x.Value)
	case *Term:
		vis.Walk(x.Value)
	case Ref:
		for i := range x {
			vis.Walk(x[i])
		}
	case *object:
		x.Foreach(func(k, _ *Term) {
			vis.Walk(k)
			vis.Walk(x.Get(k))
		})
	case Object:
		x.Foreach(func(k, _ *Term) {
			vis.Walk(k)
			vis.Walk(x.Get(k))
		})
	case *Array:
		x.Foreach(func(t *Term) {
			vis.Walk(t)
		})
	case Set:
		xSlice := x.Slice()
		for i := range xSlice {
			vis.Walk(xSlice[i])
		}
	case *ArrayComprehension:
		vis.Walk(x.Term)
		vis.Walk(x.Body)
	case *ObjectComprehension:
		vis.Walk(x.Key)
		vis.Walk(x.Value)
		vis.Walk(x.Body)
	case *SetComprehension:
		vis.Walk(x.Term)
		vis.Walk(x.Body)
	case Call:
		for i := range x {
			vis.Walk(x[i])
		}
	case *Every:
		if x.Key != nil {
			vis.Walk(x.Key)
		}
		vis.Walk(x.Value)
		vis.Walk(x.Domain)
		vis.Walk(x.Body)
	case *SomeDecl:
		for i := range x.Symbols {
			vis.Walk(x.Symbols[i])
		}
	}
	return v1.NewGenericVisitor(f)
}

// BeforeAfterVisitor provides a utility to walk over AST nodes using
|
||||
// closures. If the before closure returns true, the visitor will not
|
||||
// walk over AST nodes under x. The after closure is invoked always
|
||||
// after visiting a node.
|
||||
type BeforeAfterVisitor struct {
|
||||
before func(x interface{}) bool
|
||||
after func(x interface{})
|
||||
}
|
||||
type BeforeAfterVisitor = v1.BeforeAfterVisitor
|
||||
|
||||
// NewBeforeAfterVisitor returns a new BeforeAndAfterVisitor that
|
||||
// will invoke the functions before and after AST nodes.
|
||||
func NewBeforeAfterVisitor(before func(x interface{}) bool, after func(x interface{})) *BeforeAfterVisitor {
|
||||
return &BeforeAfterVisitor{before, after}
|
||||
}
|
||||
|
||||
// Walk iterates the AST by calling the functions on the
// BeforeAfterVisitor before and after recursing. Contrary to the
// generic Walk, this does not require allocating the visitor from
// heap.
func (vis *BeforeAfterVisitor) Walk(x interface{}) {
	defer vis.after(x)
	if vis.before(x) {
		return
	}

	switch x := x.(type) {
	case *Module:
		vis.Walk(x.Package)
		for i := range x.Imports {
			vis.Walk(x.Imports[i])
		}
		for i := range x.Rules {
			vis.Walk(x.Rules[i])
		}
		for i := range x.Annotations {
			vis.Walk(x.Annotations[i])
		}
		for i := range x.Comments {
			vis.Walk(x.Comments[i])
		}
	case *Package:
		vis.Walk(x.Path)
	case *Import:
		vis.Walk(x.Path)
		vis.Walk(x.Alias)
	case *Rule:
		vis.Walk(x.Head)
		vis.Walk(x.Body)
		if x.Else != nil {
			vis.Walk(x.Else)
		}
	case *Head:
		if len(x.Reference) > 0 {
			vis.Walk(x.Reference)
		} else {
			vis.Walk(x.Name)
			if x.Key != nil {
				vis.Walk(x.Key)
			}
		}
		vis.Walk(x.Args)
		if x.Value != nil {
			vis.Walk(x.Value)
		}
	case Body:
		for i := range x {
			vis.Walk(x[i])
		}
	case Args:
		for i := range x {
			vis.Walk(x[i])
		}
	case *Expr:
		switch ts := x.Terms.(type) {
		case *Term, *SomeDecl, *Every:
			vis.Walk(ts)
		case []*Term:
			for i := range ts {
				vis.Walk(ts[i])
			}
		}
		for i := range x.With {
			vis.Walk(x.With[i])
		}
	case *With:
		vis.Walk(x.Target)
		vis.Walk(x.Value)
	case *Term:
		vis.Walk(x.Value)
	case Ref:
		for i := range x {
			vis.Walk(x[i])
		}
	case *object:
		x.Foreach(func(k, _ *Term) {
			vis.Walk(k)
			vis.Walk(x.Get(k))
		})
	case Object:
		x.Foreach(func(k, _ *Term) {
			vis.Walk(k)
			vis.Walk(x.Get(k))
		})
	case *Array:
		x.Foreach(func(t *Term) {
			vis.Walk(t)
		})
	case Set:
		xSlice := x.Slice()
		for i := range xSlice {
			vis.Walk(xSlice[i])
		}
	case *ArrayComprehension:
		vis.Walk(x.Term)
		vis.Walk(x.Body)
	case *ObjectComprehension:
		vis.Walk(x.Key)
		vis.Walk(x.Value)
		vis.Walk(x.Body)
	case *SetComprehension:
		vis.Walk(x.Term)
		vis.Walk(x.Body)
	case Call:
		for i := range x {
			vis.Walk(x[i])
		}
	case *Every:
		if x.Key != nil {
			vis.Walk(x.Key)
		}
		vis.Walk(x.Value)
		vis.Walk(x.Domain)
		vis.Walk(x.Body)
	case *SomeDecl:
		for i := range x.Symbols {
			vis.Walk(x.Symbols[i])
		}
	}
	return v1.NewBeforeAfterVisitor(before, after)
}
// VarVisitor walks AST nodes under a given node and collects all encountered
// variables. The collected variables can be controlled by specifying
// VarVisitorParams when creating the visitor.
type VarVisitor struct {
	params VarVisitorParams
	vars   VarSet
}

type VarVisitor = v1.VarVisitor

// VarVisitorParams contains settings for a VarVisitor.
type VarVisitorParams struct {
	SkipRefHead     bool
	SkipRefCallHead bool
	SkipObjectKeys  bool
	SkipClosures    bool
	SkipWithTarget  bool
	SkipSets        bool
}

type VarVisitorParams = v1.VarVisitorParams

// NewVarVisitor returns a new VarVisitor object.
func NewVarVisitor() *VarVisitor {
	return &VarVisitor{
		vars: NewVarSet(),
	}
}

// WithParams sets the parameters in params on vis.
func (vis *VarVisitor) WithParams(params VarVisitorParams) *VarVisitor {
	vis.params = params
	return vis
}

// Vars returns a VarSet that contains collected vars.
func (vis *VarVisitor) Vars() VarSet {
	return vis.vars
}

// visit determines if the VarVisitor will recurse into x: if it returns `true`,
// the visitor will _skip_ that branch of the AST.
func (vis *VarVisitor) visit(v interface{}) bool {
	if vis.params.SkipObjectKeys {
		if o, ok := v.(Object); ok {
			o.Foreach(func(_, v *Term) {
				vis.Walk(v)
			})
			return true
		}
	}
	if vis.params.SkipRefHead {
		if r, ok := v.(Ref); ok {
			rSlice := r[1:]
			for i := range rSlice {
				vis.Walk(rSlice[i])
			}
			return true
		}
	}
	if vis.params.SkipClosures {
		switch v := v.(type) {
		case *ArrayComprehension, *ObjectComprehension, *SetComprehension:
			return true
		case *Expr:
			if ev, ok := v.Terms.(*Every); ok {
				vis.Walk(ev.Domain)
				// We're _not_ walking ev.Body -- that's the closure here
				return true
			}
		}
	}
	if vis.params.SkipWithTarget {
		if v, ok := v.(*With); ok {
			vis.Walk(v.Value)
			return true
		}
	}
	if vis.params.SkipSets {
		if _, ok := v.(Set); ok {
			return true
		}
	}
	if vis.params.SkipRefCallHead {
		switch v := v.(type) {
		case *Expr:
			if terms, ok := v.Terms.([]*Term); ok {
				termSlice := terms[0].Value.(Ref)[1:]
				for i := range termSlice {
					vis.Walk(termSlice[i])
				}
				for i := 1; i < len(terms); i++ {
					vis.Walk(terms[i])
				}
				for i := range v.With {
					vis.Walk(v.With[i])
				}
				return true
			}
		case Call:
			operator := v[0].Value.(Ref)
			for i := 1; i < len(operator); i++ {
				vis.Walk(operator[i])
			}
			for i := 1; i < len(v); i++ {
				vis.Walk(v[i])
			}
			return true
		case *With:
			if ref, ok := v.Target.Value.(Ref); ok {
				refSlice := ref[1:]
				for i := range refSlice {
					vis.Walk(refSlice[i])
				}
			}
			if ref, ok := v.Value.Value.(Ref); ok {
				refSlice := ref[1:]
				for i := range refSlice {
					vis.Walk(refSlice[i])
				}
			} else {
				vis.Walk(v.Value)
			}
			return true
		}
	}
	if v, ok := v.(Var); ok {
		vis.vars.Add(v)
	}
	return false
}

// Walk iterates the AST by calling the function f on the
// GenericVisitor before recursing. Contrary to the generic Walk, this
// does not require allocating the visitor from heap.
func (vis *VarVisitor) Walk(x interface{}) {
	if vis.visit(x) {
		return
	}

	switch x := x.(type) {
	case *Module:
		vis.Walk(x.Package)
		for i := range x.Imports {
			vis.Walk(x.Imports[i])
		}
		for i := range x.Rules {
			vis.Walk(x.Rules[i])
		}
		for i := range x.Comments {
			vis.Walk(x.Comments[i])
		}
	case *Package:
		vis.Walk(x.Path)
	case *Import:
		vis.Walk(x.Path)
		vis.Walk(x.Alias)
	case *Rule:
		vis.Walk(x.Head)
		vis.Walk(x.Body)
		if x.Else != nil {
			vis.Walk(x.Else)
		}
	case *Head:
		if len(x.Reference) > 0 {
			vis.Walk(x.Reference)
		} else {
			vis.Walk(x.Name)
			if x.Key != nil {
				vis.Walk(x.Key)
			}
		}
		vis.Walk(x.Args)

		if x.Value != nil {
			vis.Walk(x.Value)
		}
	case Body:
		for i := range x {
			vis.Walk(x[i])
		}
	case Args:
		for i := range x {
			vis.Walk(x[i])
		}
	case *Expr:
		switch ts := x.Terms.(type) {
		case *Term, *SomeDecl, *Every:
			vis.Walk(ts)
		case []*Term:
			for i := range ts {
				vis.Walk(ts[i])
			}
		}
		for i := range x.With {
			vis.Walk(x.With[i])
		}
	case *With:
		vis.Walk(x.Target)
		vis.Walk(x.Value)
	case *Term:
		vis.Walk(x.Value)
	case Ref:
		for i := range x {
			vis.Walk(x[i])
		}
	case *object:
		x.Foreach(func(k, _ *Term) {
			vis.Walk(k)
			vis.Walk(x.Get(k))
		})
	case *Array:
		x.Foreach(func(t *Term) {
			vis.Walk(t)
		})
	case Set:
		xSlice := x.Slice()
		for i := range xSlice {
			vis.Walk(xSlice[i])
		}
	case *ArrayComprehension:
		vis.Walk(x.Term)
		vis.Walk(x.Body)
	case *ObjectComprehension:
		vis.Walk(x.Key)
		vis.Walk(x.Value)
		vis.Walk(x.Body)
	case *SetComprehension:
		vis.Walk(x.Term)
		vis.Walk(x.Body)
	case Call:
		for i := range x {
			vis.Walk(x[i])
		}
	case *Every:
		if x.Key != nil {
			vis.Walk(x.Key)
		}
		vis.Walk(x.Value)
		vis.Walk(x.Domain)
		vis.Walk(x.Body)
	case *SomeDecl:
		for i := range x.Symbols {
			vis.Walk(x.Symbols[i])
		}
	}
	return v1.NewVarVisitor()
}
1688	vendor/github.com/open-policy-agent/opa/bundle/bundle.go	generated	vendored
File diff suppressed because it is too large
8	vendor/github.com/open-policy-agent/opa/bundle/doc.go	generated	vendored	Normal file
@@ -0,0 +1,8 @@
// Copyright 2024 The OPA Authors. All rights reserved.
// Use of this source code is governed by an Apache2
// license that can be found in the LICENSE file.

// Deprecated: This package is intended for older projects transitioning from OPA v0.x and will remain for the lifetime of OPA v1.x, but its use is not recommended.
// For newer features and behaviours, such as defaulting to the Rego v1 syntax, use the corresponding components in the [github.com/open-policy-agent/opa/v1] package instead.
// See https://www.openpolicyagent.org/docs/latest/v0-compatibility/ for more information.
package bundle
482	vendor/github.com/open-policy-agent/opa/bundle/file.go	generated	vendored
@@ -1,508 +1,50 @@
package bundle

import (
	"archive/tar"
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"sync"

	"github.com/open-policy-agent/opa/loader/filter"

	"github.com/open-policy-agent/opa/storage"
	v1 "github.com/open-policy-agent/opa/v1/bundle"
)

const maxSizeLimitBytesErrMsg = "bundle file %s size (%d bytes) exceeds configured size_limit_bytes (%d bytes)"

// Descriptor contains information about a file and
// can be used to read the file contents.
type Descriptor struct {
	url       string
	path      string
	reader    io.Reader
	closer    io.Closer
	closeOnce *sync.Once
}

// lazyFile defers reading the file until the first call of Read
type lazyFile struct {
	path string
	file *os.File
}

// newLazyFile creates a new instance of lazyFile
func newLazyFile(path string) *lazyFile {
	return &lazyFile{path: path}
}

// Read implements io.Reader. It will check if the file has been opened
// and open it if it has not before attempting to read using the file's
// read method
func (f *lazyFile) Read(b []byte) (int, error) {
	var err error

	if f.file == nil {
		if f.file, err = os.Open(f.path); err != nil {
			return 0, fmt.Errorf("failed to open file %s: %w", f.path, err)
		}
	}

	return f.file.Read(b)
}

// Close closes the lazy file if it has been opened using the file's
// close method
func (f *lazyFile) Close() error {
	if f.file != nil {
		return f.file.Close()
	}

	return nil
}
type Descriptor = v1.Descriptor

func NewDescriptor(url, path string, reader io.Reader) *Descriptor {
	return &Descriptor{
		url:    url,
		path:   path,
		reader: reader,
	}
	return v1.NewDescriptor(url, path, reader)
}

func (d *Descriptor) WithCloser(closer io.Closer) *Descriptor {
	d.closer = closer
	d.closeOnce = new(sync.Once)
	return d
}

// Path returns the path of the file.
func (d *Descriptor) Path() string {
	return d.path
}

// URL returns the url of the file.
func (d *Descriptor) URL() string {
	return d.url
}

// Read will read all the contents from the file the Descriptor refers to
// into the dest writer up to n bytes. Will return an io.EOF error
// if EOF is encountered before n bytes are read.
func (d *Descriptor) Read(dest io.Writer, n int64) (int64, error) {
	n, err := io.CopyN(dest, d.reader, n)
	return n, err
}

// Close the file, on some Loader implementations this might be a no-op.
// It should *always* be called regardless of file.
func (d *Descriptor) Close() error {
	var err error
	if d.closer != nil {
		d.closeOnce.Do(func() {
			err = d.closer.Close()
		})
	}
	return err
}

type PathFormat int64
type PathFormat = v1.PathFormat

const (
	Chrooted PathFormat = iota
	SlashRooted
	Passthrough
	Chrooted    = v1.Chrooted
	SlashRooted = v1.SlashRooted
	Passthrough = v1.Passthrough
)

// DirectoryLoader defines an interface which can be used to load
// files from a directory by iterating over each one in the tree.
type DirectoryLoader interface {
	// NextFile must return io.EOF if there is no next value. The returned
	// descriptor should *always* be closed when no longer needed.
	NextFile() (*Descriptor, error)
	WithFilter(filter filter.LoaderFilter) DirectoryLoader
	WithPathFormat(PathFormat) DirectoryLoader
	WithSizeLimitBytes(sizeLimitBytes int64) DirectoryLoader
	WithFollowSymlinks(followSymlinks bool) DirectoryLoader
}

type dirLoader struct {
	root              string
	files             []string
	idx               int
	filter            filter.LoaderFilter
	pathFormat        PathFormat
	maxSizeLimitBytes int64
	followSymlinks    bool
}

// Normalize root directory, ex "./src/bundle" -> "src/bundle"
// We don't need an absolute path, but this makes the joined/trimmed
// paths more uniform.
func normalizeRootDirectory(root string) string {
	if len(root) > 1 {
		if root[0] == '.' && root[1] == filepath.Separator {
			if len(root) == 2 {
				root = root[:1] // "./" -> "."
			} else {
				root = root[2:] // remove leading "./"
			}
		}
	}
	return root
}

type DirectoryLoader = v1.DirectoryLoader

// NewDirectoryLoader returns a basic DirectoryLoader implementation
// that will load files from a given root directory path.
func NewDirectoryLoader(root string) DirectoryLoader {
	d := dirLoader{
		root:       normalizeRootDirectory(root),
		pathFormat: Chrooted,
	}
	return &d
}

// WithFilter specifies the filter object to use to filter files while loading bundles
func (d *dirLoader) WithFilter(filter filter.LoaderFilter) DirectoryLoader {
	d.filter = filter
	return d
}

// WithPathFormat specifies how a path is formatted in a Descriptor
func (d *dirLoader) WithPathFormat(pathFormat PathFormat) DirectoryLoader {
	d.pathFormat = pathFormat
	return d
}

// WithSizeLimitBytes specifies the maximum size of any file in the directory to read
func (d *dirLoader) WithSizeLimitBytes(sizeLimitBytes int64) DirectoryLoader {
	d.maxSizeLimitBytes = sizeLimitBytes
	return d
}

// WithFollowSymlinks specifies whether to follow symlinks when loading files from the directory
func (d *dirLoader) WithFollowSymlinks(followSymlinks bool) DirectoryLoader {
	d.followSymlinks = followSymlinks
	return d
}

func formatPath(fileName string, root string, pathFormat PathFormat) string {
	switch pathFormat {
	case SlashRooted:
		if !strings.HasPrefix(fileName, string(filepath.Separator)) {
			return string(filepath.Separator) + fileName
		}
		return fileName
	case Chrooted:
		// Trim off the root directory and return path as if chrooted
		result := strings.TrimPrefix(fileName, filepath.FromSlash(root))
		if root == "." && filepath.Base(fileName) == ManifestExt {
			result = fileName
		}
		if !strings.HasPrefix(result, string(filepath.Separator)) {
			result = string(filepath.Separator) + result
		}
		return result
	case Passthrough:
		fallthrough
	default:
		return fileName
	}
}
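The Chrooted branch of formatPath can be illustrated with a minimal sketch. The `chrootPath` helper below is hypothetical, omits the ManifestExt special case, and assumes a POSIX path separator:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// chrootPath mirrors formatPath's Chrooted branch: strip the loader
// root prefix and present the remainder as an absolute, chroot-style
// path rooted at "/".
func chrootPath(fileName, root string) string {
	result := strings.TrimPrefix(fileName, filepath.FromSlash(root))
	if !strings.HasPrefix(result, string(filepath.Separator)) {
		result = string(filepath.Separator) + result
	}
	return result
}

func main() {
	// A file under root "src/bundle" is reported as if "src/bundle"
	// were the filesystem root.
	fmt.Println(chrootPath("src/bundle/policy.rego", "src/bundle"))
}
```

On a POSIX system this prints `/policy.rego`, which is why bundle contents load under the same storage paths regardless of where the bundle directory lives on disk.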

// NextFile iterates to the next file in the directory tree
// and returns a file Descriptor for the file.
func (d *dirLoader) NextFile() (*Descriptor, error) {
	// build a list of all files we will iterate over and read, but only one time
	if d.files == nil {
		d.files = []string{}
		err := filepath.Walk(d.root, func(path string, info os.FileInfo, _ error) error {
			if info == nil {
				return nil
			}

			if info.Mode().IsRegular() {
				if d.filter != nil && d.filter(filepath.ToSlash(path), info, getdepth(path, false)) {
					return nil
				}
				if d.maxSizeLimitBytes > 0 && info.Size() > d.maxSizeLimitBytes {
					return fmt.Errorf(maxSizeLimitBytesErrMsg, strings.TrimPrefix(path, "/"), info.Size(), d.maxSizeLimitBytes)
				}
				d.files = append(d.files, path)
			} else if d.followSymlinks && info.Mode().Type()&fs.ModeSymlink == fs.ModeSymlink {
				if d.filter != nil && d.filter(filepath.ToSlash(path), info, getdepth(path, false)) {
					return nil
				}
				if d.maxSizeLimitBytes > 0 && info.Size() > d.maxSizeLimitBytes {
					return fmt.Errorf(maxSizeLimitBytesErrMsg, strings.TrimPrefix(path, "/"), info.Size(), d.maxSizeLimitBytes)
				}
				d.files = append(d.files, path)
			} else if info.Mode().IsDir() {
				if d.filter != nil && d.filter(filepath.ToSlash(path), info, getdepth(path, true)) {
					return filepath.SkipDir
				}
			}
			return nil
		})
		if err != nil {
			return nil, fmt.Errorf("failed to list files: %w", err)
		}
	}

	// If done reading files then just return io.EOF
	// errors for each NextFile() call
	if d.idx >= len(d.files) {
		return nil, io.EOF
	}

	fileName := d.files[d.idx]
	d.idx++
	fh := newLazyFile(fileName)

	cleanedPath := formatPath(fileName, d.root, d.pathFormat)
	f := NewDescriptor(filepath.Join(d.root, cleanedPath), cleanedPath, fh).WithCloser(fh)
	return f, nil
}

type tarballLoader struct {
	baseURL           string
	r                 io.Reader
	tr                *tar.Reader
	files             []file
	idx               int
	filter            filter.LoaderFilter
	skipDir           map[string]struct{}
	pathFormat        PathFormat
	maxSizeLimitBytes int64
}

type file struct {
	name   string
	reader io.Reader
	path   storage.Path
	raw    []byte
	return v1.NewDirectoryLoader(root)
}

// NewTarballLoader is deprecated. Use NewTarballLoaderWithBaseURL instead.
func NewTarballLoader(r io.Reader) DirectoryLoader {
	l := tarballLoader{
		r:          r,
		pathFormat: Passthrough,
	}
	return &l
	return v1.NewTarballLoader(r)
}

// NewTarballLoaderWithBaseURL returns a new DirectoryLoader that reads
// files out of a gzipped tar archive. The file URLs will be prefixed
// with the baseURL.
func NewTarballLoaderWithBaseURL(r io.Reader, baseURL string) DirectoryLoader {
	l := tarballLoader{
		baseURL:    strings.TrimSuffix(baseURL, "/"),
		r:          r,
		pathFormat: Passthrough,
	}
	return &l
}

// WithFilter specifies the filter object to use to filter files while loading bundles
func (t *tarballLoader) WithFilter(filter filter.LoaderFilter) DirectoryLoader {
	t.filter = filter
	return t
}

// WithPathFormat specifies how a path is formatted in a Descriptor
func (t *tarballLoader) WithPathFormat(pathFormat PathFormat) DirectoryLoader {
	t.pathFormat = pathFormat
	return t
}

// WithSizeLimitBytes specifies the maximum size of any file in the tarball to read
func (t *tarballLoader) WithSizeLimitBytes(sizeLimitBytes int64) DirectoryLoader {
	t.maxSizeLimitBytes = sizeLimitBytes
	return t
}

// WithFollowSymlinks is a no-op for tarballLoader
func (t *tarballLoader) WithFollowSymlinks(_ bool) DirectoryLoader {
	return t
}

// NextFile iterates to the next file in the directory tree
// and returns a file Descriptor for the file.
func (t *tarballLoader) NextFile() (*Descriptor, error) {
	if t.tr == nil {
		gr, err := gzip.NewReader(t.r)
		if err != nil {
			return nil, fmt.Errorf("archive read failed: %w", err)
		}

		t.tr = tar.NewReader(gr)
	}

	if t.files == nil {
		t.files = []file{}

		if t.skipDir == nil {
			t.skipDir = map[string]struct{}{}
		}

		for {
			header, err := t.tr.Next()

			if err == io.EOF {
				break
			}

			if err != nil {
				return nil, err
			}

			// Keep iterating on the archive until we find a normal file
			if header.Typeflag == tar.TypeReg {

				if t.filter != nil {

					if t.filter(filepath.ToSlash(header.Name), header.FileInfo(), getdepth(header.Name, false)) {
						continue
					}

					basePath := strings.Trim(filepath.Dir(filepath.ToSlash(header.Name)), "/")

					// check if the directory is to be skipped
					if _, ok := t.skipDir[basePath]; ok {
						continue
					}

					match := false
					for p := range t.skipDir {
						if strings.HasPrefix(basePath, p) {
							match = true
							break
						}
					}

					if match {
						continue
					}
				}

				if t.maxSizeLimitBytes > 0 && header.Size > t.maxSizeLimitBytes {
					return nil, fmt.Errorf(maxSizeLimitBytesErrMsg, header.Name, header.Size, t.maxSizeLimitBytes)
				}

				f := file{name: header.Name}

				// Note(philipc): We rely on the previous size check in this loop for safety.
				buf := bytes.NewBuffer(make([]byte, 0, header.Size))
				if _, err := io.Copy(buf, t.tr); err != nil {
					return nil, fmt.Errorf("failed to copy file %s: %w", header.Name, err)
				}

				f.reader = buf

				t.files = append(t.files, f)
			} else if header.Typeflag == tar.TypeDir {
				cleanedPath := filepath.ToSlash(header.Name)
				if t.filter != nil && t.filter(cleanedPath, header.FileInfo(), getdepth(header.Name, true)) {
					t.skipDir[strings.Trim(cleanedPath, "/")] = struct{}{}
				}
			}
		}
	}

	// If done reading files then just return io.EOF
	// errors for each NextFile() call
	if t.idx >= len(t.files) {
		return nil, io.EOF
	}

	f := t.files[t.idx]
	t.idx++

	cleanedPath := formatPath(f.name, "", t.pathFormat)
	d := NewDescriptor(filepath.Join(t.baseURL, cleanedPath), cleanedPath, f.reader)
	return d, nil
}

// Next implements the storage.Iterator interface.
// It iterates to the next policy or data file in the directory tree
// and returns a storage.Update for the file.
func (it *iterator) Next() (*storage.Update, error) {
	if it.files == nil {
		it.files = []file{}

		for _, item := range it.raw {
			f := file{name: item.Path}

			fpath := strings.TrimLeft(normalizePath(filepath.Dir(f.name)), "/.")
			if strings.HasSuffix(f.name, RegoExt) {
				fpath = strings.Trim(normalizePath(f.name), "/")
			}

			p, ok := storage.ParsePathEscaped("/" + fpath)
			if !ok {
				return nil, fmt.Errorf("storage path invalid: %v", f.name)
			}
			f.path = p

			f.raw = item.Value

			it.files = append(it.files, f)
		}

		sortFilePathAscend(it.files)
	}

	// If done reading files then just return io.EOF
	// errors for each NextFile() call
	if it.idx >= len(it.files) {
		return nil, io.EOF
	}

	f := it.files[it.idx]
	it.idx++

	isPolicy := false
	if strings.HasSuffix(f.name, RegoExt) {
		isPolicy = true
	}

	return &storage.Update{
		Path:     f.path,
		Value:    f.raw,
		IsPolicy: isPolicy,
	}, nil
}

type iterator struct {
	raw   []Raw
	files []file
	idx   int
	return v1.NewTarballLoaderWithBaseURL(r, baseURL)
}

func NewIterator(raw []Raw) storage.Iterator {
	it := iterator{
		raw: raw,
	}
	return &it
}

func sortFilePathAscend(files []file) {
	sort.Slice(files, func(i, j int) bool {
		return len(files[i].path) < len(files[j].path)
	})
}

func getdepth(path string, isDir bool) int {
	if isDir {
		cleanedPath := strings.Trim(filepath.ToSlash(path), "/")
		return len(strings.Split(cleanedPath, "/"))
	}

	basePath := strings.Trim(filepath.Dir(filepath.ToSlash(path)), "/")
	return len(strings.Split(basePath, "/"))
	return v1.NewIterator(raw)
}
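The depth rule in getdepth is easy to misread: a directory counts its own path segments, while a regular file counts only its parent directories, so both report the same depth to a loader filter. A self-contained sketch (the `depth` function below is an illustrative re-statement, not OPA's exported API):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// depth mirrors getdepth: directories count their own segments,
// files count only their parent directories.
func depth(path string, isDir bool) int {
	if isDir {
		cleaned := strings.Trim(filepath.ToSlash(path), "/")
		return len(strings.Split(cleaned, "/"))
	}
	base := strings.Trim(filepath.Dir(filepath.ToSlash(path)), "/")
	return len(strings.Split(base, "/"))
}

func main() {
	fmt.Println(depth("a/b/policy.rego", false)) // parent is a/b
	fmt.Println(depth("a/b", true))              // the directory itself
}
```

Both calls report 2, which keeps filter decisions consistent whether the filter sees the directory entry or a file inside it.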
127	vendor/github.com/open-policy-agent/opa/bundle/filefs.go	generated	vendored
@@ -4,140 +4,19 @@
package bundle

import (
	"fmt"
	"io"
	"io/fs"
	"path/filepath"
	"sync"

	"github.com/open-policy-agent/opa/loader/filter"
	v1 "github.com/open-policy-agent/opa/v1/bundle"
)

const (
	defaultFSLoaderRoot = "."
)

type dirLoaderFS struct {
	sync.Mutex
	filesystem        fs.FS
	files             []string
	idx               int
	filter            filter.LoaderFilter
	root              string
	pathFormat        PathFormat
	maxSizeLimitBytes int64
	followSymlinks    bool
}

// NewFSLoader returns a basic DirectoryLoader implementation
// that will load files from a fs.FS interface
func NewFSLoader(filesystem fs.FS) (DirectoryLoader, error) {
	return NewFSLoaderWithRoot(filesystem, defaultFSLoaderRoot), nil
	return v1.NewFSLoader(filesystem)
}

// NewFSLoaderWithRoot returns a basic DirectoryLoader implementation
// that will load files from a fs.FS interface at the supplied root
func NewFSLoaderWithRoot(filesystem fs.FS, root string) DirectoryLoader {
	d := dirLoaderFS{
		filesystem: filesystem,
		root:       normalizeRootDirectory(root),
		pathFormat: Chrooted,
	}

	return &d
}

func (d *dirLoaderFS) walkDir(path string, dirEntry fs.DirEntry, err error) error {
	if err != nil {
		return err
	}

	if dirEntry != nil {
		info, err := dirEntry.Info()
		if err != nil {
			return err
		}

		if dirEntry.Type().IsRegular() {
			if d.filter != nil && d.filter(filepath.ToSlash(path), info, getdepth(path, false)) {
				return nil
			}

			if d.maxSizeLimitBytes > 0 && info.Size() > d.maxSizeLimitBytes {
				return fmt.Errorf("file %s size %d exceeds limit of %d", path, info.Size(), d.maxSizeLimitBytes)
			}

			d.files = append(d.files, path)
		} else if dirEntry.Type()&fs.ModeSymlink != 0 && d.followSymlinks {
			if d.filter != nil && d.filter(filepath.ToSlash(path), info, getdepth(path, false)) {
				return nil
			}

			if d.maxSizeLimitBytes > 0 && info.Size() > d.maxSizeLimitBytes {
				return fmt.Errorf("file %s size %d exceeds limit of %d", path, info.Size(), d.maxSizeLimitBytes)
			}

			d.files = append(d.files, path)
		} else if dirEntry.Type().IsDir() {
			if d.filter != nil && d.filter(filepath.ToSlash(path), info, getdepth(path, true)) {
				return fs.SkipDir
			}
		}
	}
	return nil
}

// WithFilter specifies the filter object to use to filter files while loading bundles
func (d *dirLoaderFS) WithFilter(filter filter.LoaderFilter) DirectoryLoader {
	d.filter = filter
	return d
}

// WithPathFormat specifies how a path is formatted in a Descriptor
func (d *dirLoaderFS) WithPathFormat(pathFormat PathFormat) DirectoryLoader {
	d.pathFormat = pathFormat
	return d
}

// WithSizeLimitBytes specifies the maximum size of any file in the filesystem directory to read
func (d *dirLoaderFS) WithSizeLimitBytes(sizeLimitBytes int64) DirectoryLoader {
	d.maxSizeLimitBytes = sizeLimitBytes
	return d
}

func (d *dirLoaderFS) WithFollowSymlinks(followSymlinks bool) DirectoryLoader {
	d.followSymlinks = followSymlinks
	return d
}

// NextFile iterates to the next file in the directory tree
// and returns a file Descriptor for the file.
func (d *dirLoaderFS) NextFile() (*Descriptor, error) {
	d.Lock()
	defer d.Unlock()

	if d.files == nil {
		err := fs.WalkDir(d.filesystem, d.root, d.walkDir)
		if err != nil {
			return nil, fmt.Errorf("failed to list files: %w", err)
		}
	}

	// If done reading files then just return io.EOF
	// errors for each NextFile() call
	if d.idx >= len(d.files) {
		return nil, io.EOF
	}

	fileName := d.files[d.idx]
	d.idx++

	fh, err := d.filesystem.Open(fileName)
	if err != nil {
		return nil, fmt.Errorf("failed to open file %s: %w", fileName, err)
	}

	cleanedPath := formatPath(fileName, d.root, d.pathFormat)
	f := NewDescriptor(cleanedPath, cleanedPath, fh).WithCloser(fh)
	return f, nil
	return v1.NewFSLoaderWithRoot(filesystem, root)
}
133	vendor/github.com/open-policy-agent/opa/bundle/hash.go	generated	vendored
@@ -5,137 +5,28 @@
package bundle

import (
	"bytes"
	"crypto/md5"
	"crypto/sha1"
	"crypto/sha256"
	"crypto/sha512"
	"encoding/json"
	"fmt"
	"hash"
	"io"
	"sort"
	"strings"
	v1 "github.com/open-policy-agent/opa/v1/bundle"
)

// HashingAlgorithm represents a subset of hashing algorithms implemented in Go
type HashingAlgorithm string
type HashingAlgorithm = v1.HashingAlgorithm

// Supported values for HashingAlgorithm
const (
	MD5       HashingAlgorithm = "MD5"
	SHA1      HashingAlgorithm = "SHA-1"
	SHA224    HashingAlgorithm = "SHA-224"
	SHA256    HashingAlgorithm = "SHA-256"
	SHA384    HashingAlgorithm = "SHA-384"
	SHA512    HashingAlgorithm = "SHA-512"
	SHA512224 HashingAlgorithm = "SHA-512-224"
	SHA512256 HashingAlgorithm = "SHA-512-256"
	MD5       = v1.MD5
	SHA1      = v1.SHA1
	SHA224    = v1.SHA224
	SHA256    = v1.SHA256
	SHA384    = v1.SHA384
	SHA512    = v1.SHA512
	SHA512224 = v1.SHA512224
	SHA512256 = v1.SHA512256
)

// String returns the string representation of a HashingAlgorithm
func (alg HashingAlgorithm) String() string {
	return string(alg)
}

// SignatureHasher computes a signature digest for a file with (structured or unstructured) data and policy
type SignatureHasher interface {
	HashFile(v interface{}) ([]byte, error)
}

type hasher struct {
	h func() hash.Hash // hash function factory
}
type SignatureHasher = v1.SignatureHasher

// NewSignatureHasher returns a signature hasher suitable for a particular hashing algorithm
func NewSignatureHasher(alg HashingAlgorithm) (SignatureHasher, error) {
	h := &hasher{}

	switch alg {
	case MD5:
		h.h = md5.New
	case SHA1:
		h.h = sha1.New
	case SHA224:
		h.h = sha256.New224
	case SHA256:
		h.h = sha256.New
	case SHA384:
		h.h = sha512.New384
	case SHA512:
		h.h = sha512.New
	case SHA512224:
		h.h = sha512.New512_224
	case SHA512256:
		h.h = sha512.New512_256
	default:
		return nil, fmt.Errorf("unsupported hashing algorithm: %s", alg)
	}

	return h, nil
}

// HashFile hashes the file content, JSON or binary, both in golang native format.
func (h *hasher) HashFile(v interface{}) ([]byte, error) {
	hf := h.h()
	walk(v, hf)
	return hf.Sum(nil), nil
}

// walk hashes the file content, JSON or binary, both in golang native format.
//
// Computation for unstructured documents is a hash of the document.
//
// Computation for the types of structured JSON document is as follows:
//
// object: Hash {, then each key (in alphabetical order) and digest of the value, then comma (between items) and finally }.
//
// array: Hash [, then digest of the value, then comma (between items) and finally ].
func walk(v interface{}, h io.Writer) {

	switch x := v.(type) {
	case map[string]interface{}:
		_, _ = h.Write([]byte("{"))

		var keys []string
		for k := range x {
			keys = append(keys, k)
		}
		sort.Strings(keys)
|
||||
|
||||
for i, key := range keys {
|
||||
if i > 0 {
|
||||
_, _ = h.Write([]byte(","))
|
||||
}
|
||||
|
||||
_, _ = h.Write(encodePrimitive(key))
|
||||
_, _ = h.Write([]byte(":"))
|
||||
walk(x[key], h)
|
||||
}
|
||||
|
||||
_, _ = h.Write([]byte("}"))
|
||||
case []interface{}:
|
||||
_, _ = h.Write([]byte("["))
|
||||
|
||||
for i, e := range x {
|
||||
if i > 0 {
|
||||
_, _ = h.Write([]byte(","))
|
||||
}
|
||||
walk(e, h)
|
||||
}
|
||||
|
||||
_, _ = h.Write([]byte("]"))
|
||||
case []byte:
|
||||
_, _ = h.Write(x)
|
||||
default:
|
||||
_, _ = h.Write(encodePrimitive(x))
|
||||
}
|
||||
}
|
||||
|
||||
func encodePrimitive(v interface{}) []byte {
|
||||
var buf bytes.Buffer
|
||||
encoder := json.NewEncoder(&buf)
|
||||
encoder.SetEscapeHTML(false)
|
||||
_ = encoder.Encode(v)
|
||||
return []byte(strings.Trim(buf.String(), "\n"))
|
||||
return v1.NewSignatureHasher(alg)
|
||||
}
|
||||
|
||||
126  vendor/github.com/open-policy-agent/opa/bundle/keys.go  generated  vendored
@@ -6,139 +6,25 @@
 package bundle
 
 import (
-	"encoding/pem"
-	"fmt"
-	"os"
-
-	"github.com/open-policy-agent/opa/internal/jwx/jwa"
-	"github.com/open-policy-agent/opa/internal/jwx/jws/sign"
-	"github.com/open-policy-agent/opa/keys"
-
-	"github.com/open-policy-agent/opa/util"
-)
-
-const (
-	defaultTokenSigningAlg = "RS256"
+	v1 "github.com/open-policy-agent/opa/v1/bundle"
 )
 
 // KeyConfig holds the keys used to sign or verify bundles and tokens
 // Moved to own package, alias kept for backwards compatibility
-type KeyConfig = keys.Config
+type KeyConfig = v1.KeyConfig
 
 // VerificationConfig represents the key configuration used to verify a signed bundle
-type VerificationConfig struct {
-	PublicKeys map[string]*KeyConfig
-	KeyID      string   `json:"keyid"`
-	Scope      string   `json:"scope"`
-	Exclude    []string `json:"exclude_files"`
-}
+type VerificationConfig = v1.VerificationConfig
 
 // NewVerificationConfig return a new VerificationConfig
 func NewVerificationConfig(keys map[string]*KeyConfig, id, scope string, exclude []string) *VerificationConfig {
-	return &VerificationConfig{
-		PublicKeys: keys,
-		KeyID:      id,
-		Scope:      scope,
-		Exclude:    exclude,
-	}
-}
-
-// ValidateAndInjectDefaults validates the config and inserts default values
-func (vc *VerificationConfig) ValidateAndInjectDefaults(keys map[string]*KeyConfig) error {
-	vc.PublicKeys = keys
-
-	if vc.KeyID != "" {
-		found := false
-		for key := range keys {
-			if key == vc.KeyID {
-				found = true
-				break
-			}
-		}
-
-		if !found {
-			return fmt.Errorf("key id %s not found", vc.KeyID)
-		}
-	}
-	return nil
-}
-
-// GetPublicKey returns the public key corresponding to the given key id
-func (vc *VerificationConfig) GetPublicKey(id string) (*KeyConfig, error) {
-	var kc *KeyConfig
-	var ok bool
-
-	if kc, ok = vc.PublicKeys[id]; !ok {
-		return nil, fmt.Errorf("verification key corresponding to ID %v not found", id)
-	}
-	return kc, nil
+	return v1.NewVerificationConfig(keys, id, scope, exclude)
 }
 
 // SigningConfig represents the key configuration used to generate a signed bundle
-type SigningConfig struct {
-	Plugin     string
-	Key        string
-	Algorithm  string
-	ClaimsPath string
-}
+type SigningConfig = v1.SigningConfig
 
 // NewSigningConfig return a new SigningConfig
 func NewSigningConfig(key, alg, claimsPath string) *SigningConfig {
-	if alg == "" {
-		alg = defaultTokenSigningAlg
-	}
-
-	return &SigningConfig{
-		Plugin:     defaultSignerID,
-		Key:        key,
-		Algorithm:  alg,
-		ClaimsPath: claimsPath,
-	}
-}
-
-// WithPlugin sets the signing plugin in the signing config
-func (s *SigningConfig) WithPlugin(plugin string) *SigningConfig {
-	if plugin != "" {
-		s.Plugin = plugin
-	}
-	return s
-}
-
-// GetPrivateKey returns the private key or secret from the signing config
-func (s *SigningConfig) GetPrivateKey() (interface{}, error) {
-
-	block, _ := pem.Decode([]byte(s.Key))
-	if block != nil {
-		return sign.GetSigningKey(s.Key, jwa.SignatureAlgorithm(s.Algorithm))
-	}
-
-	var priv string
-	if _, err := os.Stat(s.Key); err == nil {
-		bs, err := os.ReadFile(s.Key)
-		if err != nil {
-			return nil, err
-		}
-		priv = string(bs)
-	} else if os.IsNotExist(err) {
-		priv = s.Key
-	} else {
-		return nil, err
-	}
-
-	return sign.GetSigningKey(priv, jwa.SignatureAlgorithm(s.Algorithm))
-}
-
-// GetClaims returns the claims by reading the file specified in the signing config
-func (s *SigningConfig) GetClaims() (map[string]interface{}, error) {
-	var claims map[string]interface{}
-
-	bs, err := os.ReadFile(s.ClaimsPath)
-	if err != nil {
-		return claims, err
-	}
-
-	if err := util.UnmarshalJSON(bs, &claims); err != nil {
-		return claims, err
-	}
-	return claims, nil
+	return v1.NewSigningConfig(key, alg, claimsPath)
 }
112  vendor/github.com/open-policy-agent/opa/bundle/sign.go  generated  vendored
@@ -6,130 +6,30 @@
 package bundle
 
 import (
-	"crypto/rand"
-	"encoding/json"
-	"fmt"
-
-	"github.com/open-policy-agent/opa/internal/jwx/jwa"
-	"github.com/open-policy-agent/opa/internal/jwx/jws"
+	v1 "github.com/open-policy-agent/opa/v1/bundle"
 )
 
-const defaultSignerID = "_default"
-
-var signers map[string]Signer
-
 // Signer is the interface expected for implementations that generate bundle signatures.
-type Signer interface {
-	GenerateSignedToken([]FileInfo, *SigningConfig, string) (string, error)
-}
+type Signer v1.Signer
 
 // GenerateSignedToken will retrieve the Signer implementation based on the Plugin specified
 // in SigningConfig, and call its implementation of GenerateSignedToken. The signer generates
 // a signed token given the list of files to be included in the payload and the bundle
 // signing config. The keyID if non-empty, represents the value for the "keyid" claim in the token.
 func GenerateSignedToken(files []FileInfo, sc *SigningConfig, keyID string) (string, error) {
-	var plugin string
-	// for backwards compatibility, check if there is no plugin specified, and use default
-	if sc.Plugin == "" {
-		plugin = defaultSignerID
-	} else {
-		plugin = sc.Plugin
-	}
-	signer, err := GetSigner(plugin)
-	if err != nil {
-		return "", err
-	}
-	return signer.GenerateSignedToken(files, sc, keyID)
+	return v1.GenerateSignedToken(files, sc, keyID)
 }
 
 // DefaultSigner is the default bundle signing implementation. It signs bundles by generating
 // a JWT and signing it using a locally-accessible private key.
-type DefaultSigner struct{}
-
-// GenerateSignedToken generates a signed token given the list of files to be
-// included in the payload and the bundle signing config. The keyID if non-empty,
-// represents the value for the "keyid" claim in the token
-func (*DefaultSigner) GenerateSignedToken(files []FileInfo, sc *SigningConfig, keyID string) (string, error) {
-	payload, err := generatePayload(files, sc, keyID)
-	if err != nil {
-		return "", err
-	}
-
-	privateKey, err := sc.GetPrivateKey()
-	if err != nil {
-		return "", err
-	}
-
-	var headers jws.StandardHeaders
-
-	if err := headers.Set(jws.AlgorithmKey, jwa.SignatureAlgorithm(sc.Algorithm)); err != nil {
-		return "", err
-	}
-
-	if keyID != "" {
-		if err := headers.Set(jws.KeyIDKey, keyID); err != nil {
-			return "", err
-		}
-	}
-
-	hdr, err := json.Marshal(headers)
-	if err != nil {
-		return "", err
-	}
-
-	token, err := jws.SignLiteral(payload,
-		jwa.SignatureAlgorithm(sc.Algorithm),
-		privateKey,
-		hdr,
-		rand.Reader)
-	if err != nil {
-		return "", err
-	}
-	return string(token), nil
-}
-
-func generatePayload(files []FileInfo, sc *SigningConfig, keyID string) ([]byte, error) {
-	payload := make(map[string]interface{})
-	payload["files"] = files
-
-	if sc.ClaimsPath != "" {
-		claims, err := sc.GetClaims()
-		if err != nil {
-			return nil, err
-		}
-
-		for claim, value := range claims {
-			payload[claim] = value
-		}
-	} else {
-		if keyID != "" {
-			// keyid claim is deprecated but include it for backwards compatibility.
-			payload["keyid"] = keyID
-		}
-	}
-	return json.Marshal(payload)
-}
+type DefaultSigner v1.DefaultSigner
 
 // GetSigner returns the Signer registered under the given id
 func GetSigner(id string) (Signer, error) {
-	signer, ok := signers[id]
-	if !ok {
-		return nil, fmt.Errorf("no signer exists under id %s", id)
-	}
-	return signer, nil
+	return v1.GetSigner(id)
 }
 
 // RegisterSigner registers a Signer under the given id
 func RegisterSigner(id string, s Signer) error {
-	if id == defaultSignerID {
-		return fmt.Errorf("signer id %s is reserved, use a different id", id)
-	}
-	signers[id] = s
-	return nil
-}
-
-func init() {
-	signers = map[string]Signer{
-		defaultSignerID: &DefaultSigner{},
-	}
+	return v1.RegisterSigner(id, s)
 }
955  vendor/github.com/open-policy-agent/opa/bundle/store.go  generated  vendored  (diff suppressed: file too large)
207  vendor/github.com/open-policy-agent/opa/bundle/verify.go  generated  vendored
@@ -6,26 +6,11 @@
 package bundle
 
 import (
-	"bytes"
-	"encoding/base64"
-	"encoding/hex"
-	"encoding/json"
-	"fmt"
-
-	"github.com/open-policy-agent/opa/internal/jwx/jwa"
-	"github.com/open-policy-agent/opa/internal/jwx/jws"
-	"github.com/open-policy-agent/opa/internal/jwx/jws/verify"
-	"github.com/open-policy-agent/opa/util"
+	v1 "github.com/open-policy-agent/opa/v1/bundle"
 )
 
-const defaultVerifierID = "_default"
-
-var verifiers map[string]Verifier
-
 // Verifier is the interface expected for implementations that verify bundle signatures.
-type Verifier interface {
-	VerifyBundleSignature(SignaturesConfig, *VerificationConfig) (map[string]FileInfo, error)
-}
+type Verifier v1.Verifier
 
 // VerifyBundleSignature will retrieve the Verifier implementation based
 // on the Plugin specified in SignaturesConfig, and call its implementation
@@ -33,199 +18,19 @@ type Verifier interface {
 // using the given public keys or secret. If a signature is verified, it keeps
 // track of the files specified in the JWT payload
 func VerifyBundleSignature(sc SignaturesConfig, bvc *VerificationConfig) (map[string]FileInfo, error) {
-	// default implementation does not return a nil for map, so don't
-	// do it here either
-	files := make(map[string]FileInfo)
-	var plugin string
-	// for backwards compatibility, check if there is no plugin specified, and use default
-	if sc.Plugin == "" {
-		plugin = defaultVerifierID
-	} else {
-		plugin = sc.Plugin
-	}
-	verifier, err := GetVerifier(plugin)
-	if err != nil {
-		return files, err
-	}
-	return verifier.VerifyBundleSignature(sc, bvc)
+	return v1.VerifyBundleSignature(sc, bvc)
 }
 
 // DefaultVerifier is the default bundle verification implementation. It verifies bundles by checking
 // the JWT signature using a locally-accessible public key.
-type DefaultVerifier struct{}
-
-// VerifyBundleSignature verifies the bundle signature using the given public keys or secret.
-// If a signature is verified, it keeps track of the files specified in the JWT payload
-func (*DefaultVerifier) VerifyBundleSignature(sc SignaturesConfig, bvc *VerificationConfig) (map[string]FileInfo, error) {
-	files := make(map[string]FileInfo)
-
-	if len(sc.Signatures) == 0 {
-		return files, fmt.Errorf(".signatures.json: missing JWT (expected exactly one)")
-	}
-
-	if len(sc.Signatures) > 1 {
-		return files, fmt.Errorf(".signatures.json: multiple JWTs not supported (expected exactly one)")
-	}
-
-	for _, token := range sc.Signatures {
-		payload, err := verifyJWTSignature(token, bvc)
-		if err != nil {
-			return files, err
-		}
-
-		for _, file := range payload.Files {
-			files[file.Name] = file
-		}
-	}
-	return files, nil
-}
-
-func verifyJWTSignature(token string, bvc *VerificationConfig) (*DecodedSignature, error) {
-	// decode JWT to check if the header specifies the key to use and/or if claims have the scope.
-
-	parts, err := jws.SplitCompact(token)
-	if err != nil {
-		return nil, err
-	}
-
-	var decodedHeader []byte
-	if decodedHeader, err = base64.RawURLEncoding.DecodeString(parts[0]); err != nil {
-		return nil, fmt.Errorf("failed to base64 decode JWT headers: %w", err)
-	}
-
-	var hdr jws.StandardHeaders
-	if err := json.Unmarshal(decodedHeader, &hdr); err != nil {
-		return nil, fmt.Errorf("failed to parse JWT headers: %w", err)
-	}
-
-	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
-	if err != nil {
-		return nil, err
-	}
-
-	var ds DecodedSignature
-	if err := json.Unmarshal(payload, &ds); err != nil {
-		return nil, err
-	}
-
-	// check for the id of the key to use for JWT signature verification
-	// first in the OPA config. If not found, then check the JWT kid.
-	keyID := bvc.KeyID
-	if keyID == "" {
-		keyID = hdr.KeyID
-	}
-	if keyID == "" {
-		// If header has no key id, check the deprecated key claim.
-		keyID = ds.KeyID
-	}
-
-	if keyID == "" {
-		return nil, fmt.Errorf("verification key ID is empty")
-	}
-
-	// now that we have the keyID, fetch the actual key
-	keyConfig, err := bvc.GetPublicKey(keyID)
-	if err != nil {
-		return nil, err
-	}
-
-	// verify JWT signature
-	alg := jwa.SignatureAlgorithm(keyConfig.Algorithm)
-	key, err := verify.GetSigningKey(keyConfig.Key, alg)
-	if err != nil {
-		return nil, err
-	}
-
-	_, err = jws.Verify([]byte(token), alg, key)
-	if err != nil {
-		return nil, err
-	}
-
-	// verify the scope
-	scope := bvc.Scope
-	if scope == "" {
-		scope = keyConfig.Scope
-	}
-
-	if ds.Scope != scope {
-		return nil, fmt.Errorf("scope mismatch")
-	}
-	return &ds, nil
-}
-
-// VerifyBundleFile verifies the hash of a file in the bundle matches to that provided in the bundle's signature
-func VerifyBundleFile(path string, data bytes.Buffer, files map[string]FileInfo) error {
-	var file FileInfo
-	var ok bool
-
-	if file, ok = files[path]; !ok {
-		return fmt.Errorf("file %v not included in bundle signature", path)
-	}
-
-	if file.Algorithm == "" {
-		return fmt.Errorf("no hashing algorithm provided for file %v", path)
-	}
-
-	hash, err := NewSignatureHasher(HashingAlgorithm(file.Algorithm))
-	if err != nil {
-		return err
-	}
-
-	// hash the file content
-	// For unstructured files, hash the byte stream of the file
-	// For structured files, read the byte stream and parse into a JSON structure;
-	// then recursively order the fields of all objects alphabetically and then apply
-	// the hash function to result to compute the hash. This ensures that the digital signature is
-	// independent of whitespace and other non-semantic JSON features.
-	var value interface{}
-	if IsStructuredDoc(path) {
-		err := util.Unmarshal(data.Bytes(), &value)
-		if err != nil {
-			return err
-		}
-	} else {
-		value = data.Bytes()
-	}
-
-	bs, err := hash.HashFile(value)
-	if err != nil {
-		return err
-	}
-
-	// compare file hash with same file in the JWT payloads
-	fb, err := hex.DecodeString(file.Hash)
-	if err != nil {
-		return err
-	}
-
-	if !bytes.Equal(fb, bs) {
-		return fmt.Errorf("%v: digest mismatch (want: %x, got: %x)", path, fb, bs)
-	}
-
-	delete(files, path)
-	return nil
-}
+type DefaultVerifier = v1.DefaultVerifier
 
 // GetVerifier returns the Verifier registered under the given id
 func GetVerifier(id string) (Verifier, error) {
-	verifier, ok := verifiers[id]
-	if !ok {
-		return nil, fmt.Errorf("no verifier exists under id %s", id)
-	}
-	return verifier, nil
+	return v1.GetVerifier(id)
 }
 
 // RegisterVerifier registers a Verifier under the given id
 func RegisterVerifier(id string, v Verifier) error {
-	if id == defaultVerifierID {
-		return fmt.Errorf("verifier id %s is reserved, use a different id", id)
-	}
-	verifiers[id] = v
-	return nil
-}
-
-func init() {
-	verifiers = map[string]Verifier{
-		defaultVerifierID: &DefaultVerifier{},
-	}
+	return v1.RegisterVerifier(id, v)
 }
8  vendor/github.com/open-policy-agent/opa/capabilities/doc.go  generated  vendored  Normal file
@@ -0,0 +1,8 @@
+// Copyright 2024 The OPA Authors. All rights reserved.
+// Use of this source code is governed by an Apache2
+// license that can be found in the LICENSE file.
+
+// Deprecated: This package is intended for older projects transitioning from OPA v0.x and will remain for the lifetime of OPA v1.x, but its use is not recommended.
+// For newer features and behaviours, such as defaulting to the Rego v1 syntax, use the corresponding components in the [github.com/open-policy-agent/opa/v1] package instead.
+// See https://www.openpolicyagent.org/docs/latest/v0-compatibility/ for more information.
+package capabilities
4835  vendor/github.com/open-policy-agent/opa/capabilities/v1.0.0.json  generated  vendored  Normal file  (diff suppressed: file too large)
4835  vendor/github.com/open-policy-agent/opa/capabilities/v1.0.1.json  generated  vendored  Normal file  (diff suppressed: file too large)
4835  vendor/github.com/open-policy-agent/opa/capabilities/v1.1.0.json  generated  vendored  Normal file  (diff suppressed: file too large)
Some files were not shown because too many files have changed in this diff.