feat: kubesphere 4.0 (#6115)

* feat: kubesphere 4.0

Signed-off-by: ci-bot <ci-bot@kubesphere.io>
Co-authored-by: ks-ci-bot <ks-ci-bot@example.com>
Co-authored-by: joyceliu <joyceliu@yunify.com>

parent b5015ec7b9
commit 447a51f08b

100 vendor/github.com/google/go-containerregistry/pkg/authn/README.md (generated, vendored)
@@ -4,15 +4,15 @@
This README outlines how we acquire and use credentials when interacting with a registry.

As much as possible, we attempt to emulate docker's authentication behavior and configuration so that this library "just works" if you've already configured credentials that work with docker; however, when things don't work, a basic understanding of what's going on can help with debugging.
As much as possible, we attempt to emulate `docker`'s authentication behavior and configuration so that this library "just works" if you've already configured credentials that work with `docker`; however, when things don't work, a basic understanding of what's going on can help with debugging.

The official documentation for how docker authentication works is (reasonably) scattered across several different sites and GitHub repositories, so we've tried to summarize the relevant bits here.
The official documentation for how authentication with `docker` works is (reasonably) scattered across several different sites and GitHub repositories, so we've tried to summarize the relevant bits here.

## tl;dr for consumers of this package

By default, [`pkg/v1/remote`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/remote) uses [`Anonymous`](https://godoc.org/github.com/google/go-containerregistry/pkg/authn#Anonymous) credentials (i.e. _none_), which for most registries will only allow read access to public images.

To use the credentials found in your docker config file, you can use the [`DefaultKeychain`](https://godoc.org/github.com/google/go-containerregistry/pkg/authn#DefaultKeychain), e.g.:
To use the credentials found in your Docker config file, you can use the [`DefaultKeychain`](https://godoc.org/github.com/google/go-containerregistry/pkg/authn#DefaultKeychain), e.g.:

```go
package main
@@ -42,15 +42,95 @@ func main() {
}
```

(If you're only using [gcr.io](https://gcr.io), see the [`pkg/v1/google.Keychain`](https://godoc.org/github.com/google/go-containerregistry/pkg/v1/google#Keychain), which emulates [`docker-credential-gcr`](https://github.com/GoogleCloudPlatform/docker-credential-gcr).)
The `DefaultKeychain` will use credentials as described in your Docker config file -- usually `~/.docker/config.json`, or `%USERPROFILE%\.docker\config.json` on Windows -- or the location described by the `DOCKER_CONFIG` environment variable, if set.

## The Config File
If those are not found, `DefaultKeychain` will look for credentials configured using [Podman's expectation](https://docs.podman.io/en/latest/markdown/podman-login.1.html) that these are found in `${XDG_RUNTIME_DIR}/containers/auth.json`.
This file contains various configuration options for docker and is (by default) located at:
* `$HOME/.docker/config.json` (on linux and darwin), or
* `%USERPROFILE%\.docker\config.json` (on windows).
[See below](#docker-config-auth) for more information about what is configured in this file.

You can override this location with the `DOCKER_CONFIG` environment variable.
## Emulating Cloud Provider Credential Helpers

[`pkg/v1/google.Keychain`](https://pkg.go.dev/github.com/google/go-containerregistry/pkg/v1/google#Keychain) provides a `Keychain` implementation that emulates [`docker-credential-gcr`](https://github.com/GoogleCloudPlatform/docker-credential-gcr) to find credentials in the environment.
See [`google.NewEnvAuthenticator`](https://pkg.go.dev/github.com/google/go-containerregistry/pkg/v1/google#NewEnvAuthenticator) and [`google.NewGcloudAuthenticator`](https://pkg.go.dev/github.com/google/go-containerregistry/pkg/v1/google#NewGcloudAuthenticator) for more information.

To emulate other credential helpers without requiring them to be available as executables, [`NewKeychainFromHelper`](https://pkg.go.dev/github.com/google/go-containerregistry/pkg/authn#NewKeychainFromHelper) provides an adapter that takes a Go implementation satisfying a subset of the [`credentials.Helper`](https://pkg.go.dev/github.com/docker/docker-credential-helpers/credentials#Helper) interface, and makes it available as a `Keychain`.

This means that you can emulate, for example, [Amazon ECR's `docker-credential-ecr-login` credential helper](https://github.com/awslabs/amazon-ecr-credential-helper) using the same implementation:

```go
import (
    ecr "github.com/awslabs/amazon-ecr-credential-helper/ecr-login"
    "github.com/awslabs/amazon-ecr-credential-helper/ecr-login/api"

    "github.com/google/go-containerregistry/pkg/authn"
    "github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
    // ...
    ecrHelper := ecr.ECRHelper{ClientFactory: api.DefaultClientFactory{}}
    img, err := remote.Get(ref, remote.WithAuthFromKeychain(authn.NewKeychainFromHelper(ecrHelper)))
    if err != nil {
        panic(err)
    }
    // ...
}
```

Likewise, you can emulate [Azure's ACR `docker-credential-acr-env` credential helper](https://github.com/chrismellard/docker-credential-acr-env):

```go
import (
    "github.com/chrismellard/docker-credential-acr-env/pkg/credhelper"

    "github.com/google/go-containerregistry/pkg/authn"
    "github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
    // ...
    acrHelper := credhelper.NewACRCredentialsHelper()
    img, err := remote.Get(ref, remote.WithAuthFromKeychain(authn.NewKeychainFromHelper(acrHelper)))
    if err != nil {
        panic(err)
    }
    // ...
}
```

<!-- TODO(jasonhall): Wrap these in docker-credential-magic and reference those from here. -->

## Using Multiple `Keychain`s

[`NewMultiKeychain`](https://pkg.go.dev/github.com/google/go-containerregistry/pkg/authn#NewMultiKeychain) allows you to specify multiple `Keychain` implementations, which will be checked in order when credentials are needed.

For example:

```go
kc := authn.NewMultiKeychain(
    authn.DefaultKeychain,
    google.Keychain,
    authn.NewKeychainFromHelper(ecr.ECRHelper{ClientFactory: api.DefaultClientFactory{}}),
    authn.NewKeychainFromHelper(acr.ACRCredHelper{}),
)
```

This multi-keychain will:

- first check for credentials found in the Docker config file, as described above, then
- check for GCP credentials available in the environment, as described above, then
- check for ECR credentials by emulating the ECR credential helper, then
- check for ACR credentials by emulating the ACR credential helper.

If any keychain implementation is able to provide credentials for the request, they will be used, and further keychain implementations will not be consulted.

If no implementations are able to provide credentials, `Anonymous` credentials will be used.

## Docker Config Auth

What follows attempts to gather useful information about Docker's config.json and make it available in one place.

If you have questions, please [file an issue](https://github.com/google/go-containerregistry/issues/new).

### Plaintext

@@ -92,7 +172,7 @@ For what it's worth, this config file is equivalent to:

### Helpers

If you log in like this, docker will warn you that you should use a [credential helper](https://docs.docker.com/engine/reference/commandline/login/#credentials-store), and you should!
If you log in like this, `docker` will warn you that you should use a [credential helper](https://docs.docker.com/engine/reference/commandline/login/#credentials-store), and you should!

To configure a global credential helper:
```json

87 vendor/github.com/google/go-containerregistry/pkg/authn/authn.go (generated, vendored)
@@ -14,6 +14,19 @@

package authn

import (
    "encoding/base64"
    "encoding/json"
    "fmt"
    "strings"
)

// Authenticator is used to authenticate Docker transports.
type Authenticator interface {
    // Authorization returns the value to use in an http transport's Authorization header.
    Authorization() (*AuthConfig, error)
}

// AuthConfig contains authorization information for connecting to a Registry
// Inlined what we use from github.com/docker/cli/cli/config/types
type AuthConfig struct {
@@ -29,8 +42,74 @@ type AuthConfig struct {
    RegistryToken string `json:"registrytoken,omitempty"`
}

// Authenticator is used to authenticate Docker transports.
type Authenticator interface {
    // Authorization returns the value to use in an http transport's Authorization header.
    Authorization() (*AuthConfig, error)

// This is effectively a copy of the type AuthConfig. This simplifies
// JSON unmarshalling since AuthConfig methods are not inherited
type authConfig AuthConfig

// UnmarshalJSON implements json.Unmarshaler
func (a *AuthConfig) UnmarshalJSON(data []byte) error {
    var shadow authConfig
    err := json.Unmarshal(data, &shadow)
    if err != nil {
        return err
    }

    *a = (AuthConfig)(shadow)

    if len(shadow.Auth) != 0 {
        var derr error
        a.Username, a.Password, derr = decodeDockerConfigFieldAuth(shadow.Auth)
        if derr != nil {
            err = fmt.Errorf("unable to decode auth field: %w", derr)
        }
    } else if len(a.Username) != 0 && len(a.Password) != 0 {
        a.Auth = encodeDockerConfigFieldAuth(shadow.Username, shadow.Password)
    }

    return err
}

// MarshalJSON implements json.Marshaler
func (a AuthConfig) MarshalJSON() ([]byte, error) {
    shadow := (authConfig)(a)
    shadow.Auth = encodeDockerConfigFieldAuth(shadow.Username, shadow.Password)
    return json.Marshal(shadow)
}

// decodeDockerConfigFieldAuth deserializes the "auth" field from dockercfg into a
// username and a password. The format of the auth field is base64(<username>:<password>).
//
// From https://github.com/kubernetes/kubernetes/blob/75e49ec824b183288e1dbaccfd7dbe77d89db381/pkg/credentialprovider/config.go
// Copyright 2014 The Kubernetes Authors.
// SPDX-License-Identifier: Apache-2.0
func decodeDockerConfigFieldAuth(field string) (username, password string, err error) {
    var decoded []byte
    // StdEncoding can only decode padded string
    // RawStdEncoding can only decode unpadded string
    if strings.HasSuffix(strings.TrimSpace(field), "=") {
        // decode padded data
        decoded, err = base64.StdEncoding.DecodeString(field)
    } else {
        // decode unpadded data
        decoded, err = base64.RawStdEncoding.DecodeString(field)
    }

    if err != nil {
        return
    }

    parts := strings.SplitN(string(decoded), ":", 2)
    if len(parts) != 2 {
        err = fmt.Errorf("must be formatted as base64(username:password)")
        return
    }

    username = parts[0]
    password = parts[1]

    return
}

func encodeDockerConfigFieldAuth(username, password string) string {
    return base64.StdEncoding.EncodeToString([]byte(username + ":" + password))
}

117 vendor/github.com/google/go-containerregistry/pkg/authn/keychain.go (generated, vendored)
@@ -16,10 +16,14 @@ package authn

import (
    "os"
    "path/filepath"
    "sync"

    "github.com/docker/cli/cli/config"
    "github.com/docker/cli/cli/config/configfile"
    "github.com/docker/cli/cli/config/types"
    "github.com/google/go-containerregistry/pkg/name"
    "github.com/mitchellh/go-homedir"
)

// Resource represents a registry or repository that can be authenticated against.
@@ -42,7 +46,9 @@ type Keychain interface {

// defaultKeychain implements Keychain with the semantics of the standard Docker
// credential keychain.
type defaultKeychain struct{}
type defaultKeychain struct {
    mu sync.Mutex
}

var (
    // DefaultKeychain implements Keychain by interpreting the docker config file.
@@ -57,28 +63,78 @@ const (

// Resolve implements Keychain.
func (dk *defaultKeychain) Resolve(target Resource) (Authenticator, error) {
    cf, err := config.Load(os.Getenv("DOCKER_CONFIG"))
    if err != nil {
        return nil, err
    dk.mu.Lock()
    defer dk.mu.Unlock()

    // Podman users may have their container registry auth configured in a
    // different location, that Docker packages aren't aware of.
    // If the Docker config file isn't found, we'll fallback to look where
    // Podman configures it, and parse that as a Docker auth config instead.

    // First, check $HOME/.docker/config.json
    foundDockerConfig := false
    home, err := homedir.Dir()
    if err == nil {
        foundDockerConfig = fileExists(filepath.Join(home, ".docker/config.json"))
    }
    // If $HOME/.docker/config.json isn't found, check $DOCKER_CONFIG (if set)
    if !foundDockerConfig && os.Getenv("DOCKER_CONFIG") != "" {
        foundDockerConfig = fileExists(filepath.Join(os.Getenv("DOCKER_CONFIG"), "config.json"))
    }
    // If either of those locations are found, load it using Docker's
    // config.Load, which may fail if the config can't be parsed.
    //
    // If neither was found, look for Podman's auth at
    // $XDG_RUNTIME_DIR/containers/auth.json and attempt to load it as a
    // Docker config.
    //
    // If neither are found, fallback to Anonymous.
    var cf *configfile.ConfigFile
    if foundDockerConfig {
        cf, err = config.Load(os.Getenv("DOCKER_CONFIG"))
        if err != nil {
            return nil, err
        }
    } else {
        f, err := os.Open(filepath.Join(os.Getenv("XDG_RUNTIME_DIR"), "containers/auth.json"))
        if err != nil {
            return Anonymous, nil
        }
        defer f.Close()
        cf, err = config.LoadFromReader(f)
        if err != nil {
            return nil, err
        }
    }

    // See:
    // https://github.com/google/ko/issues/90
    // https://github.com/moby/moby/blob/fc01c2b481097a6057bec3cd1ab2d7b4488c50c4/registry/config.go#L397-L404
    key := target.RegistryStr()
    if key == name.DefaultRegistry {
        key = DefaultAuthKey
    }
    var cfg, empty types.AuthConfig
    for _, key := range []string{
        target.String(),
        target.RegistryStr(),
    } {
        if key == name.DefaultRegistry {
            key = DefaultAuthKey
        }

        cfg, err := cf.GetAuthConfig(key)
        if err != nil {
            return nil, err
        cfg, err = cf.GetAuthConfig(key)
        if err != nil {
            return nil, err
        }
        // cf.GetAuthConfig automatically sets the ServerAddress attribute. Since
        // we don't make use of it, clear the value for a proper "is-empty" test.
        // See: https://github.com/google/go-containerregistry/issues/1510
        cfg.ServerAddress = ""
        if cfg != empty {
            break
        }
    }

    empty := types.AuthConfig{}
    if cfg == empty {
        return Anonymous, nil
    }

    return FromConfig(AuthConfig{
        Username: cfg.Username,
        Password: cfg.Password,

@@ -87,3 +143,38 @@ func (dk *defaultKeychain) Resolve(target Resource) (Authenticator, error) {
        RegistryToken: cfg.RegistryToken,
    }), nil
}

// fileExists returns true if the given path exists and is not a directory.
func fileExists(path string) bool {
    fi, err := os.Stat(path)
    return err == nil && !fi.IsDir()
}

// Helper is a subset of the Docker credential helper credentials.Helper
// interface used by NewKeychainFromHelper.
//
// See:
// https://pkg.go.dev/github.com/docker/docker-credential-helpers/credentials#Helper
type Helper interface {
    Get(serverURL string) (string, string, error)
}

// NewKeychainFromHelper returns a Keychain based on a Docker credential helper
// implementation that can Get username and password credentials for a given
// server URL.
func NewKeychainFromHelper(h Helper) Keychain { return wrapper{h} }

type wrapper struct{ h Helper }

func (w wrapper) Resolve(r Resource) (Authenticator, error) {
    u, p, err := w.h.Get(r.RegistryStr())
    if err != nil {
        return Anonymous, nil
    }
    // If the secret being stored is an identity token, the Username should be set to <token>
    // ref: https://docs.docker.com/engine/reference/commandline/login/#credential-helper-protocol
    if u == "<token>" {
        return FromConfig(AuthConfig{Username: u, IdentityToken: p}), nil
    }
    return FromConfig(AuthConfig{Username: u, Password: p}), nil
}
26
vendor/github.com/google/go-containerregistry/pkg/compression/compression.go
generated
vendored
Normal file
26
vendor/github.com/google/go-containerregistry/pkg/compression/compression.go
generated
vendored
Normal file
@@ -0,0 +1,26 @@
// Copyright 2022 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package compression abstracts over gzip and zstd.
package compression

// Compression is an enumeration of the supported compression algorithms
type Compression string

// The collection of known MediaType values.
const (
    None Compression = "none"
    GZip Compression = "gzip"
    ZStd Compression = "zstd"
)

12 vendor/github.com/google/go-containerregistry/pkg/logs/logs.go (generated, vendored)
@@ -16,24 +16,24 @@
package logs

import (
    "io/ioutil"
    "io"
    "log"
)

var (
    // Warn is used to log non-fatal errors.
    Warn = log.New(ioutil.Discard, "", log.LstdFlags)
    Warn = log.New(io.Discard, "", log.LstdFlags)

    // Progress is used to log notable, successful events.
    Progress = log.New(ioutil.Discard, "", log.LstdFlags)
    Progress = log.New(io.Discard, "", log.LstdFlags)

    // Debug is used to log information that is useful for debugging.
    Debug = log.New(ioutil.Discard, "", log.LstdFlags)
    Debug = log.New(io.Discard, "", log.LstdFlags)
)

// Enabled checks to see if the logger's writer is set to something other
// than ioutil.Discard. This allows callers to avoid expensive operations
// than io.Discard. This allows callers to avoid expensive operations
// that will end up in /dev/null anyway.
func Enabled(l *log.Logger) bool {
    return l.Writer() != ioutil.Discard
    return l.Writer() != io.Discard
}

4 vendor/github.com/google/go-containerregistry/pkg/name/check.go (generated, vendored)
@@ -35,9 +35,9 @@ func stripRunesFn(runes string) func(rune) rune {
func checkElement(name, element, allowedRunes string, minRunes, maxRunes int) error {
    numRunes := utf8.RuneCountInString(element)
    if (numRunes < minRunes) || (maxRunes < numRunes) {
        return NewErrBadName("%s must be between %d and %d runes in length: %s", name, minRunes, maxRunes, element)
        return newErrBadName("%s must be between %d and %d characters in length: %s", name, minRunes, maxRunes, element)
    } else if len(strings.Map(stripRunesFn(allowedRunes), element)) != 0 {
        return NewErrBadName("%s can only contain the runes `%s`: %s", name, allowedRunes, element)
        return newErrBadName("%s can only contain the characters `%s`: %s", name, allowedRunes, element)
    }
    return nil
}
30
vendor/github.com/google/go-containerregistry/pkg/name/digest.go
generated
vendored
30
vendor/github.com/google/go-containerregistry/pkg/name/digest.go
generated
vendored
@@ -15,15 +15,14 @@
package name

import (
    // nolint: depguard
    _ "crypto/sha256" // Recommended by go-digest.
    "strings"

    "github.com/opencontainers/go-digest"
)

const (
    // These have the form: sha256:<hex string>
    // TODO(dekkagaijin): replace with opencontainers/go-digest or docker/distribution's validation.
    digestChars = "sh:0123456789abcdef"
    digestDelim = "@"
)
const digestDelim = "@"

// Digest stores a digest name in a structured form.
type Digest struct {
@@ -60,22 +59,21 @@ func (d Digest) String() string {
    return d.original
}

func checkDigest(name string) error {
    return checkElement("digest", name, digestChars, 7+64, 7+64)
}

// NewDigest returns a new Digest representing the given name.
func NewDigest(name string, opts ...Option) (Digest, error) {
    // Split on "@"
    parts := strings.Split(name, digestDelim)
    if len(parts) != 2 {
        return Digest{}, NewErrBadName("a digest must contain exactly one '@' separator (e.g. registry/repository@digest) saw: %s", name)
        return Digest{}, newErrBadName("a digest must contain exactly one '@' separator (e.g. registry/repository@digest) saw: %s", name)
    }
    base := parts[0]
    digest := parts[1]

    // Always check that the digest is valid.
    if err := checkDigest(digest); err != nil {
    dig := parts[1]
    prefix := digest.Canonical.String() + ":"
    if !strings.HasPrefix(dig, prefix) {
        return Digest{}, newErrBadName("unsupported digest algorithm: %s", dig)
    }
    hex := strings.TrimPrefix(dig, prefix)
    if err := digest.Canonical.Validate(hex); err != nil {
        return Digest{}, err
    }

@@ -90,7 +88,7 @@ func NewDigest(name string, opts ...Option) (Digest, error) {
    }
    return Digest{
        Repository: repo,
        digest:     digest,
        digest:     dig,
        original:   name,
    }, nil
}

21 vendor/github.com/google/go-containerregistry/pkg/name/errors.go (generated, vendored)
@@ -14,7 +14,10 @@

package name

import "fmt"
import (
    "errors"
    "fmt"
)

// ErrBadName is an error for when a bad docker name is supplied.
type ErrBadName struct {
@@ -25,13 +28,21 @@ func (e *ErrBadName) Error() string {
    return e.info
}

// NewErrBadName returns a ErrBadName which returns the given formatted string from Error().
func NewErrBadName(fmtStr string, args ...interface{}) *ErrBadName {
// Is reports whether target is an error of type ErrBadName
func (e *ErrBadName) Is(target error) bool {
    var berr *ErrBadName
    return errors.As(target, &berr)
}

// newErrBadName returns a ErrBadName which returns the given formatted string from Error().
func newErrBadName(fmtStr string, args ...any) *ErrBadName {
    return &ErrBadName{fmt.Sprintf(fmtStr, args...)}
}

// IsErrBadName returns true if the given error is an ErrBadName.
//
// Deprecated: Use errors.Is.
func IsErrBadName(err error) bool {
    _, ok := err.(*ErrBadName)
    return ok
    var berr *ErrBadName
    return errors.As(err, &berr)
}

17 vendor/github.com/google/go-containerregistry/pkg/name/ref.go (generated, vendored)
@@ -44,8 +44,7 @@ func ParseReference(s string, opts ...Option) (Reference, error) {
    if d, err := NewDigest(s, opts...); err == nil {
        return d, nil
    }
    return nil, NewErrBadName("could not parse reference: " + s)

    return nil, newErrBadName("could not parse reference: " + s)
}

type stringConst string

@@ -57,16 +56,16 @@ type stringConst string
// To discourage its use in scenarios where the value is not known at code
// authoring time, it must be passed a string constant:
//
//   const str = "valid/string"
//   MustParseReference(str)
//   MustParseReference("another/valid/string")
//   MustParseReference(str + "/and/more")
//	const str = "valid/string"
//	MustParseReference(str)
//	MustParseReference("another/valid/string")
//	MustParseReference(str + "/and/more")
//
// These will not compile:
//
//   var str = "valid/string"
//   MustParseReference(str)
//   MustParseReference(strings.Join([]string{"valid", "string"}, "/"))
//	var str = "valid/string"
//	MustParseReference(str)
//	MustParseReference(strings.Join([]string{"valid", "string"}, "/"))
func MustParseReference(s stringConst, opts ...Option) Reference {
    ref, err := ParseReference(string(s), opts...)
    if err != nil {

4 vendor/github.com/google/go-containerregistry/pkg/name/registry.go (generated, vendored)
@@ -98,7 +98,7 @@ func checkRegistry(name string) error {
    // Per RFC 3986, registries (authorities) are required to be prefixed with "//"
    // url.Host == hostname[:port] == authority
    if url, err := url.Parse("//" + name); err != nil || url.Host != name {
        return NewErrBadName("registries must be valid RFC 3986 URI authorities: %s", name)
        return newErrBadName("registries must be valid RFC 3986 URI authorities: %s", name)
    }
    return nil
}
@@ -108,7 +108,7 @@ func checkRegistry(name string) error {
func NewRegistry(name string, opts ...Option) (Registry, error) {
    opt := makeOptions(opts...)
    if opt.strict && len(name) == 0 {
        return Registry{}, NewErrBadName("strict validation requires the registry to be explicitly defined")
        return Registry{}, newErrBadName("strict validation requires the registry to be explicitly defined")
    }

    if err := checkRegistry(name); err != nil {

4 vendor/github.com/google/go-containerregistry/pkg/name/repository.go (generated, vendored)
@@ -72,7 +72,7 @@ func checkRepository(repository string) error {
func NewRepository(name string, opts ...Option) (Repository, error) {
    opt := makeOptions(opts...)
    if len(name) == 0 {
        return Repository{}, NewErrBadName("a repository name must be specified")
        return Repository{}, newErrBadName("a repository name must be specified")
    }

    var registry string
@@ -95,7 +95,7 @@ func NewRepository(name string, opts ...Option) (Repository, error) {
        return Repository{}, err
    }
    if hasImplicitNamespace(repo, reg) && opt.strict {
        return Repository{}, NewErrBadName("strict validation requires the full repository path (missing 'library')")
        return Repository{}, newErrBadName("strict validation requires the full repository path (missing 'library')")
    }
    return Repository{reg, repo}, nil
}
22
vendor/github.com/google/go-containerregistry/pkg/v1/config.go
generated
vendored
22
vendor/github.com/google/go-containerregistry/pkg/v1/config.go
generated
vendored
@@ -37,6 +37,22 @@ type ConfigFile struct {
    RootFS       RootFS   `json:"rootfs"`
    Config       Config   `json:"config"`
    OSVersion    string   `json:"os.version,omitempty"`
    Variant      string   `json:"variant,omitempty"`
    OSFeatures   []string `json:"os.features,omitempty"`
}

// Platform attempts to generate a Platform from the ConfigFile fields.
func (cf *ConfigFile) Platform() *Platform {
    if cf.OS == "" && cf.Architecture == "" && cf.OSVersion == "" && cf.Variant == "" && len(cf.OSFeatures) == 0 {
        return nil
    }
    return &Platform{
        OS:           cf.OS,
        Architecture: cf.Architecture,
        OSVersion:    cf.OSVersion,
        Variant:      cf.Variant,
        OSFeatures:   cf.OSFeatures,
    }
}

// History is one entry of a list recording how this container image was built.
@@ -89,8 +105,10 @@ type HealthConfig struct {
}

// Config is a submessage of the config file described as:
// The execution parameters which SHOULD be used as a base when running
// a container using the image.
//
// The execution parameters which SHOULD be used as a base when running
// a container using the image.
//
// The names of the fields in this message are chosen to reflect the JSON
// payload of the Config as defined here:
// https://git.io/vrAET

6 vendor/github.com/google/go-containerregistry/pkg/v1/hash.go (generated, vendored)
@@ -15,7 +15,7 @@
package v1

import (
    "crypto/sha256"
    "crypto"
    "encoding/hex"
    "encoding/json"
    "fmt"
@@ -78,7 +78,7 @@ func (h *Hash) UnmarshalText(text []byte) error {
func Hasher(name string) (hash.Hash, error) {
    switch name {
    case "sha256":
        return sha256.New(), nil
        return crypto.SHA256.New(), nil
    default:
        return nil, fmt.Errorf("unsupported hash: %q", name)
    }
@@ -111,7 +111,7 @@ func (h *Hash) parse(unquoted string) error {

// SHA256 computes the Hash of the provided io.Reader's content.
func SHA256(r io.Reader) (Hash, int64, error) {
    hasher := sha256.New()
    hasher := crypto.SHA256.New()
    n, err := io.Copy(hasher, r)
    if err != nil {
        return Hash{}, 0, err
|
||||
|
||||
16 vendor/github.com/google/go-containerregistry/pkg/v1/manifest.go generated vendored

@@ -28,6 +28,7 @@ type Manifest struct {
	Config      Descriptor        `json:"config"`
	Layers      []Descriptor      `json:"layers"`
	Annotations map[string]string `json:"annotations,omitempty"`
	Subject     *Descriptor       `json:"subject,omitempty"`
}

// IndexManifest represents an OCI image index in a structured way.

@@ -36,16 +37,19 @@ type IndexManifest struct {
	MediaType   types.MediaType   `json:"mediaType,omitempty"`
	Manifests   []Descriptor      `json:"manifests"`
	Annotations map[string]string `json:"annotations,omitempty"`
	Subject     *Descriptor       `json:"subject,omitempty"`
}

// Descriptor holds a reference from the manifest to one of its constituent elements.
type Descriptor struct {
	MediaType   types.MediaType   `json:"mediaType"`
	Size        int64             `json:"size"`
	Digest      Hash              `json:"digest"`
	URLs        []string          `json:"urls,omitempty"`
	Annotations map[string]string `json:"annotations,omitempty"`
	Platform    *Platform         `json:"platform,omitempty"`
	MediaType    types.MediaType   `json:"mediaType"`
	Size         int64             `json:"size"`
	Digest       Hash              `json:"digest"`
	Data         []byte            `json:"data,omitempty"`
	URLs         []string          `json:"urls,omitempty"`
	Annotations  map[string]string `json:"annotations,omitempty"`
	Platform     *Platform         `json:"platform,omitempty"`
	ArtifactType string            `json:"artifactType,omitempty"`
}

// ParseManifest parses the io.Reader's contents into a Manifest.
4 vendor/github.com/google/go-containerregistry/pkg/v1/match/match.go generated vendored

@@ -25,7 +25,9 @@ import (
type Matcher func(desc v1.Descriptor) bool

// Name returns a match.Matcher that matches based on the value of the
// "org.opencontainers.image.ref.name" annotation:
//
// github.com/opencontainers/image-spec/blob/v1.0.1/annotations.md#pre-defined-annotation-keys
func Name(name string) Matcher {
	return Annotation(imagespec.AnnotationRefName, name)
2 vendor/github.com/google/go-containerregistry/pkg/v1/partial/README.md generated vendored

@@ -29,7 +29,7 @@ In a tarball, blobs are (often) uncompressed, so it's easiest to implement a `v1.
of uncompressed layers. `tarball.uncompressedImage` does this by implementing `UncompressedImageCore`:

```go
type CompressedImageCore interface {
type UncompressedImageCore interface {
	RawConfigFile() ([]byte, error)
	MediaType() (types.MediaType, error)
	LayerByDiffID(v1.Hash) (UncompressedLayer, error)
29 vendor/github.com/google/go-containerregistry/pkg/v1/partial/compressed.go generated vendored

@@ -17,7 +17,11 @@ package partial
import (
	"io"

	"github.com/google/go-containerregistry/internal/and"
	"github.com/google/go-containerregistry/internal/compression"
	"github.com/google/go-containerregistry/internal/gzip"
	"github.com/google/go-containerregistry/internal/zstd"
	comp "github.com/google/go-containerregistry/pkg/compression"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/types"
)

@@ -45,11 +49,32 @@ type compressedLayerExtender struct {

// Uncompressed implements v1.Layer
func (cle *compressedLayerExtender) Uncompressed() (io.ReadCloser, error) {
	r, err := cle.Compressed()
	rc, err := cle.Compressed()
	if err != nil {
		return nil, err
	}
	return gzip.UnzipReadCloser(r)

	// Often, the "compressed" bytes are not actually-compressed.
	// Peek at the first two bytes to determine whether it's correct to
	// wrap this with gzip.UnzipReadCloser or zstd.UnzipReadCloser.
	cp, pr, err := compression.PeekCompression(rc)
	if err != nil {
		return nil, err
	}

	prc := &and.ReadCloser{
		Reader:    pr,
		CloseFunc: rc.Close,
	}

	switch cp {
	case comp.GZip:
		return gzip.UnzipReadCloser(prc)
	case comp.ZStd:
		return zstd.UnzipReadCloser(prc)
	default:
		return prc, nil
	}
}

// DiffID implements v1.Layer
2 vendor/github.com/google/go-containerregistry/pkg/v1/partial/index.go generated vendored

@@ -26,7 +26,7 @@ func FindManifests(index v1.ImageIndex, matcher match.Matcher) ([]v1.Descriptor,
	// get the actual manifest list
	indexManifest, err := index.IndexManifest()
	if err != nil {
		return nil, fmt.Errorf("unable to get raw index: %v", err)
		return nil, fmt.Errorf("unable to get raw index: %w", err)
	}
	manifests := []v1.Descriptor{}
	// try to get the root of our image
57 vendor/github.com/google/go-containerregistry/pkg/v1/partial/with.go generated vendored

@@ -19,7 +19,6 @@ import (
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"

	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/types"

@@ -67,12 +66,12 @@ func (cl *configLayer) DiffID() (v1.Hash, error) {

// Uncompressed implements v1.Layer
func (cl *configLayer) Uncompressed() (io.ReadCloser, error) {
	return ioutil.NopCloser(bytes.NewBuffer(cl.content)), nil
	return io.NopCloser(bytes.NewBuffer(cl.content)), nil
}

// Compressed implements v1.Layer
func (cl *configLayer) Compressed() (io.ReadCloser, error) {
	return ioutil.NopCloser(bytes.NewBuffer(cl.content)), nil
	return io.NopCloser(bytes.NewBuffer(cl.content)), nil
}

// Size implements v1.Layer

@@ -88,9 +87,22 @@ func (cl *configLayer) MediaType() (types.MediaType, error) {

var _ v1.Layer = (*configLayer)(nil)

// withConfigLayer allows partial image implementations to provide a layer
// for their config file.
type withConfigLayer interface {
	ConfigLayer() (v1.Layer, error)
}

// ConfigLayer implements v1.Layer from the raw config bytes.
// This is so that clients (e.g. remote) can access the config as a blob.
//
// Images that want to return a specific layer implementation can implement
// withConfigLayer.
func ConfigLayer(i WithRawConfigFile) (v1.Layer, error) {
	if wcl, ok := unwrap(i).(withConfigLayer); ok {
		return wcl.ConfigLayer()
	}

	h, err := ConfigName(i)
	if err != nil {
		return nil, err

@@ -316,10 +328,28 @@ func Descriptor(d Describable) (*v1.Descriptor, error) {
	if desc.MediaType, err = d.MediaType(); err != nil {
		return nil, err
	}
	if wat, ok := d.(withArtifactType); ok {
		if desc.ArtifactType, err = wat.ArtifactType(); err != nil {
			return nil, err
		}
	} else {
		if wrm, ok := d.(WithRawManifest); ok && desc.MediaType.IsImage() {
			mf, _ := Manifest(wrm)
			// Failing to parse as a manifest should just be ignored.
			// The manifest might not be valid, and that's okay.
			if mf != nil && !mf.Config.MediaType.IsConfig() {
				desc.ArtifactType = string(mf.Config.MediaType)
			}
		}
	}

	return &desc, nil
}

type withArtifactType interface {
	ArtifactType() (string, error)
}

type withUncompressedSize interface {
	UncompressedSize() (int64, error)
}

@@ -342,7 +372,7 @@ func UncompressedSize(l v1.Layer) (int64, error) {
	}
	defer rc.Close()

	return io.Copy(ioutil.Discard, rc)
	return io.Copy(io.Discard, rc)
}

type withExists interface {

@@ -372,7 +402,7 @@ func Exists(l v1.Layer) (bool, error) {

// Recursively unwrap our wrappers so that we can check for the original implementation.
// We might want to expose this?
func unwrap(i interface{}) interface{} {
func unwrap(i any) any {
	if ule, ok := i.(*uncompressedLayerExtender); ok {
		return unwrap(ule.UncompressedLayer)
	}

@@ -387,3 +417,20 @@ func unwrap(i interface{}) interface{} {
	}
	return i
}

// ArtifactType returns the artifact type for the given manifest.
//
// If the manifest reports its own artifact type, that's returned, otherwise
// the manifest is parsed and, if successful, its config.mediaType is returned.
func ArtifactType(w WithManifest) (string, error) {
	if wat, ok := w.(withArtifactType); ok {
		return wat.ArtifactType()
	}
	mf, _ := w.Manifest()
	// Failing to parse as a manifest should just be ignored.
	// The manifest might not be valid, and that's okay.
	if mf != nil && !mf.Config.MediaType.IsConfig() {
		return string(mf.Config.MediaType), nil
	}
	return "", nil
}
104 vendor/github.com/google/go-containerregistry/pkg/v1/platform.go generated vendored

@@ -15,7 +15,9 @@
package v1

import (
	"fmt"
	"sort"
	"strings"
)

// Platform represents the target os/arch for an image.

@@ -28,11 +30,100 @@ type Platform struct {
	Features []string `json:"features,omitempty"`
}

func (p Platform) String() string {
	if p.OS == "" {
		return ""
	}
	var b strings.Builder
	b.WriteString(p.OS)
	if p.Architecture != "" {
		b.WriteString("/")
		b.WriteString(p.Architecture)
	}
	if p.Variant != "" {
		b.WriteString("/")
		b.WriteString(p.Variant)
	}
	if p.OSVersion != "" {
		b.WriteString(":")
		b.WriteString(p.OSVersion)
	}
	return b.String()
}

// ParsePlatform parses a string representing a Platform, if possible.
func ParsePlatform(s string) (*Platform, error) {
	var p Platform
	parts := strings.Split(strings.TrimSpace(s), ":")
	if len(parts) == 2 {
		p.OSVersion = parts[1]
	}
	parts = strings.Split(parts[0], "/")
	if len(parts) > 0 {
		p.OS = parts[0]
	}
	if len(parts) > 1 {
		p.Architecture = parts[1]
	}
	if len(parts) > 2 {
		p.Variant = parts[2]
	}
	if len(parts) > 3 {
		return nil, fmt.Errorf("too many slashes in platform spec: %s", s)
	}
	return &p, nil
}

// Equals returns true if the given platform is semantically equivalent to this one.
// The order of Features and OSFeatures is not important.
func (p Platform) Equals(o Platform) bool {
	return p.OS == o.OS && p.Architecture == o.Architecture && p.Variant == o.Variant && p.OSVersion == o.OSVersion &&
		stringSliceEqualIgnoreOrder(p.OSFeatures, o.OSFeatures) && stringSliceEqualIgnoreOrder(p.Features, o.Features)
	return p.OS == o.OS &&
		p.Architecture == o.Architecture &&
		p.Variant == o.Variant &&
		p.OSVersion == o.OSVersion &&
		stringSliceEqualIgnoreOrder(p.OSFeatures, o.OSFeatures) &&
		stringSliceEqualIgnoreOrder(p.Features, o.Features)
}

// Satisfies returns true if this Platform "satisfies" the given spec Platform.
//
// Note that this is different from Equals and that Satisfies is not reflexive.
//
// The given spec represents "requirements" such that any missing values in the
// spec are not compared.
//
// For OSFeatures and Features, Satisfies will return true if this Platform's
// fields contain a superset of the values in the spec's fields (order ignored).
func (p Platform) Satisfies(spec Platform) bool {
	return satisfies(spec.OS, p.OS) &&
		satisfies(spec.Architecture, p.Architecture) &&
		satisfies(spec.Variant, p.Variant) &&
		satisfies(spec.OSVersion, p.OSVersion) &&
		satisfiesList(spec.OSFeatures, p.OSFeatures) &&
		satisfiesList(spec.Features, p.Features)
}

func satisfies(want, have string) bool {
	return want == "" || want == have
}

func satisfiesList(want, have []string) bool {
	if len(want) == 0 {
		return true
	}

	set := map[string]struct{}{}
	for _, h := range have {
		set[h] = struct{}{}
	}

	for _, w := range want {
		if _, ok := set[w]; !ok {
			return false
		}
	}

	return true
}

// stringSliceEqual compares 2 string slices and returns if their contents are identical.

@@ -50,10 +141,9 @@ func stringSliceEqual(a, b []string) bool {

// stringSliceEqualIgnoreOrder compares 2 string slices and returns if their contents are identical, ignoring order
func stringSliceEqualIgnoreOrder(a, b []string) bool {
	a1, b1 := a[:], b[:]
	if a1 != nil && b1 != nil {
		sort.Strings(a1)
		sort.Strings(b1)
	if a != nil && b != nil {
		sort.Strings(a)
		sort.Strings(b)
	}
	return stringSliceEqual(a1, b1)
	return stringSliceEqual(a, b)
}
11 vendor/github.com/google/go-containerregistry/pkg/v1/remote/catalog.go generated vendored

@@ -88,10 +88,13 @@ func Catalog(ctx context.Context, target name.Registry, options ...Option) ([]st
	}

	uri := &url.URL{
		Scheme:   target.Scheme(),
		Host:     target.RegistryStr(),
		Path:     "/v2/_catalog",
		RawQuery: "n=10000",
		Scheme: target.Scheme(),
		Host:   target.RegistryStr(),
		Path:   "/v2/_catalog",
	}

	if o.pageSize > 0 {
		uri.RawQuery = fmt.Sprintf("n=%d", o.pageSize)
	}

	client := http.Client{Transport: tr}
27 vendor/github.com/google/go-containerregistry/pkg/v1/remote/check.go generated vendored

@@ -1,3 +1,17 @@
// Copyright 2019 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package remote

import (

@@ -20,13 +34,13 @@ import (
func CheckPushPermission(ref name.Reference, kc authn.Keychain, t http.RoundTripper) error {
	auth, err := kc.Resolve(ref.Context().Registry)
	if err != nil {
		return fmt.Errorf("resolving authorization for %v failed: %v", ref.Context().Registry, err)
		return fmt.Errorf("resolving authorization for %v failed: %w", ref.Context().Registry, err)
	}

	scopes := []string{ref.Scope(transport.PushScope)}
	tr, err := transport.New(ref.Context().Registry, auth, t, scopes)
	tr, err := transport.NewWithContext(context.TODO(), ref.Context().Registry, auth, t, scopes)
	if err != nil {
		return fmt.Errorf("creating push check transport for %v failed: %v", ref.Context().Registry, err)
		return fmt.Errorf("creating push check transport for %v failed: %w", ref.Context().Registry, err)
	}
	// TODO(jasonhall): Against GCR, just doing the token handshake is
	// enough, but this doesn't extend to Dockerhub

@@ -35,11 +49,10 @@ func CheckPushPermission(ref name.Reference, kc authn.Keychain, t http.RoundTrip
	// authorize a push. Figure out how to return early here when we can,
	// to avoid a roundtrip for spec-compliant registries.
	w := writer{
		repo:    ref.Context(),
		client:  &http.Client{Transport: tr},
		context: context.Background(),
		repo:   ref.Context(),
		client: &http.Client{Transport: tr},
	}
	loc, _, err := w.initiateUpload("", "")
	loc, _, err := w.initiateUpload(context.Background(), "", "", "")
	if loc != "" {
		// Since we're only initiating the upload to check whether we
		// can, we should attempt to cancel it, in case initiating
4 vendor/github.com/google/go-containerregistry/pkg/v1/remote/delete.go generated vendored

@@ -54,4 +54,8 @@ func Delete(ref name.Reference, options ...Option) error {
	defer resp.Body.Close()

	return transport.CheckError(resp, http.StatusOK, http.StatusAccepted)

	// TODO(jason): If the manifest had a `subject`, and if the registry
	// doesn't support Referrers, update the index pointed to by the
	// subject's fallback tag to remove the descriptor for this manifest.
}
125 vendor/github.com/google/go-containerregistry/pkg/v1/remote/descriptor.go generated vendored

@@ -17,14 +17,15 @@ package remote
import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"net/url"
	"strconv"
	"strings"

	"github.com/google/go-containerregistry/internal/redact"
	"github.com/google/go-containerregistry/internal/verify"
	"github.com/google/go-containerregistry/pkg/logs"
	"github.com/google/go-containerregistry/pkg/name"

@@ -60,7 +61,7 @@ type Descriptor struct {
	v1.Descriptor
	Manifest []byte

	// So we can share this implementation with Image..
	// So we can share this implementation with Image.
	platform v1.Platform
}

@@ -238,6 +239,56 @@ func (f *fetcher) url(resource, identifier string) url.URL {
	}
}

// https://github.com/opencontainers/distribution-spec/blob/main/spec.md#referrers-tag-schema
func fallbackTag(d name.Digest) name.Tag {
	return d.Context().Tag(strings.Replace(d.DigestStr(), ":", "-", 1))
}

func (f *fetcher) fetchReferrers(ctx context.Context, filter map[string]string, d name.Digest) (*v1.IndexManifest, error) {
	// Check the Referrers API endpoint first.
	u := f.url("referrers", d.DigestStr())
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", string(types.OCIImageIndex))

	resp, err := f.Client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if err := transport.CheckError(resp, http.StatusOK, http.StatusNotFound, http.StatusBadRequest); err != nil {
		return nil, err
	}
	if resp.StatusCode == http.StatusOK {
		var im v1.IndexManifest
		if err := json.NewDecoder(resp.Body).Decode(&im); err != nil {
			return nil, err
		}
		return filterReferrersResponse(filter, &im), nil
	}

	// The registry doesn't support the Referrers API endpoint, so we'll use the fallback tag scheme.
	b, _, err := f.fetchManifest(fallbackTag(d), []types.MediaType{types.OCIImageIndex})
	if err != nil {
		return nil, err
	}
	var terr *transport.Error
	if ok := errors.As(err, &terr); ok && terr.StatusCode == http.StatusNotFound {
		// Not found just means there are no attachments yet. Start with an empty manifest.
		return &v1.IndexManifest{MediaType: types.OCIImageIndex}, nil
	}

	var im v1.IndexManifest
	if err := json.Unmarshal(b, &im); err != nil {
		return nil, err
	}

	return filterReferrersResponse(filter, &im), nil
}

func (f *fetcher) fetchManifest(ref name.Reference, acceptable []types.MediaType) ([]byte, *v1.Descriptor, error) {
	u := f.url("manifests", ref.Identifier())
	req, err := http.NewRequest(http.MethodGet, u.String(), nil)

@@ -260,7 +311,7 @@ func (f *fetcher) fetchManifest(ref name.Reference, acceptable []types.MediaType
		return nil, nil, err
	}

	manifest, err := ioutil.ReadAll(resp.Body)
	manifest, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, nil, err
	}

@@ -284,6 +335,15 @@ func (f *fetcher) fetchManifest(ref name.Reference, acceptable []types.MediaType
			return nil, nil, fmt.Errorf("manifest digest: %q does not match requested digest: %q for %q", digest, dgst.DigestStr(), f.Ref)
		}
	}

	var artifactType string
	mf, _ := v1.ParseManifest(bytes.NewReader(manifest))
	// Failing to parse as a manifest should just be ignored.
	// The manifest might not be valid, and that's okay.
	if mf != nil && !mf.Config.MediaType.IsConfig() {
		artifactType = string(mf.Config.MediaType)
	}

	// Do nothing for tags; I give up.
	//
	// We'd like to validate that the "Docker-Content-Digest" header matches what is returned by the registry,

@@ -294,9 +354,10 @@ func (f *fetcher) fetchManifest(ref name.Reference, acceptable []types.MediaType

	// Return all this info since we have to calculate it anyway.
	desc := v1.Descriptor{
		Digest:    digest,
		Size:      size,
		MediaType: mediaType,
		Digest:       digest,
		Size:         size,
		MediaType:    mediaType,
		ArtifactType: artifactType,
	}

	return manifest, &desc, nil

@@ -330,13 +391,9 @@ func (f *fetcher) headManifest(ref name.Reference, acceptable []types.MediaType)
	}
	mediaType := types.MediaType(mth)

	lh := resp.Header.Get("Content-Length")
	if lh == "" {
		return nil, fmt.Errorf("HEAD %s: response did not include Content-Length header", u.String())
	}
	size, err := strconv.ParseInt(lh, 10, 64)
	if err != nil {
		return nil, err
	size := resp.ContentLength
	if size == -1 {
		return nil, fmt.Errorf("GET %s: response did not include Content-Length header", u.String())
	}

	dh := resp.Header.Get("Docker-Content-Digest")

@@ -363,7 +420,7 @@ func (f *fetcher) headManifest(ref name.Reference, acceptable []types.MediaType)
	}, nil
}

func (f *fetcher) fetchBlob(ctx context.Context, h v1.Hash) (io.ReadCloser, error) {
func (f *fetcher) fetchBlob(ctx context.Context, size int64, h v1.Hash) (io.ReadCloser, error) {
	u := f.url("blobs", h.String())
	req, err := http.NewRequest(http.MethodGet, u.String(), nil)
	if err != nil {

@@ -372,7 +429,7 @@ func (f *fetcher) fetchBlob(ctx context.Context, h v1.Hash) (io.ReadCloser, erro

	resp, err := f.Client.Do(req.WithContext(ctx))
	if err != nil {
		return nil, err
		return nil, redact.Error(err)
	}

	if err := transport.CheckError(resp, http.StatusOK); err != nil {

@@ -380,7 +437,18 @@ func (f *fetcher) fetchBlob(ctx context.Context, h v1.Hash) (io.ReadCloser, erro
		return nil, err
	}

	return verify.ReadCloser(resp.Body, h)
	// Do whatever we can.
	// If we have an expected size and Content-Length doesn't match, return an error.
	// If we don't have an expected size and we do have a Content-Length, use Content-Length.
	if hsize := resp.ContentLength; hsize != -1 {
		if size == verify.SizeUnknown {
			size = hsize
		} else if hsize != size {
			return nil, fmt.Errorf("GET %s: Content-Length header %d does not match expected size %d", u.String(), hsize, size)
		}
	}

	return verify.ReadCloser(resp.Body, size, h)
}

func (f *fetcher) headBlob(h v1.Hash) (*http.Response, error) {

@@ -392,7 +460,7 @@ func (f *fetcher) headBlob(h v1.Hash) (*http.Response, error) {

	resp, err := f.Client.Do(req.WithContext(f.context))
	if err != nil {
		return nil, err
		return nil, redact.Error(err)
	}

	if err := transport.CheckError(resp, http.StatusOK); err != nil {

@@ -412,7 +480,7 @@ func (f *fetcher) blobExists(h v1.Hash) (bool, error) {

	resp, err := f.Client.Do(req.WithContext(f.context))
	if err != nil {
		return false, err
		return false, redact.Error(err)
	}
	defer resp.Body.Close()

@@ -422,3 +490,22 @@ func (f *fetcher) blobExists(h v1.Hash) (bool, error) {

	return resp.StatusCode == http.StatusOK, nil
}

// If filter applied, filter out by artifactType.
// See https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers
func filterReferrersResponse(filter map[string]string, origIndex *v1.IndexManifest) *v1.IndexManifest {
	newIndex := origIndex
	if filter == nil {
		return newIndex
	}
	if v, ok := filter["artifactType"]; ok {
		tmp := []v1.Descriptor{}
		for _, desc := range newIndex.Manifests {
			if desc.ArtifactType == v {
				tmp = append(tmp, desc)
			}
		}
		newIndex.Manifests = tmp
	}
	return newIndex
}
29 vendor/github.com/google/go-containerregistry/pkg/v1/remote/image.go generated vendored

@@ -15,8 +15,8 @@
package remote

import (
	"bytes"
	"io"
	"io/ioutil"
	"net/http"
	"net/url"
	"sync"

@@ -46,6 +46,15 @@ type remoteImage struct {
	descriptor *v1.Descriptor
}

func (r *remoteImage) ArtifactType() (string, error) {
	// kind of a hack, but RawManifest does appropriate locking/memoization
	// and makes sure r.descriptor is populated.
	if _, err := r.RawManifest(); err != nil {
		return "", err
	}
	return r.descriptor.ArtifactType, nil
}

var _ partial.CompressedImageCore = (*remoteImage)(nil)

// Image provides access to a remote image reference.

@@ -100,13 +109,21 @@ func (r *remoteImage) RawConfigFile() ([]byte, error) {
		return nil, err
	}

	body, err := r.fetchBlob(r.context, m.Config.Digest)
	if m.Config.Data != nil {
		if err := verify.Descriptor(m.Config); err != nil {
			return nil, err
		}
		r.config = m.Config.Data
		return r.config, nil
	}

	body, err := r.fetchBlob(r.context, m.Config.Size, m.Config.Digest)
	if err != nil {
		return nil, err
	}
	defer body.Close()

	r.config, err = ioutil.ReadAll(body)
	r.config, err = io.ReadAll(body)
	if err != nil {
		return nil, err
	}

@@ -143,6 +160,10 @@ func (rl *remoteImageLayer) Compressed() (io.ReadCloser, error) {
		return nil, err
	}

	if d.Data != nil {
		return verify.ReadCloser(io.NopCloser(bytes.NewReader(d.Data)), d.Size, d.Digest)
	}

	// We don't want to log binary layers -- this can break terminals.
	ctx := redact.NewContext(rl.ri.context, "omitting binary blobs from logs")

@@ -177,7 +198,7 @@ func (rl *remoteImageLayer) Compressed() (io.ReadCloser, error) {
			continue
		}

		return verify.ReadCloser(resp.Body, rl.digest)
		return verify.ReadCloser(resp.Body, d.Size, rl.digest)
	}

	return nil, lastErr
70
vendor/github.com/google/go-containerregistry/pkg/v1/remote/index.go
generated
vendored
70
vendor/github.com/google/go-containerregistry/pkg/v1/remote/index.go
generated
vendored
@@ -19,6 +19,7 @@ import (
|
||||
"fmt"
|
||||
"sync"
|
||||
|
||||
"github.com/google/go-containerregistry/internal/verify"
|
||||
"github.com/google/go-containerregistry/pkg/name"
|
||||
v1 "github.com/google/go-containerregistry/pkg/v1"
|
||||
"github.com/google/go-containerregistry/pkg/v1/partial"
|
||||
@@ -146,6 +147,40 @@ func (r *remoteIndex) Layer(h v1.Hash) (v1.Layer, error) {
|
||||
return nil, fmt.Errorf("layer not found: %s", h)
|
||||
}
|
||||
|
||||
// Experiment with a better API for v1.ImageIndex. We might want to move this
|
||||
// to partial?
|
func (r *remoteIndex) Manifests() ([]partial.Describable, error) {
	m, err := r.IndexManifest()
	if err != nil {
		return nil, err
	}
	manifests := []partial.Describable{}
	for _, desc := range m.Manifests {
		switch {
		case desc.MediaType.IsImage():
			img, err := r.Image(desc.Digest)
			if err != nil {
				return nil, err
			}
			manifests = append(manifests, img)
		case desc.MediaType.IsIndex():
			idx, err := r.ImageIndex(desc.Digest)
			if err != nil {
				return nil, err
			}
			manifests = append(manifests, idx)
		default:
			layer, err := r.Layer(desc.Digest)
			if err != nil {
				return nil, err
			}
			manifests = append(manifests, layer)
		}
	}

	return manifests, nil
}

func (r *remoteIndex) imageByPlatform(platform v1.Platform) (v1.Image, error) {
	desc, err := r.childByPlatform(platform)
	if err != nil {
@@ -159,10 +194,12 @@ func (r *remoteIndex) imageByPlatform(platform v1.Platform) (v1.Image, error) {
 // This naively matches the first manifest with matching platform attributes.
 //
 // We should probably use this instead:
-// github.com/containerd/containerd/platforms
+//
+//	github.com/containerd/containerd/platforms
 //
 // But first we'd need to migrate to:
-// github.com/opencontainers/image-spec/specs-go/v1
+//
+//	github.com/opencontainers/image-spec/specs-go/v1
 func (r *remoteIndex) childByPlatform(platform v1.Platform) (*Descriptor, error) {
 	index, err := r.IndexManifest()
 	if err != nil {
@@ -179,7 +216,7 @@ func (r *remoteIndex) childByPlatform(platform v1.Platform) (*Descriptor, error)
 			return r.childDescriptor(childDesc, platform)
 		}
 	}
-	return nil, fmt.Errorf("no child with platform %s/%s in index %s", platform.OS, platform.Architecture, r.Ref)
+	return nil, fmt.Errorf("no child with platform %+v in index %s", platform, r.Ref)
 }

 func (r *remoteIndex) childByHash(h v1.Hash) (*Descriptor, error) {
@@ -198,10 +235,31 @@ func (r *remoteIndex) childByHash(h v1.Hash) (*Descriptor, error) {
 // Convert one of this index's child's v1.Descriptor into a remote.Descriptor, with the given platform option.
 func (r *remoteIndex) childDescriptor(child v1.Descriptor, platform v1.Platform) (*Descriptor, error) {
 	ref := r.Ref.Context().Digest(child.Digest.String())
-	manifest, _, err := r.fetchManifest(ref, []types.MediaType{child.MediaType})
-	if err != nil {
-		return nil, err
+	var (
+		manifest []byte
+		err      error
+	)
+	if child.Data != nil {
+		if err := verify.Descriptor(child); err != nil {
+			return nil, err
+		}
+		manifest = child.Data
+	} else {
+		manifest, _, err = r.fetchManifest(ref, []types.MediaType{child.MediaType})
+		if err != nil {
+			return nil, err
+		}
 	}
+
+	if child.MediaType.IsImage() {
+		mf, _ := v1.ParseManifest(bytes.NewReader(manifest))
+		// Failing to parse as a manifest should just be ignored.
+		// The manifest might not be valid, and that's okay.
+		if mf != nil && !mf.Config.MediaType.IsConfig() {
+			child.ArtifactType = string(mf.Config.MediaType)
+		}
+	}
+
 	return &Descriptor{
 		fetcher: fetcher{
 			Ref: ref,
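The `childByPlatform` change above keeps the naive first-match semantics while improving the error message (`%+v` on the whole platform instead of just OS/architecture). A self-contained sketch of that matching logic; the `platform` and `descriptor` types here are illustrative stand-ins, not the library's `v1` types:

```go
package main

import "fmt"

// platform and descriptor are hypothetical stand-ins for v1.Platform and
// v1.Descriptor, reduced to the fields the naive match actually compares.
type platform struct{ OS, Architecture string }
type descriptor struct {
	Digest   string
	Platform platform
}

// firstMatch mimics childByPlatform: return the first child descriptor whose
// OS and Architecture equal the requested platform, else an error that
// formats the whole requested platform.
func firstMatch(children []descriptor, want platform) (descriptor, error) {
	for _, d := range children {
		if d.Platform.OS == want.OS && d.Platform.Architecture == want.Architecture {
			return d, nil
		}
	}
	return descriptor{}, fmt.Errorf("no child with platform %+v", want)
}

func main() {
	children := []descriptor{
		{Digest: "sha256:aaa", Platform: platform{"linux", "amd64"}},
		{Digest: "sha256:bbb", Platform: platform{"linux", "arm64"}},
	}
	d, err := firstMatch(children, platform{"linux", "arm64"})
	fmt.Println(d.Digest, err)
}
```

As the library's own comment notes, real matching (variants, OS versions) would want containerd's platforms package; this only shows the first-match behavior.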
3
vendor/github.com/google/go-containerregistry/pkg/v1/remote/layer.go
generated
vendored
@@ -18,6 +18,7 @@ import (
 	"io"

 	"github.com/google/go-containerregistry/internal/redact"
+	"github.com/google/go-containerregistry/internal/verify"
 	"github.com/google/go-containerregistry/pkg/name"
 	v1 "github.com/google/go-containerregistry/pkg/v1"
 	"github.com/google/go-containerregistry/pkg/v1/partial"
@@ -34,7 +35,7 @@ type remoteLayer struct {
 func (rl *remoteLayer) Compressed() (io.ReadCloser, error) {
 	// We don't want to log binary layers -- this can break terminals.
 	ctx := redact.NewContext(rl.context, "omitting binary blobs from logs")
-	return rl.fetchBlob(ctx, rl.digest)
+	return rl.fetchBlob(ctx, verify.SizeUnknown, rl.digest)
 }

 // Compressed implements partial.CompressedLayer
29
vendor/github.com/google/go-containerregistry/pkg/v1/remote/list.go
generated
vendored
@@ -31,14 +31,16 @@ type tags struct {
 	Tags []string `json:"tags"`
 }

-// List wraps ListWithContext using the background context.
-func List(repo name.Repository, options ...Option) ([]string, error) {
-	return ListWithContext(context.Background(), repo, options...)
+// ListWithContext calls List with the given context.
+//
+// Deprecated: Use List and WithContext. This will be removed in a future release.
+func ListWithContext(ctx context.Context, repo name.Repository, options ...Option) ([]string, error) {
+	return List(repo, append(options, WithContext(ctx))...)
 }

-// ListWithContext calls /tags/list for the given repository, returning the list of tags
+// List calls /tags/list for the given repository, returning the list of tags
 // in the "tags" property.
-func ListWithContext(ctx context.Context, repo name.Repository, options ...Option) ([]string, error) {
+func List(repo name.Repository, options ...Option) ([]string, error) {
 	o, err := makeOptions(repo, options...)
 	if err != nil {
 		return nil, err
@@ -53,16 +55,10 @@ func ListWithContext(ctx context.Context, repo name.Repository, options ...Optio
 		Scheme: repo.Registry.Scheme(),
 		Host:   repo.Registry.RegistryStr(),
 		Path:   fmt.Sprintf("/v2/%s/tags/list", repo.RepositoryStr()),
-		// ECR returns an error if n > 1000:
-		// https://github.com/google/go-containerregistry/issues/681
-		RawQuery: "n=1000",
 	}

-	// This is lazy, but I want to make sure List(..., WithContext(ctx)) works
-	// without calling makeOptions() twice (which can have side effects).
-	// This means ListWithContext(ctx, ..., WithContext(ctx2)) prefers ctx2.
-	if o.context != context.Background() {
-		ctx = o.context
+	if o.pageSize > 0 {
+		uri.RawQuery = fmt.Sprintf("n=%d", o.pageSize)
 	}

 	client := http.Client{Transport: tr}
@@ -72,16 +68,15 @@ func ListWithContext(ctx context.Context, repo name.Repository, options ...Optio
 	// get responses until there is no next page
 	for {
 		select {
-		case <-ctx.Done():
-			return nil, ctx.Err()
+		case <-o.context.Done():
+			return nil, o.context.Err()
 		default:
 		}

-		req, err := http.NewRequest("GET", uri.String(), nil)
+		req, err := http.NewRequestWithContext(o.context, "GET", uri.String(), nil)
 		if err != nil {
 			return nil, err
 		}
-		req = req.WithContext(ctx)

 		resp, err := client.Do(req)
 		if err != nil {
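The `List` rewrite above drops the hard-coded `RawQuery: "n=1000"` in favor of a configurable `o.pageSize` (settable via `WithPageSize`, with 0 omitting the parameter). A minimal standard-library sketch of how such a tags-list URL is assembled; `tagsListURL` is a hypothetical helper, not part of the package:

```go
package main

import (
	"fmt"
	"net/url"
)

// tagsListURL builds a /v2/<repo>/tags/list URL with an optional page-size
// query parameter 'n'. pageSize <= 0 omits the parameter entirely, matching
// the WithPageSize(0) behavior described in the diff.
func tagsListURL(host, repo string, pageSize int) string {
	uri := url.URL{
		Scheme: "https",
		Host:   host,
		Path:   fmt.Sprintf("/v2/%s/tags/list", repo),
	}
	if pageSize > 0 {
		uri.RawQuery = fmt.Sprintf("n=%d", pageSize)
	}
	return uri.String()
}

func main() {
	fmt.Println(tagsListURL("registry.example.com", "library/busybox", 1000))
}
```

The same pattern explains why ECR motivated the old cap: the registry rejects `n > 1000`, so the page size defaults to 1000 but can now be lowered or omitted.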
13
vendor/github.com/google/go-containerregistry/pkg/v1/remote/mount.go
generated
vendored
@@ -93,3 +93,16 @@ func (mi *mountableImage) LayerByDiffID(d v1.Hash) (v1.Layer, error) {
 func (mi *mountableImage) Descriptor() (*v1.Descriptor, error) {
 	return partial.Descriptor(mi.Image)
 }
+
+// ConfigLayer retains the original reference so that it can be mounted.
+// See partial.ConfigLayer.
+func (mi *mountableImage) ConfigLayer() (v1.Layer, error) {
+	l, err := partial.ConfigLayer(mi.Image)
+	if err != nil {
+		return nil, err
+	}
+	return &MountableLayer{
+		Layer:     l,
+		Reference: mi.Reference,
+	}, nil
+}
38
vendor/github.com/google/go-containerregistry/pkg/v1/remote/multi_write.go
generated
vendored
@@ -15,6 +15,7 @@
 package remote

 import (
+	"context"
 	"fmt"
 	"net/http"

@@ -86,30 +87,31 @@ func MultiWrite(m map[name.Reference]Taggable, options ...Option) (rerr error) {
 		return err
 	}
 	w := writer{
-		repo:       repo,
-		client:     &http.Client{Transport: tr},
-		context:    o.context,
-		updates:    o.updates,
-		lastUpdate: &v1.Update{},
+		repo:      repo,
+		client:    &http.Client{Transport: tr},
+		backoff:   o.retryBackoff,
+		predicate: o.retryPredicate,
 	}

 	// Collect the total size of blobs and manifests we're about to write.
 	if o.updates != nil {
+		w.progress = &progress{updates: o.updates}
+		w.progress.lastUpdate = &v1.Update{}
 		defer close(o.updates)
-		defer func() { sendError(o.updates, rerr) }()
+		defer func() { _ = w.progress.err(rerr) }()
 		for _, b := range blobs {
 			size, err := b.Size()
 			if err != nil {
 				return err
 			}
-			w.lastUpdate.Total += size
+			w.progress.total(size)
 		}
 		countManifest := func(t Taggable) error {
 			b, err := t.RawManifest()
 			if err != nil {
 				return err
 			}
-			w.lastUpdate.Total += int64(len(b))
+			w.progress.total(int64(len(b)))
 			return nil
 		}
 		for _, i := range images {
@@ -133,12 +135,13 @@ func MultiWrite(m map[name.Reference]Taggable, options ...Option) (rerr error) {

 	// Upload individual blobs and collect any errors.
 	blobChan := make(chan v1.Layer, 2*o.jobs)
-	g, ctx := errgroup.WithContext(o.context)
+	ctx := o.context
+	g, gctx := errgroup.WithContext(o.context)
 	for i := 0; i < o.jobs; i++ {
 		// Start N workers consuming blobs to upload.
 		g.Go(func() error {
 			for b := range blobChan {
-				if err := w.uploadOne(b); err != nil {
+				if err := w.uploadOne(gctx, b); err != nil {
 					return err
 				}
 			}
@@ -150,8 +153,8 @@ func MultiWrite(m map[name.Reference]Taggable, options ...Option) (rerr error) {
 		for _, b := range blobs {
 			select {
 			case blobChan <- b:
-			case <-ctx.Done():
-				return ctx.Err()
+			case <-gctx.Done():
+				return gctx.Err()
 			}
 		}
 		return nil
@@ -160,7 +163,8 @@ func MultiWrite(m map[name.Reference]Taggable, options ...Option) (rerr error) {
 		return err
 	}

-	commitMany := func(m map[name.Reference]Taggable) error {
+	commitMany := func(ctx context.Context, m map[name.Reference]Taggable) error {
+		g, ctx := errgroup.WithContext(ctx)
 		// With all of the constituent elements uploaded, upload the manifests
 		// to commit the images and indexes, and collect any errors.
 		type task struct {
@@ -172,7 +176,7 @@ func MultiWrite(m map[name.Reference]Taggable, options ...Option) (rerr error) {
 		// Start N workers consuming tasks to upload manifests.
 		g.Go(func() error {
 			for t := range taskChan {
-				if err := w.commitManifest(t.i, t.ref); err != nil {
+				if err := w.commitManifest(ctx, t.i, t.ref); err != nil {
 					return err
 				}
 			}
@@ -189,19 +193,19 @@ func MultiWrite(m map[name.Reference]Taggable, options ...Option) (rerr error) {
 	}
 	// Push originally requested image manifests. These have no
 	// dependencies.
-	if err := commitMany(images); err != nil {
+	if err := commitMany(ctx, images); err != nil {
 		return err
 	}
 	// Push new manifests from lowest levels up.
 	for i := len(newManifests) - 1; i >= 0; i-- {
-		if err := commitMany(newManifests[i]); err != nil {
+		if err := commitMany(ctx, newManifests[i]); err != nil {
 			return err
 		}
 	}
 	// Push originally requested index manifests, which might depend on
 	// newly discovered manifests.
-	return commitMany(indexes)
+	return commitMany(ctx, indexes)
 }

 // addIndexBlobs adds blobs to the set of blobs we intend to upload, and
160
vendor/github.com/google/go-containerregistry/pkg/v1/remote/options.go
generated
vendored
@@ -17,8 +17,13 @@ package remote
 import (
 	"context"
+	"errors"
+	"io"
+	"net"
 	"net/http"
+	"syscall"
+	"time"

 	"github.com/google/go-containerregistry/internal/retry"
 	"github.com/google/go-containerregistry/pkg/authn"
 	"github.com/google/go-containerregistry/pkg/logs"
 	v1 "github.com/google/go-containerregistry/pkg/v1"
@@ -38,6 +43,10 @@ type options struct {
 	userAgent                      string
 	allowNondistributableArtifacts bool
 	updates                        chan<- v1.Update
+	pageSize                       int
+	retryBackoff                   Backoff
+	retryPredicate                 retry.Predicate
+	filter                         map[string]string
 }

 var defaultPlatform = v1.Platform{
@@ -45,15 +54,75 @@ var defaultPlatform = v1.Platform{
 	OS:           "linux",
 }

-const defaultJobs = 4
+// Backoff is an alias of retry.Backoff to expose this configuration option to consumers of this lib
+type Backoff = retry.Backoff
+
+var defaultRetryPredicate retry.Predicate = func(err error) bool {
+	// Various failure modes here, as we're often reading from and writing to
+	// the network.
+	if retry.IsTemporary(err) || errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, io.EOF) || errors.Is(err, syscall.EPIPE) || errors.Is(err, syscall.ECONNRESET) {
+		logs.Warn.Printf("retrying %v", err)
+		return true
+	}
+	return false
+}
+
+// Try this three times, waiting 1s after first failure, 3s after second.
+var defaultRetryBackoff = Backoff{
+	Duration: 1.0 * time.Second,
+	Factor:   3.0,
+	Jitter:   0.1,
+	Steps:    3,
+}
+
+// Useful for tests
+var fastBackoff = Backoff{
+	Duration: 1.0 * time.Millisecond,
+	Factor:   3.0,
+	Jitter:   0.1,
+	Steps:    3,
+}
+
+var retryableStatusCodes = []int{
+	http.StatusRequestTimeout,
+	http.StatusInternalServerError,
+	http.StatusBadGateway,
+	http.StatusServiceUnavailable,
+	http.StatusGatewayTimeout,
+}
+
+const (
+	defaultJobs = 4
+
+	// ECR returns an error if n > 1000:
+	// https://github.com/google/go-containerregistry/issues/1091
+	defaultPageSize = 1000
+)
+
+// DefaultTransport is based on http.DefaultTransport with modifications
+// documented inline below.
+var DefaultTransport http.RoundTripper = &http.Transport{
+	Proxy: http.ProxyFromEnvironment,
+	DialContext: (&net.Dialer{
+		Timeout:   30 * time.Second,
+		KeepAlive: 30 * time.Second,
+	}).DialContext,
+	ForceAttemptHTTP2:     true,
+	MaxIdleConns:          100,
+	IdleConnTimeout:       90 * time.Second,
+	TLSHandshakeTimeout:   10 * time.Second,
+	ExpectContinueTimeout: 1 * time.Second,
+}

 func makeOptions(target authn.Resource, opts ...Option) (*options, error) {
 	o := &options{
-		auth:      authn.Anonymous,
-		transport: http.DefaultTransport,
-		platform:  defaultPlatform,
-		context:   context.Background(),
-		jobs:      defaultJobs,
+		transport:      DefaultTransport,
+		platform:       defaultPlatform,
+		context:        context.Background(),
+		jobs:           defaultJobs,
+		pageSize:       defaultPageSize,
+		retryPredicate: defaultRetryPredicate,
+		retryBackoff:   defaultRetryBackoff,
 	}

 	for _, option := range opts {
@@ -62,27 +131,38 @@ func makeOptions(target authn.Resource, opts ...Option) (*options, error) {
 		}
 	}

-	if o.keychain != nil {
+	switch {
+	case o.auth != nil && o.keychain != nil:
+		// It is a better experience to explicitly tell a caller their auth is misconfigured
+		// than potentially fail silently when the correct auth is overridden by option misuse.
+		return nil, errors.New("provide an option for either authn.Authenticator or authn.Keychain, not both")
+	case o.keychain != nil:
 		auth, err := o.keychain.Resolve(target)
 		if err != nil {
 			return nil, err
 		}
 		o.auth = auth
+	case o.auth == nil:
+		o.auth = authn.Anonymous
 	}

-	// Wrap the transport in something that logs requests and responses.
-	// It's expensive to generate the dumps, so skip it if we're writing
-	// to nothing.
-	if logs.Enabled(logs.Debug) {
-		o.transport = transport.NewLogger(o.transport)
-	}
+	// transport.Wrapper is a signal that consumers are opt-ing into providing their own transport without any additional wrapping.
+	// This is to allow consumers full control over the transports logic, such as providing retry logic.
+	if _, ok := o.transport.(*transport.Wrapper); !ok {
+		// Wrap the transport in something that logs requests and responses.
+		// It's expensive to generate the dumps, so skip it if we're writing
+		// to nothing.
+		if logs.Enabled(logs.Debug) {
+			o.transport = transport.NewLogger(o.transport)
+		}

-	// Wrap the transport in something that can retry network flakes.
-	o.transport = transport.NewRetry(o.transport)
+		// Wrap the transport in something that can retry network flakes.
+		o.transport = transport.NewRetry(o.transport, transport.WithRetryPredicate(defaultRetryPredicate), transport.WithRetryStatusCodes(retryableStatusCodes...))

-	// Wrap this last to prevent transport.New from double-wrapping.
-	if o.userAgent != "" {
-		o.transport = transport.NewUserAgent(o.transport, o.userAgent)
+		// Wrap this last to prevent transport.New from double-wrapping.
+		if o.userAgent != "" {
+			o.transport = transport.NewUserAgent(o.transport, o.userAgent)
+		}
 	}

 	return o, nil
@@ -90,8 +170,10 @@ func makeOptions(target authn.Resource, opts ...Option) (*options, error) {

 // WithTransport is a functional option for overriding the default transport
 // for remote operations.
+// If transport.Wrapper is provided, this signals that the consumer does *not* want any further wrapping to occur.
+// i.e. logging, retry and useragent
 //
-// The default transport its http.DefaultTransport.
+// The default transport is DefaultTransport.
 func WithTransport(t http.RoundTripper) Option {
 	return func(o *options) error {
 		o.transport = t
@@ -101,6 +183,7 @@ func WithTransport(t http.RoundTripper) Option {

 // WithAuth is a functional option for overriding the default authenticator
 // for remote operations.
+// It is an error to use both WithAuth and WithAuthFromKeychain in the same Option set.
 //
 // The default authenticator is authn.Anonymous.
 func WithAuth(auth authn.Authenticator) Option {
@@ -113,6 +196,7 @@ func WithAuth(auth authn.Authenticator) Option {
 // WithAuthFromKeychain is a functional option for overriding the default
 // authenticator for remote operations, using an authn.Keychain to find
 // credentials.
+// It is an error to use both WithAuth and WithAuthFromKeychain in the same Option set.
 //
 // The default authenticator is authn.Anonymous.
 func WithAuthFromKeychain(keys authn.Keychain) Option {
@@ -193,3 +277,41 @@ func WithProgress(updates chan<- v1.Update) Option {
 		return nil
 	}
 }
+
+// WithPageSize sets the given size as the value of parameter 'n' in the request.
+//
+// To omit the `n` parameter entirely, use WithPageSize(0).
+// The default value is 1000.
+func WithPageSize(size int) Option {
+	return func(o *options) error {
+		o.pageSize = size
+		return nil
+	}
+}
+
+// WithRetryBackoff sets the httpBackoff for retry HTTP operations.
+func WithRetryBackoff(backoff Backoff) Option {
+	return func(o *options) error {
+		o.retryBackoff = backoff
+		return nil
+	}
+}
+
+// WithRetryPredicate sets the predicate for retry HTTP operations.
+func WithRetryPredicate(predicate retry.Predicate) Option {
+	return func(o *options) error {
+		o.retryPredicate = predicate
+		return nil
+	}
+}
+
+// WithFilter sets the filter querystring for HTTP operations.
+func WithFilter(key string, value string) Option {
+	return func(o *options) error {
+		if o.filter == nil {
+			o.filter = map[string]string{}
+		}
+		o.filter[key] = value
+		return nil
+	}
+}
69
vendor/github.com/google/go-containerregistry/pkg/v1/remote/progress.go
generated
vendored
Normal file
@@ -0,0 +1,69 @@
// Copyright 2022 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//	http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package remote

import (
	"io"
	"sync"
	"sync/atomic"

	v1 "github.com/google/go-containerregistry/pkg/v1"
)

type progress struct {
	sync.Mutex
	updates    chan<- v1.Update
	lastUpdate *v1.Update
}

func (p *progress) total(delta int64) {
	atomic.AddInt64(&p.lastUpdate.Total, delta)
}

func (p *progress) complete(delta int64) {
	p.Lock()
	defer p.Unlock()
	p.updates <- v1.Update{
		Total:    p.lastUpdate.Total,
		Complete: atomic.AddInt64(&p.lastUpdate.Complete, delta),
	}
}

func (p *progress) err(err error) error {
	if err != nil && p.updates != nil {
		p.updates <- v1.Update{Error: err}
	}
	return err
}

type progressReader struct {
	rc io.ReadCloser

	count    *int64 // number of bytes this reader has read, to support resetting on retry.
	progress *progress
}

func (r *progressReader) Read(b []byte) (int, error) {
	n, err := r.rc.Read(b)
	if err != nil {
		return n, err
	}
	atomic.AddInt64(r.count, int64(n))
	// TODO: warn/debug log if sending takes too long, or if sending is blocked while context is canceled.
	r.progress.complete(int64(n))
	return n, nil
}

func (r *progressReader) Close() error { return r.rc.Close() }
35
vendor/github.com/google/go-containerregistry/pkg/v1/remote/referrers.go
generated
vendored
Normal file
@@ -0,0 +1,35 @@
// Copyright 2023 Google LLC All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//	http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package remote

import (
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
)

// Referrers returns a list of descriptors that refer to the given manifest digest.
//
// The subject manifest doesn't have to exist in the registry for there to be descriptors that refer to it.
func Referrers(d name.Digest, options ...Option) (*v1.IndexManifest, error) {
	o, err := makeOptions(d.Context(), options...)
	if err != nil {
		return nil, err
	}
	f, err := makeFetcher(d, o)
	if err != nil {
		return nil, err
	}
	return f.fetchReferrers(o.context, o.filter, d)
}
47
vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/bearer.go
generated
vendored
@@ -17,8 +17,9 @@ package transport
 import (
 	"context"
 	"encoding/json"
+	"errors"
 	"fmt"
-	"io/ioutil"
+	"io"
 	"net"
 	"net/http"
 	"net/url"
@@ -86,26 +87,27 @@ func (bt *bearerTransport) RoundTrip(in *http.Request) (*http.Response, error) {

 	// If we hit a WWW-Authenticate challenge, it might be due to expired tokens or insufficient scope.
 	if challenges := authchallenge.ResponseChallenges(res); len(challenges) != 0 {
+		// close out old response, since we will not return it.
+		res.Body.Close()
+
+		newScopes := []string{}
 		for _, wac := range challenges {
 			// TODO(jonjohnsonjr): Should we also update "realm" or "service"?
-			if scope, ok := wac.Parameters["scope"]; ok {
-				// From https://tools.ietf.org/html/rfc6750#section-3
-				// The "scope" attribute is defined in Section 3.3 of [RFC6749]. The
-				// "scope" attribute is a space-delimited list of case-sensitive scope
-				// values indicating the required scope of the access token for
-				// accessing the requested resource.
-				scopes := strings.Split(scope, " ")
-
+			if want, ok := wac.Parameters["scope"]; ok {
 				// Add any scopes that we don't already request.
 				got := stringSet(bt.scopes)
-				for _, want := range scopes {
-					if _, ok := got[want]; !ok {
-						bt.scopes = append(bt.scopes, want)
-					}
+				if _, ok := got[want]; !ok {
+					newScopes = append(newScopes, want)
 				}
 			}
 		}

+		// Some registries seem to only look at the first scope parameter during a token exchange.
+		// If a request fails because it's missing a scope, we should put those at the beginning,
+		// otherwise the registry might just ignore it :/
+		newScopes = append(newScopes, bt.scopes...)
+		bt.scopes = newScopes
+
 		// TODO(jonjohnsonjr): Teach transport.Error about "error" and "error_description" from challenge.

 		// Retry the request to attempt to get a valid token.
@@ -139,7 +141,8 @@ func (bt *bearerTransport) refresh(ctx context.Context) error {
 		// the Username should be set to <token>, which indicates
 		// we are using an oauth flow.
 		content, err = bt.refreshOauth(ctx)
-		if terr, ok := err.(*Error); ok && terr.StatusCode == http.StatusNotFound {
+		var terr *Error
+		if errors.As(err, &terr) && terr.StatusCode == http.StatusNotFound {
 			// Note: Not all token servers implement oauth2.
 			// If the request to the endpoint returns 404 using the HTTP POST method,
 			// refer to Token Documentation for using the HTTP GET method supported by all token servers.
@@ -233,7 +236,9 @@ func (bt *bearerTransport) refreshOauth(ctx context.Context) ([]byte, error) {

 	v := url.Values{}
 	v.Set("scope", strings.Join(bt.scopes, " "))
-	v.Set("service", bt.service)
+	if bt.service != "" {
+		v.Set("service", bt.service)
+	}
 	v.Set("client_id", defaultUserAgent)
 	if auth.IdentityToken != "" {
 		v.Set("grant_type", "refresh_token")
@@ -263,11 +268,13 @@ func (bt *bearerTransport) refreshOauth(ctx context.Context) ([]byte, error) {
 	defer resp.Body.Close()

 	if err := CheckError(resp, http.StatusOK); err != nil {
-		logs.Warn.Printf("No matching credentials were found for %q", bt.registry)
+		if bt.basic == authn.Anonymous {
+			logs.Warn.Printf("No matching credentials were found for %q", bt.registry)
+		}
 		return nil, err
 	}

-	return ioutil.ReadAll(resp.Body)
+	return io.ReadAll(resp.Body)
 }

 // https://docs.docker.com/registry/spec/auth/token/
@@ -303,9 +310,11 @@ func (bt *bearerTransport) refreshBasic(ctx context.Context) ([]byte, error) {
 	defer resp.Body.Close()

 	if err := CheckError(resp, http.StatusOK); err != nil {
-		logs.Warn.Printf("No matching credentials were found for %q", bt.registry)
+		if bt.basic == authn.Anonymous {
+			logs.Warn.Printf("No matching credentials were found for %q", bt.registry)
+		}
 		return nil, err
 	}

-	return ioutil.ReadAll(resp.Body)
+	return io.ReadAll(resp.Body)
 }
66
vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/error.go
generated
vendored
@@ -17,28 +17,12 @@ package transport
 import (
 	"encoding/json"
 	"fmt"
-	"io/ioutil"
+	"io"
 	"net/http"
-	"net/url"
 	"strings"
-)

-// The set of query string keys that we expect to send as part of the registry
-// protocol. Anything else is potentially dangerous to leak, as it's probably
-// from a redirect. These redirects often included tokens or signed URLs.
-var paramAllowlist = map[string]struct{}{
-	// Token exchange
-	"scope":   {},
-	"service": {},
-	// Cross-repo mounting
-	"mount": {},
-	"from":  {},
-	// Layer PUT
-	"digest": {},
-	// Listing tags and catalog
-	"n":    {},
-	"last": {},
-}
+	"github.com/google/go-containerregistry/internal/redact"
+)

 // Error implements error to support the following error specification:
 // https://github.com/docker/distribution/blob/master/docs/spec/api.md#errors
@@ -46,10 +30,10 @@ type Error struct {
 	Errors []Diagnostic `json:"errors,omitempty"`
 	// The http status code returned.
 	StatusCode int
+	// The request that failed.
+	Request *http.Request
 	// The raw body if we couldn't understand it.
 	rawBody string
-	// The request that failed.
-	request *http.Request
 }

 // Check that Error implements error
@@ -58,8 +42,8 @@ var _ error = (*Error)(nil)
 // Error implements error
 func (e *Error) Error() string {
 	prefix := ""
-	if e.request != nil {
-		prefix = fmt.Sprintf("%s %s: ", e.request.Method, redactURL(e.request.URL))
+	if e.Request != nil {
+		prefix = fmt.Sprintf("%s %s: ", e.Request.Method, redact.URL(e.Request.URL))
 	}
 	return prefix + e.responseErr()
 }
@@ -68,7 +52,7 @@ func (e *Error) responseErr() string {
 	switch len(e.Errors) {
 	case 0:
 		if len(e.rawBody) == 0 {
-			if e.request != nil && e.request.Method == http.MethodHead {
+			if e.Request != nil && e.Request.Method == http.MethodHead {
 				return fmt.Sprintf("unexpected status code %d %s (HEAD responses have no body, use GET for details)", e.StatusCode, http.StatusText(e.StatusCode))
 			}
 			return fmt.Sprintf("unexpected status code %d %s", e.StatusCode, http.StatusText(e.StatusCode))
@@ -100,27 +84,11 @@ func (e *Error) Temporary() bool {
 	return true
 }

-// TODO(jonjohnsonjr): Consider moving to internal/redact.
-func redactURL(original *url.URL) *url.URL {
-	qs := original.Query()
-	for k, v := range qs {
-		for i := range v {
-			if _, ok := paramAllowlist[k]; !ok {
-				// key is not in the Allowlist
-				v[i] = "REDACTED"
-			}
-		}
-	}
-	redacted := *original
-	redacted.RawQuery = qs.Encode()
-	return &redacted
-}
-
 // Diagnostic represents a single error returned by a Docker registry interaction.
 type Diagnostic struct {
-	Code    ErrorCode   `json:"code"`
-	Message string      `json:"message,omitempty"`
-	Detail  interface{} `json:"detail,omitempty"`
+	Code    ErrorCode `json:"code"`
+	Message string    `json:"message,omitempty"`
+	Detail  any       `json:"detail,omitempty"`
 }

 // String stringifies the Diagnostic in the form: $Code: $Message[; $Detail]
@@ -154,12 +122,19 @@ const (
 	DeniedErrorCode          ErrorCode = "DENIED"
 	UnsupportedErrorCode     ErrorCode = "UNSUPPORTED"
 	TooManyRequestsErrorCode ErrorCode = "TOOMANYREQUESTS"
 	UnknownErrorCode         ErrorCode = "UNKNOWN"
+
+	// This isn't defined by either docker or OCI spec, but is defined by docker/distribution:
+	// https://github.com/distribution/distribution/blob/6a977a5a754baa213041443f841705888107362a/registry/api/errcode/register.go#L60
+	UnavailableErrorCode ErrorCode = "UNAVAILABLE"
 )

 // TODO: Include other error types.
 var temporaryErrorCodes = map[ErrorCode]struct{}{
 	BlobUploadInvalidErrorCode: {},
 	TooManyRequestsErrorCode:   {},
 	UnknownErrorCode:           {},
+	UnavailableErrorCode:       {},
 }

 var temporaryStatusCodes = map[int]struct{}{
@@ -167,6 +142,7 @@ var temporaryStatusCodes = map[int]struct{}{
 	http.StatusInternalServerError: {},
 	http.StatusBadGateway:          {},
 	http.StatusServiceUnavailable:  {},
+	http.StatusGatewayTimeout:      {},
 }

 // CheckError returns a structured error if the response status is not in codes.
@@ -177,7 +153,7 @@ func CheckError(resp *http.Response, codes ...int) error {
 			return nil
 		}
 	}
-	b, err := ioutil.ReadAll(resp.Body)
+	b, err := io.ReadAll(resp.Body)
 	if err != nil {
 		return err
 	}
@@ -191,7 +167,7 @@ func CheckError(resp *http.Response, codes ...int) error {

 	structuredError.rawBody = string(b)
 	structuredError.StatusCode = resp.StatusCode
-	structuredError.request = resp.Request
+	structuredError.Request = resp.Request

 	return structuredError
 }
222 vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/ping.go generated vendored
@@ -19,11 +19,12 @@ import (
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"strings"
	"time"

	authchallenge "github.com/docker/distribution/registry/client/auth/challenge"
	"github.com/google/go-containerregistry/pkg/logs"
	"github.com/google/go-containerregistry/pkg/name"
)

@@ -35,6 +36,9 @@ const (
	bearer challenge = "bearer"
)

// 300ms is the default fallback period for go's DNS dialer but we could make this configurable.
var fallbackDelay = 300 * time.Millisecond

type pingResp struct {
	challenge challenge

@@ -50,27 +54,7 @@ func (c challenge) Canonical() challenge {
	return challenge(strings.ToLower(string(c)))
}

func parseChallenge(suffix string) map[string]string {
	kv := make(map[string]string)
	for _, token := range strings.Split(suffix, ",") {
		// Trim any whitespace around each token.
		token = strings.Trim(token, " ")

		// Break the token into a key/value pair
		if parts := strings.SplitN(token, "=", 2); len(parts) == 2 {
			// Unquote the value, if it is quoted.
			kv[parts[0]] = strings.Trim(parts[1], `"`)
		} else {
			// If there was only one part, treat is as a key with an empty value
			kv[token] = ""
		}
	}
	return kv
}

func ping(ctx context.Context, reg name.Registry, t http.RoundTripper) (*pingResp, error) {
	client := http.Client{Transport: t}

	// This first attempts to use "https" for every request, falling back to http
	// if the registry matches our localhost heuristic or if it is intentionally
	// set to insecure via name.NewInsecureRegistry.
@@ -78,52 +62,166 @@ func ping(ctx context.Context, reg name.Registry, t http.RoundTripper) (*pingRes
	if reg.Scheme() == "http" {
		schemes = append(schemes, "http")
	}
	if len(schemes) == 1 {
		return pingSingle(ctx, reg, t, schemes[0])
	}
	return pingParallel(ctx, reg, t, schemes)
}

	var errs []string
	for _, scheme := range schemes {
		url := fmt.Sprintf("%s://%s/v2/", scheme, reg.Name())
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req.WithContext(ctx))
		if err != nil {
			errs = append(errs, err.Error())
			// Potentially retry with http.
			continue
		}
		defer func() {
			// By draining the body, make sure to reuse the connection made by
			// the ping for the following access to the registry
			io.Copy(ioutil.Discard, resp.Body)
			resp.Body.Close()
		}()
func pingSingle(ctx context.Context, reg name.Registry, t http.RoundTripper, scheme string) (*pingResp, error) {
	client := http.Client{Transport: t}
	url := fmt.Sprintf("%s://%s/v2/", scheme, reg.Name())
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := client.Do(req.WithContext(ctx))
	if err != nil {
		return nil, err
	}
	defer func() {
		// By draining the body, make sure to reuse the connection made by
		// the ping for the following access to the registry
		io.Copy(io.Discard, resp.Body)
		resp.Body.Close()
	}()

		switch resp.StatusCode {
		case http.StatusOK:
			// If we get a 200, then no authentication is needed.
	switch resp.StatusCode {
	case http.StatusOK:
		// If we get a 200, then no authentication is needed.
		return &pingResp{
			challenge: anonymous,
			scheme:    scheme,
		}, nil
	case http.StatusUnauthorized:
		if challenges := authchallenge.ResponseChallenges(resp); len(challenges) != 0 {
			// If we hit more than one, let's try to find one that we know how to handle.
			wac := pickFromMultipleChallenges(challenges)
			return &pingResp{
				challenge: anonymous,
				scheme:    scheme,
				challenge:  challenge(wac.Scheme).Canonical(),
				parameters: wac.Parameters,
				scheme:     scheme,
			}, nil
		case http.StatusUnauthorized:
			if challenges := authchallenge.ResponseChallenges(resp); len(challenges) != 0 {
				// If we hit more than one, I'm not even sure what to do.
				wac := challenges[0]
				return &pingResp{
					challenge:  challenge(wac.Scheme).Canonical(),
					parameters: wac.Parameters,
					scheme:     scheme,
				}, nil
			}
		// Otherwise, just return the challenge without parameters.
		return &pingResp{
			challenge: challenge(resp.Header.Get("WWW-Authenticate")).Canonical(),
			scheme:    scheme,
		}, nil
	default:
		return nil, CheckError(resp, http.StatusOK, http.StatusUnauthorized)
	}
}

// Based on the golang happy eyeballs dialParallel impl in net/dial.go.
func pingParallel(ctx context.Context, reg name.Registry, t http.RoundTripper, schemes []string) (*pingResp, error) {
	returned := make(chan struct{})
	defer close(returned)

	type pingResult struct {
		*pingResp
		error
		primary bool
		done    bool
	}

	results := make(chan pingResult)

	startRacer := func(ctx context.Context, scheme string) {
		pr, err := pingSingle(ctx, reg, t, scheme)
		select {
		case results <- pingResult{pingResp: pr, error: err, primary: scheme == "https", done: true}:
		case <-returned:
			if pr != nil {
				logs.Debug.Printf("%s lost race", scheme)
			}
		}
	}

	var primary, fallback pingResult

	primaryCtx, primaryCancel := context.WithCancel(ctx)
	defer primaryCancel()
	go startRacer(primaryCtx, schemes[0])

	fallbackTimer := time.NewTimer(fallbackDelay)
	defer fallbackTimer.Stop()

	for {
		select {
		case <-fallbackTimer.C:
			fallbackCtx, fallbackCancel := context.WithCancel(ctx)
			defer fallbackCancel()
			go startRacer(fallbackCtx, schemes[1])

		case res := <-results:
			if res.error == nil {
				return res.pingResp, nil
			}
			if res.primary {
				primary = res
			} else {
				fallback = res
			}
			if primary.done && fallback.done {
				return nil, multierrs([]error{primary.error, fallback.error})
			}
			if res.primary && fallbackTimer.Stop() {
				// Primary failed and we haven't started the fallback,
				// reset time to start fallback immediately.
				fallbackTimer.Reset(0)
			}
			// Otherwise, just return the challenge without parameters.
			return &pingResp{
				challenge: challenge(resp.Header.Get("WWW-Authenticate")).Canonical(),
				scheme:    scheme,
			}, nil
		default:
			return nil, CheckError(resp, http.StatusOK, http.StatusUnauthorized)
		}
	}
	return nil, errors.New(strings.Join(errs, "; "))
}

func pickFromMultipleChallenges(challenges []authchallenge.Challenge) authchallenge.Challenge {
	// It might happen there are multiple www-authenticate headers, e.g. `Negotiate` and `Basic`.
	// Picking simply the first one could result eventually in `unrecognized challenge` error,
	// that's why we're looping through the challenges in search for one that can be handled.
	allowedSchemes := []string{"basic", "bearer"}

	for _, wac := range challenges {
		currentScheme := strings.ToLower(wac.Scheme)
		for _, allowed := range allowedSchemes {
			if allowed == currentScheme {
				return wac
			}
		}
	}

	return challenges[0]
}

type multierrs []error

func (m multierrs) Error() string {
	var b strings.Builder
	hasWritten := false
	for _, err := range m {
		if hasWritten {
			b.WriteString("; ")
		}
		hasWritten = true
		b.WriteString(err.Error())
	}
	return b.String()
}

func (m multierrs) As(target any) bool {
	for _, err := range m {
		if errors.As(err, target) {
			return true
		}
	}
	return false
}

func (m multierrs) Is(target error) bool {
	for _, err := range m {
		if errors.Is(err, target) {
			return true
		}
	}
	return false
}
29 vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/retry.go generated vendored
@@ -21,12 +21,12 @@ import (
	"github.com/google/go-containerregistry/internal/retry"
)

// Sleep for 0.1, 0.3, 0.9, 2.7 seconds. This should cover networking blips.
// Sleep for 0.1 then 0.3 seconds. This should cover networking blips.
var defaultBackoff = retry.Backoff{
	Duration: 100 * time.Millisecond,
	Factor:   3.0,
	Jitter:   0.1,
	Steps:    5,
	Steps:    3,
}

var _ http.RoundTripper = (*retryTransport)(nil)
@@ -36,6 +36,7 @@ type retryTransport struct {
	inner     http.RoundTripper
	backoff   retry.Backoff
	predicate retry.Predicate
	codes     []int
}

// Option is a functional option for retryTransport.
@@ -44,10 +45,14 @@ type Option func(*options)
type options struct {
	backoff   retry.Backoff
	predicate retry.Predicate
	codes     []int
}

// Backoff is an alias of retry.Backoff to expose this configuration option to consumers of this lib
type Backoff = retry.Backoff

// WithRetryBackoff sets the backoff for retry operations.
func WithRetryBackoff(backoff retry.Backoff) Option {
func WithRetryBackoff(backoff Backoff) Option {
	return func(o *options) {
		o.backoff = backoff
	}
@@ -60,6 +65,13 @@ func WithRetryPredicate(predicate func(error) bool) Option {
	}
}

// WithRetryStatusCodes sets which http response codes will be retried.
func WithRetryStatusCodes(codes ...int) Option {
	return func(o *options) {
		o.codes = codes
	}
}

// NewRetry returns a transport that retries errors.
func NewRetry(inner http.RoundTripper, opts ...Option) http.RoundTripper {
	o := &options{
@@ -75,12 +87,23 @@ func NewRetry(inner http.RoundTripper, opts ...Option) http.RoundTripper {
		inner:     inner,
		backoff:   o.backoff,
		predicate: o.predicate,
		codes:     o.codes,
	}
}

func (t *retryTransport) RoundTrip(in *http.Request) (out *http.Response, err error) {
	roundtrip := func() error {
		out, err = t.inner.RoundTrip(in)
		if !retry.Ever(in.Context()) {
			return nil
		}
		if out != nil {
			for _, code := range t.codes {
				if out.StatusCode == code {
					return CheckError(out)
				}
			}
		}
		return err
	}
	retry.Retry(roundtrip, t.predicate, t.backoff)
39 vendor/github.com/google/go-containerregistry/pkg/v1/remote/transport/transport.go generated vendored
@@ -27,15 +27,24 @@ import (
// setup to authenticate with the remote registry "reg", in the capacity
// laid out by the specified scopes.
//
// TODO(jonjohnsonjr): Deprecate this.
// Deprecated: Use NewWithContext.
func New(reg name.Registry, auth authn.Authenticator, t http.RoundTripper, scopes []string) (http.RoundTripper, error) {
	return NewWithContext(context.Background(), reg, auth, t, scopes)
}

// NewWithContext returns a new RoundTripper based on the provided RoundTripper that has been
// setup to authenticate with the remote registry "reg", in the capacity
// set up to authenticate with the remote registry "reg", in the capacity
// laid out by the specified scopes.
// In case the RoundTripper is already of the type Wrapper it assumes
// authentication was already done prior to this call, so it just returns
// the provided RoundTripper without further action
func NewWithContext(ctx context.Context, reg name.Registry, auth authn.Authenticator, t http.RoundTripper, scopes []string) (http.RoundTripper, error) {
	// When the transport provided is of the type Wrapper this function assumes that the caller already
	// executed the necessary login and check.
	switch t.(type) {
	case *Wrapper:
		return t, nil
	}
	// The handshake:
	//  1. Use "t" to ping() the registry for the authentication challenge.
	//
@@ -68,22 +77,15 @@ func NewWithContext(ctx context.Context, reg name.Registry, auth authn.Authentic
	}

	switch pr.challenge.Canonical() {
	case anonymous:
		return t, nil
	case basic:
		return &basicTransport{inner: t, auth: auth, target: reg.RegistryStr()}, nil
	case anonymous, basic:
		return &Wrapper{&basicTransport{inner: t, auth: auth, target: reg.RegistryStr()}}, nil
	case bearer:
		// We require the realm, which tells us where to send our Basic auth to turn it into Bearer auth.
		realm, ok := pr.parameters["realm"]
		if !ok {
			return nil, fmt.Errorf("malformed www-authenticate, missing realm: %v", pr.parameters)
		}
		service, ok := pr.parameters["service"]
		if !ok {
			// If the service parameter is not specified, then default it to the registry
			// with which we are talking.
			service = reg.String()
		}
		service := pr.parameters["service"]
		bt := &bearerTransport{
			inner: t,
			basic: auth,
@@ -96,8 +98,19 @@ func NewWithContext(ctx context.Context, reg name.Registry, auth authn.Authentic
		if err := bt.refresh(ctx); err != nil {
			return nil, err
		}
		return bt, nil
		return &Wrapper{bt}, nil
	default:
		return nil, fmt.Errorf("unrecognized challenge: %s", pr.challenge)
	}
}

// Wrapper results in *not* wrapping supplied transport with additional logic such as retries, useragent and debug logging
// Consumers are opt-ing into providing their own transport without any additional wrapping.
type Wrapper struct {
	inner http.RoundTripper
}

// RoundTrip delegates to the inner RoundTripper
func (w *Wrapper) RoundTrip(in *http.Request) (*http.Response, error) {
	return w.inner.RoundTrip(in)
}
450 vendor/github.com/google/go-containerregistry/pkg/v1/remote/write.go generated vendored
@@ -17,15 +17,14 @@ package remote
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"github.com/google/go-containerregistry/internal/redact"
|
||||
"github.com/google/go-containerregistry/internal/retry"
|
||||
@@ -51,20 +50,21 @@ func Write(ref name.Reference, img v1.Image, options ...Option) (rerr error) {
|
||||
return err
|
||||
}
|
||||
|
||||
var lastUpdate *v1.Update
|
||||
var p *progress
|
||||
if o.updates != nil {
|
||||
lastUpdate = &v1.Update{}
|
||||
lastUpdate.Total, err = countImage(img, o.allowNondistributableArtifacts)
|
||||
p = &progress{updates: o.updates}
|
||||
p.lastUpdate = &v1.Update{}
|
||||
p.lastUpdate.Total, err = countImage(img, o.allowNondistributableArtifacts)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer close(o.updates)
|
||||
defer func() { sendError(o.updates, rerr) }()
|
||||
defer func() { _ = p.err(rerr) }()
|
||||
}
|
||||
return writeImage(ref, img, o, lastUpdate)
|
||||
return writeImage(o.context, ref, img, o, p)
|
||||
}
|
||||
|
||||
func writeImage(ref name.Reference, img v1.Image, o *options, lastUpdate *v1.Update) error {
|
||||
func writeImage(ctx context.Context, ref name.Reference, img v1.Image, o *options, progress *progress) error {
|
||||
ls, err := img.Layers()
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -75,21 +75,21 @@ func writeImage(ref name.Reference, img v1.Image, o *options, lastUpdate *v1.Upd
|
||||
return err
|
||||
}
|
||||
w := writer{
|
||||
repo: ref.Context(),
|
||||
client: &http.Client{Transport: tr},
|
||||
context: o.context,
|
||||
updates: o.updates,
|
||||
lastUpdate: lastUpdate,
|
||||
repo: ref.Context(),
|
||||
client: &http.Client{Transport: tr},
|
||||
progress: progress,
|
||||
backoff: o.retryBackoff,
|
||||
predicate: o.retryPredicate,
|
||||
}
|
||||
|
||||
// Upload individual blobs and collect any errors.
|
||||
blobChan := make(chan v1.Layer, 2*o.jobs)
|
||||
g, ctx := errgroup.WithContext(o.context)
|
||||
g, gctx := errgroup.WithContext(ctx)
|
||||
for i := 0; i < o.jobs; i++ {
|
||||
// Start N workers consuming blobs to upload.
|
||||
g.Go(func() error {
|
||||
for b := range blobChan {
|
||||
if err := w.uploadOne(b); err != nil {
|
||||
if err := w.uploadOne(gctx, b); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
@@ -128,15 +128,12 @@ func writeImage(ref name.Reference, img v1.Image, o *options, lastUpdate *v1.Upd
|
||||
}
|
||||
select {
|
||||
case blobChan <- l:
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
case <-gctx.Done():
|
||||
return gctx.Err()
|
||||
}
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err := g.Wait(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if l, err := partial.ConfigLayer(img); err != nil {
|
||||
// We can't read the ConfigLayer, possibly because of streaming layers,
|
||||
@@ -151,13 +148,13 @@ func writeImage(ref name.Reference, img v1.Image, o *options, lastUpdate *v1.Upd
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := w.uploadOne(l); err != nil {
|
||||
if err := w.uploadOne(ctx, l); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
// We *can* read the ConfigLayer, so upload it concurrently with the layers.
|
||||
g.Go(func() error {
|
||||
return w.uploadOne(l)
|
||||
return w.uploadOne(gctx, l)
|
||||
})
|
||||
|
||||
// Wait for the layers + config.
|
||||
@@ -168,24 +165,17 @@ func writeImage(ref name.Reference, img v1.Image, o *options, lastUpdate *v1.Upd
|
||||
|
||||
// With all of the constituent elements uploaded, upload the manifest
|
||||
// to commit the image.
|
||||
return w.commitManifest(img, ref)
|
||||
return w.commitManifest(ctx, img, ref)
|
||||
}
|
||||
|
||||
// writer writes the elements of an image to a remote image reference.
|
||||
type writer struct {
|
||||
repo name.Repository
|
||||
client *http.Client
|
||||
context context.Context
|
||||
repo name.Repository
|
||||
client *http.Client
|
||||
|
||||
updates chan<- v1.Update
|
||||
lastUpdate *v1.Update
|
||||
}
|
||||
|
||||
func sendError(ch chan<- v1.Update, err error) error {
|
||||
if err != nil && ch != nil {
|
||||
ch <- v1.Update{Error: err}
|
||||
}
|
||||
return err
|
||||
progress *progress
|
||||
backoff Backoff
|
||||
predicate retry.Predicate
|
||||
}
|
||||
|
||||
// url returns a url.Url for the specified path in the context of this remote image reference.
|
||||
@@ -217,7 +207,7 @@ func (w *writer) nextLocation(resp *http.Response) (string, error) {
|
||||
// HEAD request to the blob store API. GCR performs an existence check on the
|
||||
// initiation if "mount" is specified, even if no "from" sources are specified.
|
||||
// However, this is not broadly applicable to all registries, e.g. ECR.
|
||||
func (w *writer) checkExistingBlob(h v1.Hash) (bool, error) {
|
||||
func (w *writer) checkExistingBlob(ctx context.Context, h v1.Hash) (bool, error) {
|
||||
u := w.url(fmt.Sprintf("/v2/%s/blobs/%s", w.repo.RepositoryStr(), h.String()))
|
||||
|
||||
req, err := http.NewRequest(http.MethodHead, u.String(), nil)
|
||||
@@ -225,7 +215,7 @@ func (w *writer) checkExistingBlob(h v1.Hash) (bool, error) {
|
||||
return false, err
|
||||
}
|
||||
|
||||
resp, err := w.client.Do(req.WithContext(w.context))
|
||||
resp, err := w.client.Do(req.WithContext(ctx))
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
@@ -240,7 +230,7 @@ func (w *writer) checkExistingBlob(h v1.Hash) (bool, error) {
|
||||
|
||||
// checkExistingManifest checks if a manifest exists already in the repository
|
||||
// by making a HEAD request to the manifest API.
|
||||
func (w *writer) checkExistingManifest(h v1.Hash, mt types.MediaType) (bool, error) {
|
||||
func (w *writer) checkExistingManifest(ctx context.Context, h v1.Hash, mt types.MediaType) (bool, error) {
|
||||
u := w.url(fmt.Sprintf("/v2/%s/manifests/%s", w.repo.RepositoryStr(), h.String()))
|
||||
|
||||
req, err := http.NewRequest(http.MethodHead, u.String(), nil)
|
||||
@@ -249,7 +239,7 @@ func (w *writer) checkExistingManifest(h v1.Hash, mt types.MediaType) (bool, err
|
||||
}
|
||||
req.Header.Set("Accept", string(mt))
|
||||
|
||||
resp, err := w.client.Do(req.WithContext(w.context))
|
||||
resp, err := w.client.Do(req.WithContext(ctx))
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
@@ -268,13 +258,16 @@ func (w *writer) checkExistingManifest(h v1.Hash, mt types.MediaType) (bool, err
|
||||
// On success, the layer was either mounted (nothing more to do) or a blob
|
||||
// upload was initiated and the body of that blob should be sent to the returned
|
||||
// location.
|
||||
func (w *writer) initiateUpload(from, mount string) (location string, mounted bool, err error) {
|
||||
func (w *writer) initiateUpload(ctx context.Context, from, mount, origin string) (location string, mounted bool, err error) {
|
||||
u := w.url(fmt.Sprintf("/v2/%s/blobs/uploads/", w.repo.RepositoryStr()))
|
||||
uv := url.Values{}
|
||||
if mount != "" && from != "" {
|
||||
// Quay will fail if we specify a "mount" without a "from".
|
||||
uv["mount"] = []string{mount}
|
||||
uv["from"] = []string{from}
|
||||
uv.Set("mount", mount)
|
||||
uv.Set("from", from)
|
||||
if origin != "" {
|
||||
uv.Set("origin", origin)
|
||||
}
|
||||
}
|
||||
u.RawQuery = uv.Encode()
|
||||
|
||||
@@ -284,13 +277,18 @@ func (w *writer) initiateUpload(from, mount string) (location string, mounted bo
|
||||
return "", false, err
|
||||
}
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
resp, err := w.client.Do(req.WithContext(w.context))
|
||||
resp, err := w.client.Do(req.WithContext(ctx))
|
||||
if err != nil {
|
||||
return "", false, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if err := transport.CheckError(resp, http.StatusCreated, http.StatusAccepted); err != nil {
|
||||
if origin != "" && origin != w.repo.RegistryStr() {
|
||||
// https://github.com/google/go-containerregistry/issues/1404
|
||||
logs.Warn.Printf("retrying without mount: %v", err)
|
||||
return w.initiateUpload(ctx, "", "", "")
|
||||
}
|
||||
return "", false, err
|
||||
}
|
||||
|
||||
@@ -308,46 +306,34 @@ func (w *writer) initiateUpload(from, mount string) (location string, mounted bo
|
||||
}
|
||||
}
|
||||
|
||||
type progressReader struct {
|
||||
rc io.ReadCloser
|
||||
|
||||
count *int64 // number of bytes this reader has read, to support resetting on retry.
|
||||
updates chan<- v1.Update
|
||||
lastUpdate *v1.Update
|
||||
}
|
||||
|
||||
func (r *progressReader) Read(b []byte) (int, error) {
|
||||
n, err := r.rc.Read(b)
|
||||
if err != nil {
|
||||
return n, err
|
||||
}
|
||||
atomic.AddInt64(r.count, int64(n))
|
||||
// TODO: warn/debug log if sending takes too long, or if sending is blocked while context is cancelled.
|
||||
r.updates <- v1.Update{
|
||||
Total: r.lastUpdate.Total,
|
||||
Complete: atomic.AddInt64(&r.lastUpdate.Complete, int64(n)),
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
|
||||
func (r *progressReader) Close() error { return r.rc.Close() }
|
||||
|
||||
// streamBlob streams the contents of the blob to the specified location.
|
||||
// On failure, this will return an error. On success, this will return the location
|
||||
// header indicating how to commit the streamed blob.
|
||||
func (w *writer) streamBlob(ctx context.Context, blob io.ReadCloser, streamLocation string) (commitLocation string, rerr error) {
|
||||
func (w *writer) streamBlob(ctx context.Context, layer v1.Layer, streamLocation string) (commitLocation string, rerr error) {
|
||||
reset := func() {}
|
||||
defer func() {
|
||||
if rerr != nil {
|
||||
reset()
|
||||
}
|
||||
}()
|
||||
if w.updates != nil {
|
||||
blob, err := layer.Compressed()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
getBody := layer.Compressed
|
||||
if w.progress != nil {
|
||||
var count int64
|
||||
blob = &progressReader{rc: blob, updates: w.updates, lastUpdate: w.lastUpdate, count: &count}
|
||||
blob = &progressReader{rc: blob, progress: w.progress, count: &count}
|
||||
getBody = func() (io.ReadCloser, error) {
|
||||
blob, err := layer.Compressed()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &progressReader{rc: blob, progress: w.progress, count: &count}, nil
|
||||
}
|
||||
reset = func() {
|
||||
atomic.AddInt64(&w.lastUpdate.Complete, -count)
|
||||
w.updates <- *w.lastUpdate
|
||||
w.progress.complete(-count)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -355,6 +341,11 @@ func (w *writer) streamBlob(ctx context.Context, blob io.ReadCloser, streamLocat
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if _, ok := layer.(*stream.Layer); !ok {
|
||||
// We can't retry streaming layers.
|
||||
req.GetBody = getBody
|
||||
}
|
||||
req.Header.Set("Content-Type", "application/octet-stream")
|
||||
|
||||
resp, err := w.client.Do(req.WithContext(ctx))
|
||||
if err != nil {
|
||||
@@ -373,7 +364,7 @@ func (w *writer) streamBlob(ctx context.Context, blob io.ReadCloser, streamLocat
|
||||
|
||||
// commitBlob commits this blob by sending a PUT to the location returned from
|
||||
// streaming the blob.
|
||||
func (w *writer) commitBlob(location, digest string) error {
|
||||
func (w *writer) commitBlob(ctx context.Context, location, digest string) error {
|
||||
u, err := url.Parse(location)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -386,8 +377,9 @@ func (w *writer) commitBlob(location, digest string) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
req.Header.Set("Content-Type", "application/octet-stream")
|
||||
|
||||
resp, err := w.client.Do(req.WithContext(w.context))
|
||||
resp, err := w.client.Do(req.WithContext(ctx))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
@@ -398,57 +390,42 @@ func (w *writer) commitBlob(location, digest string) error {
|
||||
|
||||
// incrProgress increments and sends a progress update, if WithProgress is used.
|
||||
func (w *writer) incrProgress(written int64) {
|
||||
if w.updates == nil {
|
||||
if w.progress == nil {
|
||||
return
|
||||
}
|
||||
w.updates <- v1.Update{
|
||||
Total: w.lastUpdate.Total,
|
||||
Complete: atomic.AddInt64(&w.lastUpdate.Complete, int64(written)),
|
||||
}
|
||||
w.progress.complete(written)
|
||||
}
|
||||
|
||||
// uploadOne performs a complete upload of a single layer.
|
||||
func (w *writer) uploadOne(l v1.Layer) error {
|
||||
var from, mount string
|
||||
if h, err := l.Digest(); err == nil {
|
||||
// If we know the digest, this isn't a streaming layer. Do an existence
|
||||
// check so we can skip uploading the layer if possible.
|
||||
existing, err := w.checkExistingBlob(h)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if existing {
|
||||
size, err := l.Size()
|
||||
func (w *writer) uploadOne(ctx context.Context, l v1.Layer) error {
|
||||
tryUpload := func() error {
|
||||
ctx := retry.Never(ctx)
|
||||
var from, mount, origin string
|
||||
if h, err := l.Digest(); err == nil {
|
||||
// If we know the digest, this isn't a streaming layer. Do an existence
|
||||
// check so we can skip uploading the layer if possible.
|
||||
existing, err := w.checkExistingBlob(ctx, h)
|
||||
if err != nil {
|
||||
			return err
		}
		w.incrProgress(size)
		logs.Progress.Printf("existing blob: %v", h)
		return nil
	}
	if existing {
		size, err := l.Size()
		if err != nil {
			return err
		}
		w.incrProgress(size)
		logs.Progress.Printf("existing blob: %v", h)
		return nil
	}

		mount = h.String()
	}
	if ml, ok := l.(*MountableLayer); ok {
		if w.repo.RegistryStr() == ml.Reference.Context().RegistryStr() {
			mount = h.String()
		}
	if ml, ok := l.(*MountableLayer); ok {
		from = ml.Reference.Context().RepositoryStr()
		origin = ml.Reference.Context().RegistryStr()
	}
	}

	ctx := w.context

	shouldRetry := func(err error) bool {
		// Various failure modes here, as we're often reading from and writing to
		// the network.
		if retry.IsTemporary(err) || errors.Is(err, io.ErrUnexpectedEOF) || errors.Is(err, syscall.EPIPE) {
			logs.Warn.Printf("retrying %v", err)
			return true
		}
		return false
	}

	tryUpload := func() error {
		location, mounted, err := w.initiateUpload(from, mount)
		location, mounted, err := w.initiateUpload(ctx, from, mount, origin)
		if err != nil {
			return err
		} else if mounted {
@@ -476,11 +453,7 @@ func (w *writer) uploadOne(l v1.Layer) error {
			ctx = redact.NewContext(ctx, "omitting binary blobs from logs")
		}

		blob, err := l.Compressed()
		if err != nil {
			return err
		}
		location, err = w.streamBlob(ctx, blob, location)
		location, err = w.streamBlob(ctx, l, location)
		if err != nil {
			return err
		}
@@ -491,29 +464,21 @@ func (w *writer) uploadOne(l v1.Layer) error {
		}
		digest := h.String()

		if err := w.commitBlob(location, digest); err != nil {
		if err := w.commitBlob(ctx, location, digest); err != nil {
			return err
		}
		logs.Progress.Printf("pushed blob: %s", digest)
		return nil
	}

	// Try this three times, waiting 1s after first failure, 3s after second.
	backoff := retry.Backoff{
		Duration: 1.0 * time.Second,
		Factor:   3.0,
		Jitter:   0.1,
		Steps:    3,
	}

	return retry.Retry(tryUpload, shouldRetry, backoff)
	return retry.Retry(tryUpload, w.predicate, w.backoff)
}

type withLayer interface {
	Layer(v1.Hash) (v1.Layer, error)
}

func (w *writer) writeIndex(ref name.Reference, ii v1.ImageIndex, options ...Option) error {
func (w *writer) writeIndex(ctx context.Context, ref name.Reference, ii v1.ImageIndex, options ...Option) error {
	index, err := ii.IndexManifest()
	if err != nil {
		return err
@@ -527,7 +492,7 @@ func (w *writer) writeIndex(ref name.Reference, ii v1.ImageIndex, options ...Opt
	// TODO(#803): Pipe through remote.WithJobs and upload these in parallel.
	for _, desc := range index.Manifests {
		ref := ref.Context().Digest(desc.Digest.String())
		exists, err := w.checkExistingManifest(desc.Digest, desc.MediaType)
		exists, err := w.checkExistingManifest(ctx, desc.Digest, desc.MediaType)
		if err != nil {
			return err
		}
@@ -542,7 +507,7 @@ func (w *writer) writeIndex(ref name.Reference, ii v1.ImageIndex, options ...Opt
			if err != nil {
				return err
			}
			if err := w.writeIndex(ref, ii); err != nil {
			if err := w.writeIndex(ctx, ref, ii, options...); err != nil {
				return err
			}
		case types.OCIManifestSchema1, types.DockerManifestSchema2:
@@ -550,7 +515,7 @@ func (w *writer) writeIndex(ref name.Reference, ii v1.ImageIndex, options ...Opt
			if err != nil {
				return err
			}
			if err := writeImage(ref, img, o, w.lastUpdate); err != nil {
			if err := writeImage(ctx, ref, img, o, w.progress); err != nil {
				return err
			}
		default:
@@ -560,7 +525,7 @@ func (w *writer) writeIndex(ref name.Reference, ii v1.ImageIndex, options ...Opt
			if err != nil {
				return err
			}
			if err := w.uploadOne(layer); err != nil {
			if err := w.uploadOne(ctx, layer); err != nil {
				return err
			}
		}
@@ -569,7 +534,7 @@ func (w *writer) writeIndex(ref name.Reference, ii v1.ImageIndex, options ...Opt

	// With all of the constituent elements uploaded, upload the manifest
	// to commit the image.
	return w.commitManifest(ii, ref)
	return w.commitManifest(ctx, ii, ref)
}

type withMediaType interface {
@@ -614,36 +579,166 @@ func unpackTaggable(t Taggable) ([]byte, *v1.Descriptor, error) {
	}, nil
}

// commitManifest does a PUT of the image's manifest.
func (w *writer) commitManifest(t Taggable, ref name.Reference) error {
	raw, desc, err := unpackTaggable(t)
// commitSubjectReferrers is responsible for updating the fallback tag manifest to track descriptors referring to a subject for registries that don't yet support the Referrers API.
// TODO: use conditional requests to avoid race conditions
func (w *writer) commitSubjectReferrers(ctx context.Context, sub name.Digest, add v1.Descriptor) error {
	// Check if the registry supports Referrers API.
	// TODO: This should be done once per registry, not once per subject.
	u := w.url(fmt.Sprintf("/v2/%s/referrers/%s", w.repo.RepositoryStr(), sub.DigestStr()))
	req, err := http.NewRequest(http.MethodGet, u.String(), nil)
	if err != nil {
		return err
	}

	u := w.url(fmt.Sprintf("/v2/%s/manifests/%s", w.repo.RepositoryStr(), ref.Identifier()))

	// Make the request to PUT the serialized manifest
	req, err := http.NewRequest(http.MethodPut, u.String(), bytes.NewBuffer(raw))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", string(desc.MediaType))

	resp, err := w.client.Do(req.WithContext(w.context))
	req.Header.Set("Accept", string(types.OCIImageIndex))
	resp, err := w.client.Do(req.WithContext(ctx))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if err := transport.CheckError(resp, http.StatusOK, http.StatusCreated, http.StatusAccepted); err != nil {
	if err := transport.CheckError(resp, http.StatusOK, http.StatusNotFound, http.StatusBadRequest); err != nil {
		return err
	}
	if resp.StatusCode == http.StatusOK {
		// The registry supports Referrers API. The registry is responsible for updating the referrers list.
		return nil
	}

	// The registry doesn't support Referrers API, we need to update the manifest tagged with the fallback tag.
	// Make the request to GET the current manifest.
	t := fallbackTag(sub)
	u = w.url(fmt.Sprintf("/v2/%s/manifests/%s", w.repo.RepositoryStr(), t.Identifier()))
	req, err = http.NewRequest(http.MethodGet, u.String(), nil)
	if err != nil {
		return err
	}
	req.Header.Set("Accept", string(types.OCIImageIndex))
	resp, err = w.client.Do(req.WithContext(ctx))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var im v1.IndexManifest
	if err := transport.CheckError(resp, http.StatusOK, http.StatusNotFound); err != nil {
		return err
	} else if resp.StatusCode == http.StatusNotFound {
		// Not found just means there are no attachments. Start with an empty index.
		im = v1.IndexManifest{
			SchemaVersion: 2,
			MediaType:     types.OCIImageIndex,
			Manifests:     []v1.Descriptor{add},
		}
	} else {
		if err := json.NewDecoder(resp.Body).Decode(&im); err != nil {
			return err
		}
		if im.SchemaVersion != 2 {
			return fmt.Errorf("fallback tag manifest is not a schema version 2: %d", im.SchemaVersion)
		}
		if im.MediaType != types.OCIImageIndex {
			return fmt.Errorf("fallback tag manifest is not an OCI image index: %s", im.MediaType)
		}
		for _, desc := range im.Manifests {
			if desc.Digest == add.Digest {
				// The digest is already attached, nothing to do.
				logs.Progress.Printf("fallback tag %s already had referrer", t.Identifier())
				return nil
			}
		}
		// Append the new descriptor to the index.
		im.Manifests = append(im.Manifests, add)
	}

	// Sort the manifests for reproducibility.
	sort.Slice(im.Manifests, func(i, j int) bool {
		return im.Manifests[i].Digest.String() < im.Manifests[j].Digest.String()
	})
	logs.Progress.Printf("updating fallback tag %s with new referrer", t.Identifier())
	if err := w.commitManifest(ctx, fallbackTaggable{im}, t); err != nil {
		return err
	}
	return nil
}

type fallbackTaggable struct {
	im v1.IndexManifest
}

func (f fallbackTaggable) RawManifest() ([]byte, error)        { return json.Marshal(f.im) }
func (f fallbackTaggable) MediaType() (types.MediaType, error) { return types.OCIImageIndex, nil }

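The fallback path above tags an image index with a tag derived from the subject's digest via `fallbackTag(sub)`. The helper's body is not shown in this diff, but the OCI referrers fallback convention it relies on is simply the digest with `:` replaced by `-` so it becomes a legal tag; the function name below is illustrative, not the library's API.

```go
package main

import (
	"fmt"
	"strings"
)

// fallbackReferrersTag sketches the OCI "referrers tag" fallback scheme: when
// a registry lacks the Referrers API, referrers of a subject are tracked in an
// image index tagged with the subject's digest, with ":" replaced by "-"
// because ":" is not valid in a tag.
func fallbackReferrersTag(digest string) string {
	return strings.Replace(digest, ":", "-", 1)
}

func main() {
	fmt.Println(fallbackReferrersTag("sha256:deadbeef"))
	// e.g. a manifest whose subject is sha256:deadbeef is tracked under the
	// tag "sha256-deadbeef" on registries without the Referrers API.
}
```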
// commitManifest does a PUT of the image's manifest.
func (w *writer) commitManifest(ctx context.Context, t Taggable, ref name.Reference) error {
	// If the manifest refers to a subject, we need to check whether we need to update the fallback tag manifest.
	raw, err := t.RawManifest()
	if err != nil {
		return err
	}
	var mf struct {
		MediaType types.MediaType `json:"mediaType"`
		Subject   *v1.Descriptor  `json:"subject,omitempty"`
		Config    struct {
			MediaType types.MediaType `json:"mediaType"`
		} `json:"config"`
	}
	if err := json.Unmarshal(raw, &mf); err != nil {
		return err
	}

	// The image was successfully pushed!
	logs.Progress.Printf("%v: digest: %v size: %d", ref, desc.Digest, desc.Size)
	w.incrProgress(int64(len(raw)))
	return nil
	tryUpload := func() error {
		ctx := retry.Never(ctx)
		raw, desc, err := unpackTaggable(t)
		if err != nil {
			return err
		}

		u := w.url(fmt.Sprintf("/v2/%s/manifests/%s", w.repo.RepositoryStr(), ref.Identifier()))

		// Make the request to PUT the serialized manifest
		req, err := http.NewRequest(http.MethodPut, u.String(), bytes.NewBuffer(raw))
		if err != nil {
			return err
		}
		req.Header.Set("Content-Type", string(desc.MediaType))

		resp, err := w.client.Do(req.WithContext(ctx))
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		if err := transport.CheckError(resp, http.StatusOK, http.StatusCreated, http.StatusAccepted); err != nil {
			return err
		}

		// If the manifest referred to a subject, we may need to update the fallback tag manifest.
		// TODO: If this fails, we'll retry the whole upload. We should retry just this part.
		if mf.Subject != nil {
			h, size, err := v1.SHA256(bytes.NewReader(raw))
			if err != nil {
				return err
			}
			desc := v1.Descriptor{
				ArtifactType: string(mf.Config.MediaType),
				MediaType:    mf.MediaType,
				Digest:       h,
				Size:         size,
			}
			if err := w.commitSubjectReferrers(ctx,
				ref.Context().Digest(mf.Subject.Digest.String()),
				desc); err != nil {
				return err
			}
		}

		// The image was successfully pushed!
		logs.Progress.Printf("%v: digest: %v size: %d", ref, desc.Digest, desc.Size)
		w.incrProgress(int64(len(raw)))
		return nil
	}

	return retry.Retry(tryUpload, w.predicate, w.backoff)
}

func scopesForUploadingImage(repo name.Repository, layers []v1.Layer) []string {
@@ -686,23 +781,26 @@ func WriteIndex(ref name.Reference, ii v1.ImageIndex, options ...Option) (rerr e
		return err
	}
	w := writer{
		repo:    ref.Context(),
		client:  &http.Client{Transport: tr},
		context: o.context,
		updates: o.updates,
		repo:      ref.Context(),
		client:    &http.Client{Transport: tr},
		backoff:   o.retryBackoff,
		predicate: o.retryPredicate,
	}

	if o.updates != nil {
		w.lastUpdate = &v1.Update{}
		w.lastUpdate.Total, err = countIndex(ii, o.allowNondistributableArtifacts)
		w.progress = &progress{updates: o.updates}
		w.progress.lastUpdate = &v1.Update{}

		defer close(o.updates)
		defer func() { w.progress.err(rerr) }()

		w.progress.lastUpdate.Total, err = countIndex(ii, o.allowNondistributableArtifacts)
		if err != nil {
			return err
		}
		defer close(o.updates)
		defer func() { sendError(o.updates, rerr) }()
	}

	return w.writeIndex(ref, ii, options...)
	return w.writeIndex(o.context, ref, ii, options...)
}

// countImage counts the total size of all layers + config blob + manifest for
@@ -825,15 +923,18 @@ func WriteLayer(repo name.Repository, layer v1.Layer, options ...Option) (rerr e
		return err
	}
	w := writer{
		repo:    repo,
		client:  &http.Client{Transport: tr},
		context: o.context,
		updates: o.updates,
		repo:      repo,
		client:    &http.Client{Transport: tr},
		backoff:   o.retryBackoff,
		predicate: o.retryPredicate,
	}

	if o.updates != nil {
		w.progress = &progress{updates: o.updates}
		w.progress.lastUpdate = &v1.Update{}

		defer close(o.updates)
		defer func() { sendError(o.updates, rerr) }()
		defer func() { w.progress.err(rerr) }()

		// TODO: support streaming layers which update the total count as they write.
		if _, ok := layer.(*stream.Layer); ok {
@@ -843,9 +944,9 @@ func WriteLayer(repo name.Repository, layer v1.Layer, options ...Option) (rerr e
		if err != nil {
			return err
		}
		w.lastUpdate = &v1.Update{Total: size}
		w.progress.total(size)
	}
	return w.uploadOne(layer)
	return w.uploadOne(o.context, layer)
}

// Tag adds a tag to the given Taggable via PUT /v2/.../manifests/<tag>
@@ -892,10 +993,11 @@ func Put(ref name.Reference, t Taggable, options ...Option) error {
		return err
	}
	w := writer{
		repo:    ref.Context(),
		client:  &http.Client{Transport: tr},
		context: o.context,
		repo:      ref.Context(),
		client:    &http.Client{Transport: tr},
		backoff:   o.retryBackoff,
		predicate: o.retryPredicate,
	}

	return w.commitManifest(t, ref)
	return w.commitManifest(o.context, t, ref)
}

179
vendor/github.com/google/go-containerregistry/pkg/v1/stream/layer.go
generated
vendored
@@ -12,12 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.

// Package stream implements a single-pass streaming v1.Layer.
package stream

import (
	"bufio"
	"compress/gzip"
	"crypto/sha256"
	"crypto"
	"encoding/hex"
	"errors"
	"hash"
@@ -48,6 +49,7 @@ type Layer struct {
	mu             sync.Mutex
	digest, diffID *v1.Hash
	size           int64
	mediaType      types.MediaType
}

var _ v1.Layer = (*Layer)(nil)
@@ -62,11 +64,21 @@ func WithCompressionLevel(level int) LayerOption {
	}
}

// WithMediaType is a functional option for overriding the layer's media type.
func WithMediaType(mt types.MediaType) LayerOption {
	return func(l *Layer) {
		l.mediaType = mt
	}
}
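`WithMediaType` joins `WithCompressionLevel` as a functional option: a function that mutates the `Layer` under construction. The sketch below reproduces the pattern with toy types; the lowercase names are illustrative stand-ins, not the stream package's API.

```go
package main

import "fmt"

// layer is a toy stand-in for stream.Layer.
type layer struct {
	compression int
	mediaType   string
}

// layerOption mirrors the diff's LayerOption: a closure over the field to set.
type layerOption func(*layer)

func withCompressionLevel(level int) layerOption {
	return func(l *layer) { l.compression = level }
}

func withMediaType(mt string) layerOption {
	return func(l *layer) { l.mediaType = mt }
}

// newLayer applies defaults first, then lets each option override them, which
// is why WithMediaType in the diff can replace the DockerLayer default.
func newLayer(opts ...layerOption) *layer {
	l := &layer{
		compression: 1, // gzip.BestSpeed
		mediaType:   "application/vnd.docker.image.rootfs.diff.tar.gzip",
	}
	for _, opt := range opts {
		opt(l)
	}
	return l
}

func main() {
	l := newLayer(withMediaType("application/vnd.oci.image.layer.v1.tar+gzip"))
	fmt.Println(l.compression, l.mediaType)
}
```

The design keeps the constructor signature stable while new knobs are added, which is why the diff can introduce `mediaType` without touching existing `NewLayer` call sites.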
// NewLayer creates a Layer from an io.ReadCloser.
func NewLayer(rc io.ReadCloser, opts ...LayerOption) *Layer {
	layer := &Layer{
		blob:        rc,
		compression: gzip.BestSpeed,
		// We use DockerLayer for now as uncompressed layers
		// are unimplemented
		mediaType: types.DockerLayer,
	}

	for _, opt := range opts {
@@ -108,9 +120,7 @@ func (l *Layer) Size() (int64, error) {

// MediaType implements v1.Layer
func (l *Layer) MediaType() (types.MediaType, error) {
	// We return DockerLayer for now as uncompressed layers
	// are unimplemented
	return types.DockerLayer, nil
	return l.mediaType, nil
}

// Uncompressed implements v1.Layer.
@@ -126,20 +136,38 @@ func (l *Layer) Compressed() (io.ReadCloser, error) {
	return newCompressedReader(l)
}

// finalize sets the layer to consumed and computes all hash and size values.
func (l *Layer) finalize(uncompressed, compressed hash.Hash, size int64) error {
	l.mu.Lock()
	defer l.mu.Unlock()

	diffID, err := v1.NewHash("sha256:" + hex.EncodeToString(uncompressed.Sum(nil)))
	if err != nil {
		return err
	}
	l.diffID = &diffID

	digest, err := v1.NewHash("sha256:" + hex.EncodeToString(compressed.Sum(nil)))
	if err != nil {
		return err
	}
	l.digest = &digest

	l.size = size
	l.consumed = true
	return nil
}

type compressedReader struct {
	closer io.Closer // original blob's Closer.

	h, zh hash.Hash // collects digests of compressed and uncompressed stream.
	pr    io.Reader
	bw    *bufio.Writer
	count *countWriter

	l *Layer // stream.Layer to update upon Close.
	pr io.Reader
	closer func() error
}

func newCompressedReader(l *Layer) (*compressedReader, error) {
	h := sha256.New()
	zh := sha256.New()
	// Collect digests of compressed and uncompressed stream and size of
	// compressed stream.
	h := crypto.SHA256.New()
	zh := crypto.SHA256.New()
	count := &countWriter{}

	// gzip.Writer writes to the output stream via pipe, a hasher to
@@ -158,24 +186,74 @@ func newCompressedReader(l *Layer) (*compressedReader, error) {
		return nil, err
	}

	doneDigesting := make(chan struct{})

	cr := &compressedReader{
		closer: newMultiCloser(zw, l.blob),
		pr:     pr,
		bw:     bw,
		h:      h,
		zh:     zh,
		count:  count,
		l:  l,
		pr: pr,
		closer: func() error {
			// Immediately close pw without error. There are three ways to get
			// here.
			//
			// 1. There was a copy error due from the underlying reader, in which
			//    case the error will not be overwritten.
			// 2. Copying from the underlying reader completed successfully.
			// 3. Close has been called before the underlying reader has been
			//    fully consumed. In this case pw must be closed in order to
			//    keep the flush of bw from blocking indefinitely.
			//
			// NOTE: pw.Close never returns an error. The signature is only to
			// implement io.Closer.
			_ = pw.Close()

			// Close the inner ReadCloser.
			//
			// NOTE: net/http will call close on success, so if we've already
			// closed the inner rc, it's not an error.
			if err := l.blob.Close(); err != nil && !errors.Is(err, os.ErrClosed) {
				return err
			}

			// Finalize layer with its digest and size values.
			<-doneDigesting
			return l.finalize(h, zh, count.n)
		},
	}
	go func() {
		if _, err := io.Copy(io.MultiWriter(h, zw), l.blob); err != nil {
		// Copy blob into the gzip writer, which also hashes and counts the
		// size of the compressed output, and hasher of the raw contents.
		_, copyErr := io.Copy(io.MultiWriter(h, zw), l.blob)

		// Close the gzip writer once copying is done. If this is done in the
		// Close method of compressedReader instead, then it can cause a panic
		// when the compressedReader is closed before the blob is fully
		// consumed and io.Copy in this goroutine is still blocking.
		closeErr := zw.Close()

		// Check errors from writing and closing streams.
		if copyErr != nil {
			close(doneDigesting)
			pw.CloseWithError(copyErr)
			return
		}
		if closeErr != nil {
			close(doneDigesting)
			pw.CloseWithError(closeErr)
			return
		}

		// Flush the buffer once all writes are complete to the gzip writer.
		if err := bw.Flush(); err != nil {
			close(doneDigesting)
			pw.CloseWithError(err)
			return
		}
		// Now close the compressed reader, to flush the gzip stream
		// and calculate digest/diffID/size. This will cause pr to
		// return EOF which will cause readers of the Compressed stream
		// to finish reading.

		// Notify closer that digests are done being written.
		close(doneDigesting)

		// Close the compressed reader to calculate digest/diffID/size. This
		// will cause pr to return EOF which will cause readers of the
		// Compressed stream to finish reading.
		pw.CloseWithError(cr.Close())
	}()

@@ -184,36 +262,7 @@ func newCompressedReader(l *Layer) (*compressedReader, error) {

func (cr *compressedReader) Read(b []byte) (int, error) { return cr.pr.Read(b) }

func (cr *compressedReader) Close() error {
	cr.l.mu.Lock()
	defer cr.l.mu.Unlock()

	// Close the inner ReadCloser.
	if err := cr.closer.Close(); err != nil {
		return err
	}

	// Flush the buffer.
	if err := cr.bw.Flush(); err != nil {
		return err
	}

	diffID, err := v1.NewHash("sha256:" + hex.EncodeToString(cr.h.Sum(nil)))
	if err != nil {
		return err
	}
	cr.l.diffID = &diffID

	digest, err := v1.NewHash("sha256:" + hex.EncodeToString(cr.zh.Sum(nil)))
	if err != nil {
		return err
	}
	cr.l.digest = &digest

	cr.l.size = cr.count.n
	cr.l.consumed = true
	return nil
}
func (cr *compressedReader) Close() error { return cr.closer() }

// countWriter counts bytes written to it.
type countWriter struct{ n int64 }
@@ -222,21 +271,3 @@ func (c *countWriter) Write(p []byte) (int, error) {
	c.n += int64(len(p))
	return len(p), nil
}

// multiCloser is a Closer that collects multiple Closers and Closes them in order.
type multiCloser []io.Closer

var _ io.Closer = (multiCloser)(nil)

func newMultiCloser(c ...io.Closer) multiCloser { return multiCloser(c) }

func (m multiCloser) Close() error {
	for _, c := range m {
		// NOTE: net/http will call close on success, so if we've already
		// closed the inner rc, it's not an error.
		if err := c.Close(); err != nil && !errors.Is(err, os.ErrClosed) {
			return err
		}
	}
	return nil
}

11
vendor/github.com/google/go-containerregistry/pkg/v1/types/types.go
generated
vendored
@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.

// Package types holds common OCI media types.
package types

// MediaType is an enumeration of the supported mime types that an element of an image might have.
@@ -24,6 +25,7 @@ const (
	OCIManifestSchema1             MediaType = "application/vnd.oci.image.manifest.v1+json"
	OCIConfigJSON                  MediaType = "application/vnd.oci.image.config.v1+json"
	OCILayer                       MediaType = "application/vnd.oci.image.layer.v1.tar+gzip"
	OCILayerZStd                   MediaType = "application/vnd.oci.image.layer.v1.tar+zstd"
	OCIRestrictedLayer             MediaType = "application/vnd.oci.image.layer.nondistributable.v1.tar+gzip"
	OCIUncompressedLayer           MediaType = "application/vnd.oci.image.layer.v1.tar"
	OCIUncompressedRestrictedLayer MediaType = "application/vnd.oci.image.layer.nondistributable.v1.tar"
@@ -69,3 +71,12 @@ func (m MediaType) IsIndex() bool {
	}
	return false
}

// IsConfig returns true if the mediaType represents a config, as opposed to something else, like an image.
func (m MediaType) IsConfig() bool {
	switch m {
	case OCIConfigJSON, DockerConfigJSON:
		return true
	}
	return false
}
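The new `IsConfig` predicate follows the same shape as the package's existing `IsIndex`: a switch over a small set of media-type constants. A self-contained sketch (with lowercase stand-in names, since the real constants live in the vendored `types` package):

```go
package main

import "fmt"

type mediaType string

const (
	ociConfigJSON    mediaType = "application/vnd.oci.image.config.v1+json"
	dockerConfigJSON mediaType = "application/vnd.docker.container.image.v1+json"
)

// isConfig mirrors the IsConfig predicate added in the diff: a simple
// membership check over the two config media types.
func isConfig(m mediaType) bool {
	switch m {
	case ociConfigJSON, dockerConfigJSON:
		return true
	}
	return false
}

func main() {
	fmt.Println(isConfig(ociConfigJSON))
	fmt.Println(isConfig("application/vnd.oci.image.layer.v1.tar+gzip"))
}
```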
21
vendor/github.com/google/go-containerregistry/pkg/v1/zz_deepcopy_generated.go
generated
vendored
@@ -1,3 +1,4 @@
//go:build !ignore_autogenerated
// +build !ignore_autogenerated

// Copyright 2018 Google LLC All Rights Reserved.
@@ -98,6 +99,11 @@ func (in *ConfigFile) DeepCopyInto(out *ConfigFile) {
	}
	in.RootFS.DeepCopyInto(&out.RootFS)
	in.Config.DeepCopyInto(&out.Config)
	if in.OSFeatures != nil {
		in, out := &in.OSFeatures, &out.OSFeatures
		*out = make([]string, len(*in))
		copy(*out, *in)
	}
	return
}

@@ -115,6 +121,11 @@ func (in *ConfigFile) DeepCopy() *ConfigFile {
func (in *Descriptor) DeepCopyInto(out *Descriptor) {
	*out = *in
	out.Digest = in.Digest
	if in.Data != nil {
		in, out := &in.Data, &out.Data
		*out = make([]byte, len(*in))
		copy(*out, *in)
	}
	if in.URLs != nil {
		in, out := &in.URLs, &out.URLs
		*out = make([]string, len(*in))
@@ -216,6 +227,11 @@ func (in *IndexManifest) DeepCopyInto(out *IndexManifest) {
			(*out)[key] = val
		}
	}
	if in.Subject != nil {
		in, out := &in.Subject, &out.Subject
		*out = new(Descriptor)
		(*in).DeepCopyInto(*out)
	}
	return
}

@@ -247,6 +263,11 @@ func (in *Manifest) DeepCopyInto(out *Manifest) {
			(*out)[key] = val
		}
	}
	if in.Subject != nil {
		in, out := &in.Subject, &out.Subject
		*out = new(Descriptor)
		(*in).DeepCopyInto(*out)
	}
	return
}