Use apiextensions v1

- Upgrade from apiextensions v1beta1 to v1.
- Generate a YAML manifest for the CRD instead of applying it at runtime.
- Users will have to apply the manifest with kubectl.
- kg and kgctl log an error if the CRD is not present.
- Validation should now actually work.

Signed-off-by: leonnicolas <leonloechner@gmx.de>
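The workflow described above can be sketched as follows. The paths and generator arguments are illustrative assumptions, not taken from this commit; point them at the project's actual API packages and manifest directory:

```shell
# Generate the CRD manifest at build time instead of creating the CRD at runtime.
# The input and output paths below are hypothetical.
controller-gen crd paths=./pkg/k8s/apis/... output:crd:dir=./manifests

# Users then apply the generated manifest themselves before running kg/kgctl.
kubectl apply -f ./manifests
```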
201 vendor/sigs.k8s.io/controller-tools/LICENSE (generated, vendored, normal file)
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
263 vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go (generated, vendored, normal file)
@@ -0,0 +1,263 @@
/*
Copyright 2018 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strings"

	"github.com/spf13/cobra"

	"sigs.k8s.io/controller-tools/pkg/crd"
	"sigs.k8s.io/controller-tools/pkg/deepcopy"
	"sigs.k8s.io/controller-tools/pkg/genall"
	"sigs.k8s.io/controller-tools/pkg/genall/help"
	prettyhelp "sigs.k8s.io/controller-tools/pkg/genall/help/pretty"
	"sigs.k8s.io/controller-tools/pkg/markers"
	"sigs.k8s.io/controller-tools/pkg/rbac"
	"sigs.k8s.io/controller-tools/pkg/schemapatcher"
	"sigs.k8s.io/controller-tools/pkg/version"
	"sigs.k8s.io/controller-tools/pkg/webhook"
)

//go:generate go run ../helpgen/main.go paths=../../pkg/... generate:headerFile=../../boilerplate.go.txt,year=2019

// Options are specified to controller-gen by turning generators and output rules into
// markers, and then parsing them using the standard registry logic (without the "+").
// Each marker and output rule should thus be usable as a marker target.

var (
	// allGenerators maintains the list of all known generators, giving
	// them names for use on the command line.
	// each turns into a command line option,
	// and has options for output forms.
	allGenerators = map[string]genall.Generator{
		"crd":         crd.Generator{},
		"rbac":        rbac.Generator{},
		"object":      deepcopy.Generator{},
		"webhook":     webhook.Generator{},
		"schemapatch": schemapatcher.Generator{},
	}

	// allOutputRules defines the list of all known output rules, giving
	// them names for use on the command line.
	// Each output rule turns into two command line options:
	// - output:<generator>:<form> (per-generator output)
	// - output:<form> (default output)
	allOutputRules = map[string]genall.OutputRule{
		"dir":       genall.OutputToDirectory(""),
		"none":      genall.OutputToNothing,
		"stdout":    genall.OutputToStdout,
		"artifacts": genall.OutputArtifacts{},
	}

	// optionsRegistry contains all the marker definitions used to process command line options
	optionsRegistry = &markers.Registry{}
)

func init() {
	for genName, gen := range allGenerators {
		// make the generator options marker itself
		defn := markers.Must(markers.MakeDefinition(genName, markers.DescribesPackage, gen))
		if err := optionsRegistry.Register(defn); err != nil {
			panic(err)
		}
		if helpGiver, hasHelp := gen.(genall.HasHelp); hasHelp {
			if help := helpGiver.Help(); help != nil {
				optionsRegistry.AddHelp(defn, help)
			}
		}

		// make per-generation output rule markers
		for ruleName, rule := range allOutputRules {
			ruleMarker := markers.Must(markers.MakeDefinition(fmt.Sprintf("output:%s:%s", genName, ruleName), markers.DescribesPackage, rule))
			if err := optionsRegistry.Register(ruleMarker); err != nil {
				panic(err)
			}
			if helpGiver, hasHelp := rule.(genall.HasHelp); hasHelp {
				if help := helpGiver.Help(); help != nil {
					optionsRegistry.AddHelp(ruleMarker, help)
				}
			}
		}
	}

	// make "default output" output rule markers
	for ruleName, rule := range allOutputRules {
		ruleMarker := markers.Must(markers.MakeDefinition("output:"+ruleName, markers.DescribesPackage, rule))
		if err := optionsRegistry.Register(ruleMarker); err != nil {
			panic(err)
		}
		if helpGiver, hasHelp := rule.(genall.HasHelp); hasHelp {
			if help := helpGiver.Help(); help != nil {
				optionsRegistry.AddHelp(ruleMarker, help)
			}
		}
	}

	// add in the common options markers
	if err := genall.RegisterOptionsMarkers(optionsRegistry); err != nil {
		panic(err)
	}
}

// noUsageError suppresses usage printing when it occurs
// (since cobra doesn't provide a good way to avoid printing
// out usage in only certain situations).
type noUsageError struct{ error }

func main() {
	helpLevel := 0
	whichLevel := 0
	showVersion := false

	cmd := &cobra.Command{
		Use:   "controller-gen",
		Short: "Generate Kubernetes API extension resources and code.",
		Long:  "Generate Kubernetes API extension resources and code.",
		Example: `	# Generate RBAC manifests and crds for all types under apis/,
	# outputting crds to /tmp/crds and everything else to stdout
	controller-gen rbac:roleName=<role name> crd paths=./apis/... output:crd:dir=/tmp/crds output:stdout

	# Generate deepcopy/runtime.Object implementations for a particular file
	controller-gen object paths=./apis/v1beta1/some_types.go

	# Generate OpenAPI v3 schemas for API packages and merge them into existing CRD manifests
	controller-gen schemapatch:manifests=./manifests output:dir=./manifests paths=./pkg/apis/...

	# Run all the generators for a given project
	controller-gen paths=./apis/...

	# Explain the markers for generating CRDs, and their arguments
	controller-gen crd -ww
`,
		RunE: func(c *cobra.Command, rawOpts []string) error {
			// print version if asked for it
			if showVersion {
				version.Print()
				return nil
			}

			// print the help if we asked for it (since we've got a different help flag :-/), then bail
			if helpLevel > 0 {
				return c.Usage()
			}

			// print the marker docs if we asked for them, then bail
			if whichLevel > 0 {
				return printMarkerDocs(c, rawOpts, whichLevel)
			}

			// otherwise, set up the runtime for actually running the generators
			rt, err := genall.FromOptions(optionsRegistry, rawOpts)
			if err != nil {
				return err
			}
			if len(rt.Generators) == 0 {
				return fmt.Errorf("no generators specified")
			}

			if hadErrs := rt.Run(); hadErrs {
				// don't obscure the actual error with a bunch of usage
				return noUsageError{fmt.Errorf("not all generators ran successfully")}
			}
			return nil
		},
		SilenceUsage: true, // silence the usage, then print it out ourselves if it wasn't suppressed
	}
	cmd.Flags().CountVarP(&whichLevel, "which-markers", "w", "print out all markers available with the requested generators\n(up to -www for the most detailed output, or -wwww for json output)")
	cmd.Flags().CountVarP(&helpLevel, "detailed-help", "h", "print out more detailed help\n(up to -hhh for the most detailed output, or -hhhh for json output)")
	cmd.Flags().BoolVar(&showVersion, "version", false, "show version")
	cmd.Flags().Bool("help", false, "print out usage and a summary of options")
	oldUsage := cmd.UsageFunc()
	cmd.SetUsageFunc(func(c *cobra.Command) error {
		if err := oldUsage(c); err != nil {
			return err
		}
		if helpLevel == 0 {
			helpLevel = summaryHelp
		}
		fmt.Fprintf(c.OutOrStderr(), "\n\nOptions\n\n")
		return helpForLevels(c.OutOrStdout(), c.OutOrStderr(), helpLevel, optionsRegistry, help.SortByOption)
	})

	if err := cmd.Execute(); err != nil {
		if _, noUsage := err.(noUsageError); !noUsage {
			// print the usage unless we suppressed it
			if err := cmd.Usage(); err != nil {
				panic(err)
			}
		}
		fmt.Fprintf(cmd.OutOrStderr(), "run `%[1]s %[2]s -w` to see all available markers, or `%[1]s %[2]s -h` for usage\n", cmd.CalledAs(), strings.Join(os.Args[1:], " "))
		os.Exit(1)
	}
}

// printMarkerDocs prints out marker help for the given generators specified in
// the rawOptions, at the given level.
func printMarkerDocs(c *cobra.Command, rawOptions []string, whichLevel int) error {
	// just grab a registry so we don't lag while trying to load roots
	// (like we'd do if we just constructed the full runtime).
	reg, err := genall.RegistryFromOptions(optionsRegistry, rawOptions)
	if err != nil {
		return err
	}

	return helpForLevels(c.OutOrStdout(), c.OutOrStderr(), whichLevel, reg, help.SortByCategory)
}

func helpForLevels(mainOut io.Writer, errOut io.Writer, whichLevel int, reg *markers.Registry, sorter help.SortGroup) error {
	helpInfo := help.ByCategory(reg, sorter)
	switch whichLevel {
	case jsonHelp:
		if err := json.NewEncoder(mainOut).Encode(helpInfo); err != nil {
			return err
		}
	case detailedHelp, fullHelp:
		fullDetail := whichLevel == fullHelp
		for _, cat := range helpInfo {
			if cat.Category == "" {
				continue
			}
			contents := prettyhelp.MarkersDetails(fullDetail, cat.Category, cat.Markers)
			if err := contents.WriteTo(errOut); err != nil {
				return err
			}
		}
	case summaryHelp:
		for _, cat := range helpInfo {
			if cat.Category == "" {
				continue
			}
			contents := prettyhelp.MarkersSummary(cat.Category, cat.Markers)
			if err := contents.WriteTo(errOut); err != nil {
				return err
			}
		}
	}
	return nil
}

const (
	_ = iota
	summaryHelp
	detailedHelp
	fullHelp
	jsonHelp
)
122 vendor/sigs.k8s.io/controller-tools/pkg/crd/conv.go (generated, vendored, normal file)
@@ -0,0 +1,122 @@
package crd

import (
	"fmt"

	apiextinternal "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions"
	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

var (
	conversionScheme = runtime.NewScheme()
)

func init() {
	if err := apiextinternal.AddToScheme(conversionScheme); err != nil {
		panic("must be able to add internal apiextensions to the CRD conversion Scheme")
	}
	if err := apiext.AddToScheme(conversionScheme); err != nil {
		panic("must be able to add apiextensions/v1 to the CRD conversion Scheme")
	}
	if err := apiextv1beta1.AddToScheme(conversionScheme); err != nil {
		panic("must be able to add apiextensions/v1beta1 to the CRD conversion Scheme")
	}
}

// AsVersion converts a CRD from the canonical internal form (currently v1) to some external form.
func AsVersion(original apiext.CustomResourceDefinition, gv schema.GroupVersion) (runtime.Object, error) {
	// We can use the internal versions and existing conversions from kubernetes, since they're not in k/k itself.
	// This punts the problem of conversion down the road for a future maintainer (or future instance of @directxman12)
	// when we have to support older versions that get removed, or when API machinery decides to yell at us for this
	// questionable decision.
	intVer, err := conversionScheme.ConvertToVersion(&original, apiextinternal.SchemeGroupVersion)
	if err != nil {
		return nil, fmt.Errorf("unable to convert to internal CRD version: %w", err)
	}

	return conversionScheme.ConvertToVersion(intVer, gv)
}

// mergeIdenticalSubresources checks to see if subresources are identical across
// all versions, and if so, merges them into a top-level version.
//
// This assumes you're not using trivial versions.
func mergeIdenticalSubresources(crd *apiextv1beta1.CustomResourceDefinition) {
	subres := crd.Spec.Versions[0].Subresources
	for _, ver := range crd.Spec.Versions {
		if ver.Subresources == nil || !equality.Semantic.DeepEqual(subres, ver.Subresources) {
			// either all nil, or not identical
			return
		}
	}

	// things are identical if we've gotten this far, so move the subresources up
	// and discard the identical per-version ones
	crd.Spec.Subresources = subres
	for i := range crd.Spec.Versions {
		crd.Spec.Versions[i].Subresources = nil
	}
}

// mergeIdenticalSchemata checks to see if schemata are identical across
// all versions, and if so, merges them into a top-level version.
//
// This assumes you're not using trivial versions.
func mergeIdenticalSchemata(crd *apiextv1beta1.CustomResourceDefinition) {
	schema := crd.Spec.Versions[0].Schema
	for _, ver := range crd.Spec.Versions {
		if ver.Schema == nil || !equality.Semantic.DeepEqual(schema, ver.Schema) {
			// either all nil, or not identical
			return
		}
	}

	// things are identical if we've gotten this far, so move the schemata up
	// to a single schema and discard the identical per-version ones
	crd.Spec.Validation = schema
	for i := range crd.Spec.Versions {
		crd.Spec.Versions[i].Schema = nil
	}
}

// mergeIdenticalPrinterColumns checks to see if printer columns are identical across
// all versions, and if so, merges them into a top-level version.
//
// This assumes you're not using trivial versions.
func mergeIdenticalPrinterColumns(crd *apiextv1beta1.CustomResourceDefinition) {
	cols := crd.Spec.Versions[0].AdditionalPrinterColumns
	for _, ver := range crd.Spec.Versions {
		if len(ver.AdditionalPrinterColumns) == 0 || !equality.Semantic.DeepEqual(cols, ver.AdditionalPrinterColumns) {
			// either all nil, or not identical
			return
		}
	}

	// things are identical if we've gotten this far, so move the printer columns up
	// and discard the identical per-version ones
	crd.Spec.AdditionalPrinterColumns = cols
	for i := range crd.Spec.Versions {
		crd.Spec.Versions[i].AdditionalPrinterColumns = nil
	}
}

// MergeIdenticalVersionInfo makes sure that components of the Versions field that are identical
// across all versions get merged into the top-level fields in v1beta1.
//
// This is required by the Kubernetes API server validation.
//
// The reason is that a v1beta1 -> v1 -> v1beta1 conversion cycle would need to
// round-trip identically: v1 doesn't have top-level subresources, and without
// this restriction it would be ambiguous how a v1-with-identical-subresources
// converts into a v1beta1.
func MergeIdenticalVersionInfo(crd *apiextv1beta1.CustomResourceDefinition) {
	if len(crd.Spec.Versions) > 0 {
		mergeIdenticalSubresources(crd)
		mergeIdenticalSchemata(crd)
		mergeIdenticalPrinterColumns(crd)
	}
}
78 vendor/sigs.k8s.io/controller-tools/pkg/crd/desc_visitor.go (generated, vendored, normal file)
@@ -0,0 +1,78 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package crd

import (
	"strings"
	"unicode"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// TruncateDescription truncates the description of fields in given schema if it
// exceeds maxLen.
// It tries to chop off the description at the closest sentence boundary.
func TruncateDescription(schema *apiext.JSONSchemaProps, maxLen int) {
	EditSchema(schema, descVisitor{maxLen: maxLen})
}

// descVisitor recursively visits all fields in the schema and truncates the
// description of the fields to specified maxLen.
type descVisitor struct {
	// maxLen is the maximum allowed length for description of a field
	maxLen int
}

func (v descVisitor) Visit(schema *apiext.JSONSchemaProps) SchemaVisitor {
	if schema == nil {
		return v
	}
	if v.maxLen < 0 {
		return nil /* no further work to be done for this schema */
	}
	if v.maxLen == 0 {
		schema.Description = ""
		return v
	}
	if len(schema.Description) > v.maxLen {
		schema.Description = truncateString(schema.Description, v.maxLen)
		return v
	}
	return v
}

// truncateString truncates given desc string if it exceeds maxLen. It may
// return string with length less than maxLen even in cases where original desc
// exceeds maxLen because it tries to chop off the desc at the closest sentence
// boundary to avoid incomplete sentences.
func truncateString(desc string, maxLen int) string {
	desc = desc[0:maxLen]

	// Trying to chop off at closest sentence boundary.
	if n := strings.LastIndexFunc(desc, isSentenceTerminal); n > 0 {
		return desc[0 : n+1]
	}
	// TODO(droot): Improve the logic to chop off at closest word boundary
	// or add ellipses (...) to indicate that it's chopped in case no closest
	// sentence found within maxLen.
	return desc
}

// helper function to determine if given rune is a sentence terminal or not.
func isSentenceTerminal(r rune) bool {
	return unicode.Is(unicode.STerm, r)
}
63 vendor/sigs.k8s.io/controller-tools/pkg/crd/doc.go (generated, vendored, normal file)
@@ -0,0 +1,63 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package crd contains utilities for generating CustomResourceDefinitions and
// their corresponding OpenAPI validation schemata.
//
// Markers
//
// Markers live under the markers subpackage. Two types of markers exist:
// those that modify schema generation (for validation), and those that modify
// the rest of the CRD. See the subpackage for more information and all
// supported markers.
//
// Collecting Types and Generating CRDs
//
// The Parser is the entrypoint for collecting the information required to
// generate CRDs. Like loader and collector, its methods are idempotent, not
// doing extra work if called multiple times.
//
// Parser's methods start with Need. Calling NeedXYZ indicates that XYZ should
// be made present in the equivalent field in the Parser, where it can then be
// loaded from. Each Need method will in turn call Need on anything it needs.
//
// In general, root packages should first be loaded into the Parser with
// NeedPackage. Then, CRDs can be generated with NeedCRDFor.
//
// Errors are generally attached directly to the relevant Package with
// AddError.
//
// Known Packages
//
// There are a few types from Kubernetes that have special meaning, but don't
// have validation markers attached. Those specific types have overrides
// listed in KnownPackages that can be added as overrides to any parser.
//
// Flattening
//
// Once schemata are generated, they can be used directly by external tooling
// (like JSONSchema validators), but must first be "flattened" to not contain
// references before use in a CRD (Kubernetes doesn't allow references in the
// CRD's validation schema).
//
// The Flattener built in to the Parser takes care of flattening out references
// when requesting the CRDs, but can be invoked manually. It will not modify
// the input schemata.
//
// Flattened schemata may further be passed to FlattenEmbedded to remove the
// use of AllOf (which is used to describe embedded struct fields when
// references are in use). This is done automatically when fetching CRDs.
package crd
441 vendor/sigs.k8s.io/controller-tools/pkg/crd/flatten.go (generated, vendored, new file)
@@ -0,0 +1,441 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package crd

import (
	"fmt"
	"reflect"
	"sort"
	"strings"
	"sync"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

	"sigs.k8s.io/controller-tools/pkg/loader"
)

// ErrorRecorder knows how to record errors. It wraps the part of
// pkg/loader.Package that we need to record errors in places where it might
// not make sense to have a loader.Package.
type ErrorRecorder interface {
	// AddError records that the given error occurred.
	// See the documentation on loader.Package.AddError for more information.
	AddError(error)
}

// isOrNil checks if val is nil when val is of a nillable type; otherwise,
// it compares valInt to zeroInt (the zero value's interface).
func isOrNil(val reflect.Value, valInt interface{}, zeroInt interface{}) bool {
	switch valKind := val.Kind(); valKind {
	case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
		return val.IsNil()
	default:
		return valInt == zeroInt
	}
}

// flattenAllOfInto copies properties from src to dst, then copies the properties
// of each item in src's allOf to dst's properties as well.
func flattenAllOfInto(dst *apiext.JSONSchemaProps, src apiext.JSONSchemaProps, errRec ErrorRecorder) {
	if len(src.AllOf) > 0 {
		for _, embedded := range src.AllOf {
			flattenAllOfInto(dst, embedded, errRec)
		}
	}

	dstVal := reflect.Indirect(reflect.ValueOf(dst))
	srcVal := reflect.ValueOf(src)
	typ := dstVal.Type()

	srcRemainder := apiext.JSONSchemaProps{}
	srcRemVal := reflect.Indirect(reflect.ValueOf(&srcRemainder))
	dstRemainder := apiext.JSONSchemaProps{}
	dstRemVal := reflect.Indirect(reflect.ValueOf(&dstRemainder))
	hoisted := false

	for i := 0; i < srcVal.NumField(); i++ {
		fieldName := typ.Field(i).Name
		switch fieldName {
		case "AllOf":
			// don't merge because we deal with it above
			continue
		case "Title", "Description", "Example", "ExternalDocs":
			// don't merge because we pre-merge to properly preserve field docs
			continue
		}
		srcField := srcVal.Field(i)
		fldTyp := srcField.Type()
		zeroVal := reflect.Zero(fldTyp)
		zeroInt := zeroVal.Interface()
		srcInt := srcField.Interface()

		if isOrNil(srcField, srcInt, zeroInt) {
			// nothing to copy from src, continue
			continue
		}

		dstField := dstVal.Field(i)
		dstInt := dstField.Interface()
		if isOrNil(dstField, dstInt, zeroInt) {
			// dst is empty, just copy from src and continue
			dstField.Set(srcField)
			continue
		}

		if fldTyp.Comparable() && srcInt == dstInt {
			// same value, continue
			continue
		}

		// resolve conflict
		switch fieldName {
		case "Properties":
			// merge if possible, use allOf otherwise
			srcMap := srcInt.(map[string]apiext.JSONSchemaProps)
			dstMap := dstInt.(map[string]apiext.JSONSchemaProps)

			for k, v := range srcMap {
				dstProp, exists := dstMap[k]
				if !exists {
					dstMap[k] = v
					continue
				}
				flattenAllOfInto(&dstProp, v, errRec)
				dstMap[k] = dstProp
			}
		case "Required":
			// merge
			dstField.Set(reflect.AppendSlice(dstField, srcField))
		case "Type":
			if srcInt != dstInt {
				// TODO(directxman12): figure out how to attach this back to a useful point in the Go source or in the schema
				errRec.AddError(fmt.Errorf("conflicting types in allOf branches in schema: %s vs %s", dstInt, srcInt))
			}
			// keep the destination value, for now
		// TODO(directxman12): Default -- use field?
		// TODO(directxman12):
		// - Dependencies: if field x is present, then either schema validates or all props are present
		// - AdditionalItems: like AdditionalProperties
		// - Definitions: common named validation sets that can be referenced (merge, bail if duplicate)
		case "AdditionalProperties":
			// as of the time of writing, `allows: false` is not allowed, so we don't have to handle it
			srcProps := srcInt.(*apiext.JSONSchemaPropsOrBool)
			if srcProps.Schema == nil {
				// nothing to merge
				continue
			}
			dstProps := dstInt.(*apiext.JSONSchemaPropsOrBool)
			if dstProps.Schema == nil {
				dstProps.Schema = &apiext.JSONSchemaProps{}
			}
			flattenAllOfInto(dstProps.Schema, *srcProps.Schema, errRec)
		// NB(directxman12): no need to explicitly handle nullable -- false is considered to be the zero value
		// TODO(directxman12): src isn't necessarily the field value -- it's just the most recent allOf entry
		default:
			// hoist into allOf...
			hoisted = true

			srcRemVal.Field(i).Set(srcField)
			dstRemVal.Field(i).Set(dstField)
			// ...and clear the original
			dstField.Set(zeroVal)
		}
	}

	if hoisted {
		dst.AllOf = append(dst.AllOf, dstRemainder, srcRemainder)
	}

	// dedup required
	if len(dst.Required) > 0 {
		reqUniq := make(map[string]struct{})
		for _, req := range dst.Required {
			reqUniq[req] = struct{}{}
		}
		dst.Required = make([]string, 0, len(reqUniq))
		for req := range reqUniq {
			dst.Required = append(dst.Required, req)
		}
		// be deterministic
		sort.Strings(dst.Required)
	}
}
// allOfVisitor recursively visits allOf fields in the schema,
// merging nested allOf properties into the root schema.
type allOfVisitor struct {
	// errRec is used to record errors while flattening (like two conflicting
	// field values used in an allOf)
	errRec ErrorRecorder
}

func (v *allOfVisitor) Visit(schema *apiext.JSONSchemaProps) SchemaVisitor {
	if schema == nil {
		return v
	}

	// clear this now so that we can safely preserve edits made by flattenAllOfInto
	origAllOf := schema.AllOf
	schema.AllOf = nil

	for _, embedded := range origAllOf {
		flattenAllOfInto(schema, embedded, v.errRec)
	}
	return v
}

// NB(directxman12): FlattenEmbedded is separate from Flattener because
// some tooling wants to flatten out embedded fields, but only actually
// flatten a few specific types first.

// FlattenEmbedded flattens embedded fields (represented via AllOf) which have
// already had their references resolved into simple properties in the containing
// schema.
func FlattenEmbedded(schema *apiext.JSONSchemaProps, errRec ErrorRecorder) *apiext.JSONSchemaProps {
	outSchema := schema.DeepCopy()
	EditSchema(outSchema, &allOfVisitor{errRec: errRec})
	return outSchema
}

// Flattener knows how to take a root type, and flatten all references in it
// into a single, flat type. Flattened types are cached, so it's relatively
// cheap to make repeated calls with the same type.
type Flattener struct {
	// Parser is used to look up package and type details, and parse in new packages.
	Parser *Parser

	// LookupReference resolves a reference in the context of the given
	// package; it defaults to identFromRef when nil.
	LookupReference func(ref string, contextPkg *loader.Package) (TypeIdent, error)

	// flattenedTypes holds the flattened version of each seen type for later reuse.
	flattenedTypes map[TypeIdent]apiext.JSONSchemaProps
	initOnce       sync.Once
}

func (f *Flattener) init() {
	f.initOnce.Do(func() {
		f.flattenedTypes = make(map[TypeIdent]apiext.JSONSchemaProps)
		if f.LookupReference == nil {
			f.LookupReference = identFromRef
		}
	})
}

// cacheType saves the flattened version of the given type for later reuse.
func (f *Flattener) cacheType(typ TypeIdent, schema apiext.JSONSchemaProps) {
	f.init()
	f.flattenedTypes[typ] = schema
}

// loadUnflattenedSchema fetches a fresh, unflattened schema from the parser.
func (f *Flattener) loadUnflattenedSchema(typ TypeIdent) (*apiext.JSONSchemaProps, error) {
	f.Parser.NeedSchemaFor(typ)

	baseSchema, found := f.Parser.Schemata[typ]
	if !found {
		return nil, fmt.Errorf("unable to locate schema for type %s", typ)
	}
	return &baseSchema, nil
}

// FlattenType flattens the given pre-loaded type, removing any references from it.
// It deep-copies the schema first, so it won't affect the parser's version of the schema.
func (f *Flattener) FlattenType(typ TypeIdent) *apiext.JSONSchemaProps {
	f.init()
	if cachedSchema, isCached := f.flattenedTypes[typ]; isCached {
		return &cachedSchema
	}
	baseSchema, err := f.loadUnflattenedSchema(typ)
	if err != nil {
		typ.Package.AddError(err)
		return nil
	}
	resSchema := f.FlattenSchema(*baseSchema, typ.Package)
	f.cacheType(typ, *resSchema)
	return resSchema
}

// FlattenSchema flattens the given schema, removing any references.
// It deep-copies the schema first, so the input schema won't be affected.
func (f *Flattener) FlattenSchema(baseSchema apiext.JSONSchemaProps, currentPackage *loader.Package) *apiext.JSONSchemaProps {
	resSchema := baseSchema.DeepCopy()
	EditSchema(resSchema, &flattenVisitor{
		Flattener:      f,
		currentPackage: currentPackage,
	})

	return resSchema
}

// RefParts splits a reference produced by the schema generator into its component
// type name and package name (if it's a cross-package reference). Note that
// referenced packages *must* be looked up relative to the current package.
func RefParts(ref string) (typ string, pkgName string, err error) {
	if !strings.HasPrefix(ref, defPrefix) {
		return "", "", fmt.Errorf("non-standard reference link %q", ref)
	}
	ref = ref[len(defPrefix):]
	// decode the json pointer encodings
	ref = strings.Replace(ref, "~1", "/", -1)
	ref = strings.Replace(ref, "~0", "~", -1)
	nameParts := strings.SplitN(ref, "~", 2)

	if len(nameParts) == 1 {
		// local reference
		return nameParts[0], "", nil
	}
	// cross-package reference
	return nameParts[1], nameParts[0], nil
}
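The JSON-pointer unescaping in RefParts is easiest to see on a concrete ref. A standalone sketch, with two assumptions: the `defPrefix` value (`#/definitions/`) is hard-coded here rather than taken from the package, and the example ref is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// defPrefix is assumed here; the real constant is defined elsewhere in the package.
const defPrefix = "#/definitions/"

// refParts mirrors RefParts: undo the JSON-pointer escapes (~1 -> /, ~0 -> ~),
// then split a cross-package ref of the form "<pkg path>~<TypeName>".
func refParts(ref string) (typ, pkgName string, err error) {
	if !strings.HasPrefix(ref, defPrefix) {
		return "", "", fmt.Errorf("non-standard reference link %q", ref)
	}
	ref = strings.TrimPrefix(ref, defPrefix)
	ref = strings.ReplaceAll(ref, "~1", "/")
	ref = strings.ReplaceAll(ref, "~0", "~")
	parts := strings.SplitN(ref, "~", 2)
	if len(parts) == 1 {
		return parts[0], "", nil // local reference
	}
	return parts[1], parts[0], nil // cross-package reference
}

func main() {
	typ, pkg, _ := refParts("#/definitions/k8s.io~1apimachinery~1pkg~1apis~1meta~1v1~ObjectMeta")
	fmt.Println(typ, "from", pkg)
	// prints "ObjectMeta from k8s.io/apimachinery/pkg/apis/meta/v1"
}
```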
// identFromRef converts the given schema ref from the given package back
// into the TypeIdent that it represents.
func identFromRef(ref string, contextPkg *loader.Package) (TypeIdent, error) {
	typ, pkgName, err := RefParts(ref)
	if err != nil {
		return TypeIdent{}, err
	}

	if pkgName == "" {
		// a local reference
		return TypeIdent{
			Name:    typ,
			Package: contextPkg,
		}, nil
	}

	// an external reference
	return TypeIdent{
		Name:    typ,
		Package: contextPkg.Imports()[pkgName],
	}, nil
}

// preserveFields copies documentation fields from src into dst, preserving
// field-level documentation when flattening, and preserving field-level validation
// as allOf entries.
func preserveFields(dst *apiext.JSONSchemaProps, src apiext.JSONSchemaProps) {
	srcDesc := src.Description
	srcTitle := src.Title
	srcExDoc := src.ExternalDocs
	srcEx := src.Example

	src.Description, src.Title, src.ExternalDocs, src.Example = "", "", nil, nil

	src.Ref = nil
	*dst = apiext.JSONSchemaProps{
		AllOf: []apiext.JSONSchemaProps{*dst, src},

		// keep these, in case the source field doesn't specify anything useful
		Description:  dst.Description,
		Title:        dst.Title,
		ExternalDocs: dst.ExternalDocs,
		Example:      dst.Example,
	}

	if srcDesc != "" {
		dst.Description = srcDesc
	}
	if srcTitle != "" {
		dst.Title = srcTitle
	}
	if srcExDoc != nil {
		dst.ExternalDocs = srcExDoc
	}
	if srcEx != nil {
		dst.Example = srcEx
	}
}

// flattenVisitor visits each node in the schema, recursively flattening references.
type flattenVisitor struct {
	*Flattener

	currentPackage *loader.Package
	currentType    *TypeIdent
	currentSchema  *apiext.JSONSchemaProps
	originalField  apiext.JSONSchemaProps
}

func (f *flattenVisitor) Visit(baseSchema *apiext.JSONSchemaProps) SchemaVisitor {
	if baseSchema == nil {
		// end-of-node marker, cache the results
		if f.currentType != nil {
			f.cacheType(*f.currentType, *f.currentSchema)
			// preserve field information *after* caching so that we don't
			// accidentally cache field-level information onto the schema for
			// the type in general.
			preserveFields(f.currentSchema, f.originalField)
		}
		return f
	}

	// if we get a type that's just a ref, resolve it
	if baseSchema.Ref != nil && len(*baseSchema.Ref) > 0 {
		// resolve this ref
		refIdent, err := f.LookupReference(*baseSchema.Ref, f.currentPackage)
		if err != nil {
			f.currentPackage.AddError(err)
			return nil
		}

		// load and potentially flatten the schema

		// check the cache first...
		if refSchemaCached, isCached := f.flattenedTypes[refIdent]; isCached {
			// shallow copy is fine, it's just to avoid overwriting the doc fields
			preserveFields(&refSchemaCached, *baseSchema)
			*baseSchema = refSchemaCached
			return nil // don't recurse, we're done
		}

		// ...otherwise, we need to flatten
		refSchema, err := f.loadUnflattenedSchema(refIdent)
		if err != nil {
			f.currentPackage.AddError(err)
			return nil
		}
		refSchema = refSchema.DeepCopy()

		// keep the field around to preserve field-level validation, docs, etc.
		origField := *baseSchema
		*baseSchema = *refSchema

		// avoid loops (which shouldn't exist, but just in case)
		// by marking a nil cached pointer before we start recursing
		f.cacheType(refIdent, apiext.JSONSchemaProps{})

		return &flattenVisitor{
			Flattener: f.Flattener,

			currentPackage: refIdent.Package,
			currentType:    &refIdent,
			currentSchema:  baseSchema,
			originalField:  origField,
		}
	}

	// otherwise, continue recursing...
	if f.currentType != nil {
		// ...but don't accidentally end this node early (for caching purposes)
		return &flattenVisitor{
			Flattener:      f.Flattener,
			currentPackage: f.currentPackage,
		}
	}

	return f
}
416 vendor/sigs.k8s.io/controller-tools/pkg/crd/gen.go (generated, vendored, new file)
@@ -0,0 +1,416 @@
/*
Copyright 2018 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package crd

import (
	"fmt"
	"go/ast"
	"go/types"
	"os"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextlegacy "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/runtime/schema"

	crdmarkers "sigs.k8s.io/controller-tools/pkg/crd/markers"
	"sigs.k8s.io/controller-tools/pkg/genall"
	"sigs.k8s.io/controller-tools/pkg/loader"
	"sigs.k8s.io/controller-tools/pkg/markers"
	"sigs.k8s.io/controller-tools/pkg/version"
)

// defaultVersion is the default CustomResourceDefinition version to generate.
const defaultVersion = "v1"

// +controllertools:marker:generateHelp

// Generator generates CustomResourceDefinition objects.
type Generator struct {
	// TrivialVersions indicates that we should produce a single-version CRD.
	//
	// Single "trivial-version" CRDs are compatible with older (pre 1.13)
	// Kubernetes API servers. The storage version's schema will be used as
	// the CRD's schema.
	//
	// Only works with the v1beta1 CRD version.
	TrivialVersions bool `marker:",optional"`

	// PreserveUnknownFields indicates whether or not we should turn off pruning.
	//
	// Left unspecified, it'll default to true when only a v1beta1 CRD is
	// generated (to preserve compatibility with older versions of this tool),
	// or false otherwise.
	//
	// It's required to be false for v1 CRDs.
	PreserveUnknownFields *bool `marker:",optional"`

	// AllowDangerousTypes allows types which are usually omitted from CRD generation
	// because they are not recommended.
	//
	// Currently the following additional types are allowed when this is true:
	//   float32
	//   float64
	//
	// Left unspecified, the default is false.
	AllowDangerousTypes *bool `marker:",optional"`

	// MaxDescLen specifies the maximum description length for fields in the CRD's OpenAPI schema.
	//
	// 0 indicates drop the description for all fields completely.
	// n indicates limit the description to at most n characters and truncate the description to
	// the closest sentence boundary if it exceeds n characters.
	MaxDescLen *int `marker:",optional"`

	// CRDVersions specifies the target API versions of the CRD type itself to
	// generate. Defaults to v1.
	//
	// The first version listed will be assumed to be the "default" version and
	// will not get a version suffix in the output filename.
	//
	// You'll need to use "v1" to get support for features like defaulting,
	// along with an API server that supports it (Kubernetes 1.16+).
	CRDVersions []string `marker:"crdVersions,optional"`

	// GenerateEmbeddedObjectMeta specifies whether any embedded ObjectMeta in the CRD should be generated.
	GenerateEmbeddedObjectMeta *bool `marker:",optional"`
}

func (Generator) CheckFilter() loader.NodeFilter {
	return filterTypesForCRDs
}

func (Generator) RegisterMarkers(into *markers.Registry) error {
	return crdmarkers.Register(into)
}

func (g Generator) Generate(ctx *genall.GenerationContext) error {
	parser := &Parser{
		Collector: ctx.Collector,
		Checker:   ctx.Checker,
		// Perform defaulting here to avoid ambiguity later
		AllowDangerousTypes: g.AllowDangerousTypes != nil && *g.AllowDangerousTypes,
		// Indicates to the parser whether to register the ObjectMeta type or not
		GenerateEmbeddedObjectMeta: g.GenerateEmbeddedObjectMeta != nil && *g.GenerateEmbeddedObjectMeta,
	}

	AddKnownTypes(parser)
	for _, root := range ctx.Roots {
		parser.NeedPackage(root)
	}

	metav1Pkg := FindMetav1(ctx.Roots)
	if metav1Pkg == nil {
		// no objects in the roots, since nothing imported metav1
		return nil
	}

	// TODO: allow selecting a specific object
	kubeKinds := FindKubeKinds(parser, metav1Pkg)
	if len(kubeKinds) == 0 {
		// no objects in the roots
		return nil
	}

	crdVersions := g.CRDVersions

	if len(crdVersions) == 0 {
		crdVersions = []string{defaultVersion}
	}

	for groupKind := range kubeKinds {
		parser.NeedCRDFor(groupKind, g.MaxDescLen)
		crdRaw := parser.CustomResourceDefinitions[groupKind]
		addAttribution(&crdRaw)

		// Reset the schema for the CRD's top-level metadata field regardless of the intention in the arguments
		FixTopLevelMetadata(crdRaw)

		versionedCRDs := make([]interface{}, len(crdVersions))
		for i, ver := range crdVersions {
			conv, err := AsVersion(crdRaw, schema.GroupVersion{Group: apiext.SchemeGroupVersion.Group, Version: ver})
			if err != nil {
				return err
			}
			versionedCRDs[i] = conv
		}

		if g.TrivialVersions {
			for i, crd := range versionedCRDs {
				if crdVersions[i] == "v1beta1" {
					toTrivialVersions(crd.(*apiextlegacy.CustomResourceDefinition))
				}
			}
		}

		// *If* we're only generating v1beta1 CRDs, default to `preserveUnknownFields: (unset)`
		// for compatibility purposes. In any other case, default to false, since that's
		// the sensible default and is required for v1.
		v1beta1Only := len(crdVersions) == 1 && crdVersions[0] == "v1beta1"
		switch {
		case (g.PreserveUnknownFields == nil || *g.PreserveUnknownFields) && v1beta1Only:
			crd := versionedCRDs[0].(*apiextlegacy.CustomResourceDefinition)
			crd.Spec.PreserveUnknownFields = nil
		case g.PreserveUnknownFields == nil, g.PreserveUnknownFields != nil && !*g.PreserveUnknownFields:
			// it'll be false here (coming from v1) -- leave it as such
		default:
			return fmt.Errorf("you may only set PreserveUnknownFields to true with v1beta1 CRDs")
		}

		for i, crd := range versionedCRDs {
			// defaults are not allowed to be specified in v1beta1 CRDs, and
			// descriptions are not allowed on the metadata regardless of version;
			// strip them before writing to a file
			if crdVersions[i] == "v1beta1" {
				removeDefaultsFromSchemas(crd.(*apiextlegacy.CustomResourceDefinition))
				removeDescriptionFromMetadataLegacy(crd.(*apiextlegacy.CustomResourceDefinition))
			} else {
				removeDescriptionFromMetadata(crd.(*apiext.CustomResourceDefinition))
			}
			var fileName string
			if i == 0 {
				fileName = fmt.Sprintf("%s_%s.yaml", crdRaw.Spec.Group, crdRaw.Spec.Names.Plural)
			} else {
				fileName = fmt.Sprintf("%s_%s.%s.yaml", crdRaw.Spec.Group, crdRaw.Spec.Names.Plural, crdVersions[i])
			}
			if err := ctx.WriteYAML(fileName, crd); err != nil {
				return err
			}
		}
	}

	return nil
}

func removeDescriptionFromMetadata(crd *apiext.CustomResourceDefinition) {
	for _, versionSpec := range crd.Spec.Versions {
		if versionSpec.Schema != nil {
			removeDescriptionFromMetadataProps(versionSpec.Schema.OpenAPIV3Schema)
		}
	}
}

func removeDescriptionFromMetadataProps(v *apiext.JSONSchemaProps) {
	if m, ok := v.Properties["metadata"]; ok {
		meta := &m
		if meta.Description != "" {
			meta.Description = ""
			v.Properties["metadata"] = m
		}
	}
}

func removeDescriptionFromMetadataLegacy(crd *apiextlegacy.CustomResourceDefinition) {
	if crd.Spec.Validation != nil {
		removeDescriptionFromMetadataPropsLegacy(crd.Spec.Validation.OpenAPIV3Schema)
	}
	for _, versionSpec := range crd.Spec.Versions {
		if versionSpec.Schema != nil {
			removeDescriptionFromMetadataPropsLegacy(versionSpec.Schema.OpenAPIV3Schema)
		}
	}
}

func removeDescriptionFromMetadataPropsLegacy(v *apiextlegacy.JSONSchemaProps) {
	if m, ok := v.Properties["metadata"]; ok {
		meta := &m
		if meta.Description != "" {
			meta.Description = ""
			v.Properties["metadata"] = m
		}
	}
}

// removeDefaultsFromSchemas will remove all instances of default values being
// specified across all defined API versions
func removeDefaultsFromSchemas(crd *apiextlegacy.CustomResourceDefinition) {
	if crd.Spec.Validation != nil {
		removeDefaultsFromSchemaProps(crd.Spec.Validation.OpenAPIV3Schema)
	}

	for _, versionSpec := range crd.Spec.Versions {
		if versionSpec.Schema != nil {
			removeDefaultsFromSchemaProps(versionSpec.Schema.OpenAPIV3Schema)
		}
	}
}

// removeDefaultsFromSchemaProps will recurse into JSONSchemaProps to remove
// all instances of default values being specified
func removeDefaultsFromSchemaProps(v *apiextlegacy.JSONSchemaProps) {
	if v == nil {
		return
	}

	if v.Default != nil {
		fmt.Fprintln(os.Stderr, "Warning: default unsupported in CRD version v1beta1, v1 required. Removing defaults.")
	}

	// nil-out the default field
	v.Default = nil
	for name, prop := range v.Properties {
		// iter var reference is fine -- we handle the persistence of the modifications on the line below
		//nolint:gosec
		removeDefaultsFromSchemaProps(&prop)
		v.Properties[name] = prop
	}
	if v.Items != nil {
		removeDefaultsFromSchemaProps(v.Items.Schema)
		for i := range v.Items.JSONSchemas {
			props := v.Items.JSONSchemas[i]
			removeDefaultsFromSchemaProps(&props)
			v.Items.JSONSchemas[i] = props
		}
	}
}
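The copy-out/write-back dance in removeDefaultsFromSchemaProps exists because Go map values of struct type are not addressable: you cannot mutate them through the map directly. A minimal sketch with a hypothetical `schemaProps` type:

```go
package main

import "fmt"

type schemaProps struct {
	Default *string
	Type    string
}

// clearDefaults mirrors the pattern: copy each map value out, mutate the
// copy, and store it back (props[name] = prop), since &props[name] is illegal.
func clearDefaults(props map[string]schemaProps) {
	for name, prop := range props {
		prop.Default = nil
		props[name] = prop
	}
}

func main() {
	d := "3"
	props := map[string]schemaProps{"replicas": {Default: &d, Type: "integer"}}
	clearDefaults(props)
	fmt.Println(props["replicas"].Default == nil)
	// prints "true"
}
```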
// FixTopLevelMetadata resets the schema for the top-level metadata field,
// which is needed for CRD validation.
func FixTopLevelMetadata(crd apiext.CustomResourceDefinition) {
	for _, v := range crd.Spec.Versions {
		if v.Schema != nil && v.Schema.OpenAPIV3Schema != nil && v.Schema.OpenAPIV3Schema.Properties != nil {
			schemaProperties := v.Schema.OpenAPIV3Schema.Properties
			if _, ok := schemaProperties["metadata"]; ok {
				schemaProperties["metadata"] = apiext.JSONSchemaProps{Type: "object"}
			}
		}
	}
}

// toTrivialVersions strips out all schemata except for the storage schema,
// and moves that up into the root object. This makes the CRD compatible
// with pre-1.13 clusters.
func toTrivialVersions(crd *apiextlegacy.CustomResourceDefinition) {
	var canonicalSchema *apiextlegacy.CustomResourceValidation
	var canonicalSubresources *apiextlegacy.CustomResourceSubresources
	var canonicalColumns []apiextlegacy.CustomResourceColumnDefinition
	for i, ver := range crd.Spec.Versions {
		if ver.Storage {
			canonicalSchema = ver.Schema
			canonicalSubresources = ver.Subresources
			canonicalColumns = ver.AdditionalPrinterColumns
		}
		crd.Spec.Versions[i].Schema = nil
		crd.Spec.Versions[i].Subresources = nil
		crd.Spec.Versions[i].AdditionalPrinterColumns = nil
	}
	if canonicalSchema == nil {
		return
	}

	crd.Spec.Validation = canonicalSchema
	crd.Spec.Subresources = canonicalSubresources
	crd.Spec.AdditionalPrinterColumns = canonicalColumns
}

// addAttribution adds attribution info to indicate that the controller-gen
// tool was used to generate this CRD definition, along with the version info.
func addAttribution(crd *apiext.CustomResourceDefinition) {
	if crd.ObjectMeta.Annotations == nil {
		crd.ObjectMeta.Annotations = map[string]string{}
	}
	crd.ObjectMeta.Annotations["controller-gen.kubebuilder.io/version"] = version.Version()
}

// FindMetav1 locates the actual package representing metav1 amongst
// the imports of the roots.
func FindMetav1(roots []*loader.Package) *loader.Package {
	for _, root := range roots {
		pkg := root.Imports()["k8s.io/apimachinery/pkg/apis/meta/v1"]
		if pkg != nil {
			return pkg
		}
	}
	return nil
}

// FindKubeKinds locates all types that contain TypeMeta and ObjectMeta
// (and thus may be a Kubernetes object), and returns the corresponding
// group-kinds.
func FindKubeKinds(parser *Parser, metav1Pkg *loader.Package) map[schema.GroupKind]struct{} {
	// TODO(directxman12): technically, we should be finding metav1 per-package
	kubeKinds := map[schema.GroupKind]struct{}{}
	for typeIdent, info := range parser.Types {
		hasObjectMeta := false
		hasTypeMeta := false

		pkg := typeIdent.Package
		pkg.NeedTypesInfo()
		typesInfo := pkg.TypesInfo

		for _, field := range info.Fields {
			if field.Name != "" {
				// type and object meta are embedded,
				// so they can't be this
				continue
			}

			fieldType := typesInfo.TypeOf(field.RawField.Type)
			namedField, isNamed := fieldType.(*types.Named)
			if !isNamed {
				// ObjectMeta and TypeMeta are named types
				continue
			}
			if namedField.Obj().Pkg() == nil {
				// Embedded non-builtin universe type (specifically, it's probably `error`),
				// so it can't be ObjectMeta or TypeMeta
				continue
			}
			fieldPkgPath := loader.NonVendorPath(namedField.Obj().Pkg().Path())
			fieldPkg := pkg.Imports()[fieldPkgPath]
			if fieldPkg != metav1Pkg {
				continue
			}

			switch namedField.Obj().Name() {
			case "ObjectMeta":
				hasObjectMeta = true
			case "TypeMeta":
				hasTypeMeta = true
			}
		}

		if !hasObjectMeta || !hasTypeMeta {
			continue
		}

		groupKind := schema.GroupKind{
			Group: parser.GroupVersions[pkg].Group,
			Kind:  typeIdent.Name,
		}
		kubeKinds[groupKind] = struct{}{}
	}
return kubeKinds
|
||||
}
|
||||
|
||||
// filterTypesForCRDs filters out all nodes that aren't used in CRD generation,
|
||||
// like interfaces and struct fields without JSON tag.
|
||||
func filterTypesForCRDs(node ast.Node) bool {
|
||||
switch node := node.(type) {
|
||||
case *ast.InterfaceType:
|
||||
// skip interfaces, we never care about references in them
|
||||
return false
|
||||
case *ast.StructType:
|
||||
return true
|
||||
case *ast.Field:
|
||||
_, hasTag := loader.ParseAstTag(node.Tag).Lookup("json")
|
||||
// fields without JSON tags mean we have custom serialization,
|
||||
// so only visit fields with tags.
|
||||
return hasTag
|
||||
default:
|
||||
return true
|
||||
}
|
||||
}
|
179 vendor/sigs.k8s.io/controller-tools/pkg/crd/known_types.go generated vendored Normal file
@@ -0,0 +1,179 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package crd

import (
	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

	"sigs.k8s.io/controller-tools/pkg/loader"
)

// KnownPackages overrides types in some common packages that have custom validation
// but don't have validation markers on them (since they're from core Kubernetes).
var KnownPackages = map[string]PackageOverride{
	"k8s.io/api/core/v1": func(p *Parser, pkg *loader.Package) {
		// Explicit defaulting for the corev1.Protocol type in lieu of https://github.com/kubernetes/enhancements/pull/1928
		p.Schemata[TypeIdent{Name: "Protocol", Package: pkg}] = apiext.JSONSchemaProps{
			Type:    "string",
			Default: &apiext.JSON{Raw: []byte(`"TCP"`)},
		}
		p.AddPackage(pkg)
	},

	"k8s.io/apimachinery/pkg/apis/meta/v1": func(p *Parser, pkg *loader.Package) {
		p.Schemata[TypeIdent{Name: "ObjectMeta", Package: pkg}] = apiext.JSONSchemaProps{
			Type: "object",
		}
		p.Schemata[TypeIdent{Name: "Time", Package: pkg}] = apiext.JSONSchemaProps{
			Type:   "string",
			Format: "date-time",
		}
		p.Schemata[TypeIdent{Name: "MicroTime", Package: pkg}] = apiext.JSONSchemaProps{
			Type:   "string",
			Format: "date-time",
		}
		p.Schemata[TypeIdent{Name: "Duration", Package: pkg}] = apiext.JSONSchemaProps{
			// TODO(directxman12): regexp validation for this (or get kube to support it as a format value)
			Type: "string",
		}
		p.Schemata[TypeIdent{Name: "Fields", Package: pkg}] = apiext.JSONSchemaProps{
			// this is a recursive structure that can't be flattened or, for that matter, properly generated.
			// so just treat it as an arbitrary map
			Type:                 "object",
			AdditionalProperties: &apiext.JSONSchemaPropsOrBool{Allows: true},
		}
		p.AddPackage(pkg) // get the rest of the types
	},

	"k8s.io/apimachinery/pkg/api/resource": func(p *Parser, pkg *loader.Package) {
		p.Schemata[TypeIdent{Name: "Quantity", Package: pkg}] = apiext.JSONSchemaProps{
			// TODO(directxman12): regexp validation for this (or get kube to support it as a format value)
			XIntOrString: true,
			AnyOf: []apiext.JSONSchemaProps{
				{Type: "integer"},
				{Type: "string"},
			},
			Pattern: "^(\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\\+|-)?(([0-9]+(\\.[0-9]*)?)|(\\.[0-9]+))))?$",
		}
		// No point in calling AddPackage, this is the sole inhabitant
	},

	"k8s.io/apimachinery/pkg/runtime": func(p *Parser, pkg *loader.Package) {
		p.Schemata[TypeIdent{Name: "RawExtension", Package: pkg}] = apiext.JSONSchemaProps{
			// TODO(directxman12): regexp validation for this (or get kube to support it as a format value)
			Type: "object",
		}
		p.AddPackage(pkg) // get the rest of the types
	},

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured": func(p *Parser, pkg *loader.Package) {
		p.Schemata[TypeIdent{Name: "Unstructured", Package: pkg}] = apiext.JSONSchemaProps{
			Type: "object",
		}
		p.AddPackage(pkg) // get the rest of the types
	},

	"k8s.io/apimachinery/pkg/util/intstr": func(p *Parser, pkg *loader.Package) {
		p.Schemata[TypeIdent{Name: "IntOrString", Package: pkg}] = apiext.JSONSchemaProps{
			XIntOrString: true,
			AnyOf: []apiext.JSONSchemaProps{
				{Type: "integer"},
				{Type: "string"},
			},
		}
		// No point in calling AddPackage, this is the sole inhabitant
	},

	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1": func(p *Parser, pkg *loader.Package) {
		p.Schemata[TypeIdent{Name: "JSON", Package: pkg}] = apiext.JSONSchemaProps{
			XPreserveUnknownFields: boolPtr(true),
		}
		p.AddPackage(pkg) // get the rest of the types
	},
	"k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1": func(p *Parser, pkg *loader.Package) {
		p.Schemata[TypeIdent{Name: "JSON", Package: pkg}] = apiext.JSONSchemaProps{
			XPreserveUnknownFields: boolPtr(true),
		}
		p.AddPackage(pkg) // get the rest of the types
	},
}

// ObjectMetaPackages overrides the ObjectMeta in all types
var ObjectMetaPackages = map[string]PackageOverride{
	"k8s.io/apimachinery/pkg/apis/meta/v1": func(p *Parser, pkg *loader.Package) {
		// execute the KnownPackages override for `k8s.io/apimachinery/pkg/apis/meta/v1`, if any
		if f, ok := KnownPackages["k8s.io/apimachinery/pkg/apis/meta/v1"]; ok {
			f(p, pkg)
		}
		// This is an allow-listed set of properties of ObjectMeta; other runtime properties are not part of this list.
		// See more discussion: https://github.com/kubernetes-sigs/controller-tools/pull/395#issuecomment-691919433
		p.Schemata[TypeIdent{Name: "ObjectMeta", Package: pkg}] = apiext.JSONSchemaProps{
			Type: "object",
			Properties: map[string]apiext.JSONSchemaProps{
				"name": {
					Type: "string",
				},
				"namespace": {
					Type: "string",
				},
				"annotations": {
					Type: "object",
					AdditionalProperties: &apiext.JSONSchemaPropsOrBool{
						Schema: &apiext.JSONSchemaProps{
							Type: "string",
						},
					},
				},
				"labels": {
					Type: "object",
					AdditionalProperties: &apiext.JSONSchemaPropsOrBool{
						Schema: &apiext.JSONSchemaProps{
							Type: "string",
						},
					},
				},
				"finalizers": {
					Type: "array",
					Items: &apiext.JSONSchemaPropsOrArray{
						Schema: &apiext.JSONSchemaProps{
							Type: "string",
						},
					},
				},
			},
		}
	},
}

func boolPtr(b bool) *bool {
	return &b
}

// AddKnownTypes registers the package overrides in KnownPackages with the given parser.
func AddKnownTypes(parser *Parser) {
	// ensure everything is there before adding to PackageOverrides
	// TODO(directxman12): this is a bit of a hack, maybe just use constructors?
	parser.init()
	for pkgName, override := range KnownPackages {
		parser.PackageOverrides[pkgName] = override
	}
	// if we want to generate the embedded ObjectMeta in the CRD we need to add the ObjectMetaPackages
	if parser.GenerateEmbeddedObjectMeta {
		for pkgName, override := range ObjectMetaPackages {
			parser.PackageOverrides[pkgName] = override
		}
	}
}
347 vendor/sigs.k8s.io/controller-tools/pkg/crd/markers/crd.go generated vendored Normal file
@@ -0,0 +1,347 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"fmt"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

	"sigs.k8s.io/controller-tools/pkg/markers"
)

// CRDMarkers lists all markers that directly modify the CRD (not validation
// schemas).
var CRDMarkers = []*definitionWithHelp{
	// TODO(directxman12): more detailed help
	must(markers.MakeDefinition("kubebuilder:subresource:status", markers.DescribesType, SubresourceStatus{})).
		WithHelp(SubresourceStatus{}.Help()),

	must(markers.MakeDefinition("kubebuilder:subresource:scale", markers.DescribesType, SubresourceScale{})).
		WithHelp(SubresourceScale{}.Help()),

	must(markers.MakeDefinition("kubebuilder:printcolumn", markers.DescribesType, PrintColumn{})).
		WithHelp(PrintColumn{}.Help()),

	must(markers.MakeDefinition("kubebuilder:resource", markers.DescribesType, Resource{})).
		WithHelp(Resource{}.Help()),

	must(markers.MakeDefinition("kubebuilder:storageversion", markers.DescribesType, StorageVersion{})).
		WithHelp(StorageVersion{}.Help()),

	must(markers.MakeDefinition("kubebuilder:skipversion", markers.DescribesType, SkipVersion{})).
		WithHelp(SkipVersion{}.Help()),

	must(markers.MakeDefinition("kubebuilder:unservedversion", markers.DescribesType, UnservedVersion{})).
		WithHelp(UnservedVersion{}.Help()),

	must(markers.MakeDefinition("kubebuilder:deprecatedversion", markers.DescribesType, DeprecatedVersion{})).
		WithHelp(DeprecatedVersion{}.Help()),
}

// TODO: categories and singular used to be annotations types
// TODO: doc

func init() {
	AllDefinitions = append(AllDefinitions, CRDMarkers...)
}

// +controllertools:marker:generateHelp:category=CRD

// SubresourceStatus enables the "/status" subresource on a CRD.
type SubresourceStatus struct{}

func (s SubresourceStatus) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	var subresources *apiext.CustomResourceSubresources
	for i := range crd.Versions {
		ver := &crd.Versions[i]
		if ver.Name != version {
			continue
		}
		if ver.Subresources == nil {
			ver.Subresources = &apiext.CustomResourceSubresources{}
		}
		subresources = ver.Subresources
		break
	}
	if subresources == nil {
		return fmt.Errorf("status subresource applied to version %q not in CRD", version)
	}
	subresources.Status = &apiext.CustomResourceSubresourceStatus{}
	return nil
}

// +controllertools:marker:generateHelp:category=CRD

// SubresourceScale enables the "/scale" subresource on a CRD.
type SubresourceScale struct {
	// marker names are leftover legacy cruft

	// SpecPath specifies the jsonpath to the replicas field for the scale's spec.
	SpecPath string `marker:"specpath"`

	// StatusPath specifies the jsonpath to the replicas field for the scale's status.
	StatusPath string `marker:"statuspath"`

	// SelectorPath specifies the jsonpath to the pod label selector field for the scale's status.
	//
	// The selector field must be the *string* form (serialized form) of a selector.
	// Setting a pod label selector is necessary for your type to work with the HorizontalPodAutoscaler.
	SelectorPath *string `marker:"selectorpath"`
}

func (s SubresourceScale) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	var subresources *apiext.CustomResourceSubresources
	for i := range crd.Versions {
		ver := &crd.Versions[i]
		if ver.Name != version {
			continue
		}
		if ver.Subresources == nil {
			ver.Subresources = &apiext.CustomResourceSubresources{}
		}
		subresources = ver.Subresources
		break
	}
	if subresources == nil {
		return fmt.Errorf("scale subresource applied to version %q not in CRD", version)
	}
	subresources.Scale = &apiext.CustomResourceSubresourceScale{
		SpecReplicasPath:   s.SpecPath,
		StatusReplicasPath: s.StatusPath,
		LabelSelectorPath:  s.SelectorPath,
	}
	return nil
}

// +controllertools:marker:generateHelp:category=CRD

// StorageVersion marks this version as the "storage version" for the CRD for conversion.
//
// When conversion is enabled for a CRD (i.e. it's not a trivial-versions/single-version CRD),
// one version is set as the "storage version" to be stored in etcd. Attempting to store any
// other version will result in conversion to the storage version via a conversion webhook.
type StorageVersion struct{}

func (s StorageVersion) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	if version == "" {
		// single-version, do nothing
		return nil
	}
	// multi-version
	for i := range crd.Versions {
		ver := &crd.Versions[i]
		if ver.Name != version {
			continue
		}
		ver.Storage = true
		break
	}
	return nil
}

// +controllertools:marker:generateHelp:category=CRD

// SkipVersion removes the particular version of the CRD from the CRDs spec.
//
// This is useful if you need to skip generating and listing version entries
// for 'internal' resource versions, which typically exist if using the
// Kubernetes upstream conversion-gen tool.
type SkipVersion struct{}

func (s SkipVersion) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	if version == "" {
		// single-version, this is an invalid state
		return fmt.Errorf("cannot skip a version if there is only a single version")
	}
	var versions []apiext.CustomResourceDefinitionVersion
	// multi-version
	for i := range crd.Versions {
		ver := crd.Versions[i]
		if ver.Name == version {
			// skip the skipped version
			continue
		}
		versions = append(versions, ver)
	}
	crd.Versions = versions
	return nil
}

// +controllertools:marker:generateHelp:category=CRD

// PrintColumn adds a column to "kubectl get" output for this CRD.
type PrintColumn struct {
	// Name specifies the name of the column.
	Name string

	// Type indicates the type of the column.
	//
	// It may be any OpenAPI data type listed at
	// https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types.
	Type string

	// JSONPath specifies the jsonpath expression used to extract the value of the column.
	JSONPath string `marker:"JSONPath"` // legacy cruft

	// Description specifies the help/description for this column.
	Description string `marker:",optional"`

	// Format specifies the format of the column.
	//
	// It may be any OpenAPI data format corresponding to the type, listed at
	// https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types.
	Format string `marker:",optional"`

	// Priority indicates how important it is that this column be displayed.
	//
	// Lower priority (*higher* numbered) columns will be hidden if the terminal
	// width is too small.
	Priority int32 `marker:",optional"`
}

func (s PrintColumn) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	var columns *[]apiext.CustomResourceColumnDefinition
	for i := range crd.Versions {
		ver := &crd.Versions[i]
		if ver.Name != version {
			continue
		}
		if ver.Subresources == nil {
			ver.Subresources = &apiext.CustomResourceSubresources{}
		}
		columns = &ver.AdditionalPrinterColumns
		break
	}
	if columns == nil {
		return fmt.Errorf("printer columns applied to version %q not in CRD", version)
	}

	*columns = append(*columns, apiext.CustomResourceColumnDefinition{
		Name:        s.Name,
		Type:        s.Type,
		JSONPath:    s.JSONPath,
		Description: s.Description,
		Format:      s.Format,
		Priority:    s.Priority,
	})

	return nil
}

// +controllertools:marker:generateHelp:category=CRD

// Resource configures naming and scope for a CRD.
type Resource struct {
	// Path specifies the plural "resource" for this CRD.
	//
	// It generally corresponds to a plural, lower-cased version of the Kind.
	// See https://book.kubebuilder.io/cronjob-tutorial/gvks.html.
	Path string `marker:",optional"`

	// ShortName specifies aliases for this CRD.
	//
	// Short names are often used when people have to work with your resource
	// over and over again. For instance, "rs" for "replicaset" or
	// "crd" for customresourcedefinition.
	ShortName []string `marker:",optional"`

	// Categories specifies which group aliases this resource is part of.
	//
	// Group aliases are used to work with groups of resources at once.
	// The most common one is "all" which covers about a third of the base
	// resources in Kubernetes, and is generally used for "user-facing" resources.
	Categories []string `marker:",optional"`

	// Singular overrides the singular form of your resource.
	//
	// The singular form is otherwise defaulted off the plural (path).
	Singular string `marker:",optional"`

	// Scope overrides the scope of the CRD (Cluster vs Namespaced).
	//
	// Scope defaults to "Namespaced". Cluster-scoped ("Cluster") resources
	// don't exist in namespaces.
	Scope string `marker:",optional"`
}

func (s Resource) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	if s.Path != "" {
		crd.Names.Plural = s.Path
	}
	if s.Singular != "" {
		crd.Names.Singular = s.Singular
	}
	crd.Names.ShortNames = s.ShortName
	crd.Names.Categories = s.Categories

	switch s.Scope {
	case "":
		crd.Scope = apiext.NamespaceScoped
	default:
		crd.Scope = apiext.ResourceScope(s.Scope)
	}

	return nil
}

// +controllertools:marker:generateHelp:category=CRD

// UnservedVersion does not serve this version.
//
// This is useful if you need to drop support for a version in favor of a newer version.
type UnservedVersion struct{}

func (s UnservedVersion) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	for i := range crd.Versions {
		ver := &crd.Versions[i]
		if ver.Name != version {
			continue
		}
		ver.Served = false
		break
	}
	return nil
}

// NB(directxman12): singular was historically distinct, so we keep it here for backwards compat

// +controllertools:marker:generateHelp:category=CRD

// DeprecatedVersion marks this version as deprecated.
type DeprecatedVersion struct {
	// Warning message to be shown on the deprecated version
	Warning *string `marker:",optional"`
}

func (s DeprecatedVersion) ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error {
	if version == "" {
		// single-version, do nothing
		return nil
	}
	// multi-version
	for i := range crd.Versions {
		ver := &crd.Versions[i]
		if ver.Name != version {
			continue
		}
		ver.Deprecated = true
		ver.DeprecationWarning = s.Warning
		break
	}
	return nil
}
46 vendor/sigs.k8s.io/controller-tools/pkg/crd/markers/doc.go generated vendored Normal file
@@ -0,0 +1,46 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package markers defines markers for generating schema validation
// and CRD structure.
//
// All markers related to CRD generation live in AllDefinitions.
//
// Validation Markers
//
// Validation markers have values that implement ApplyToSchema
// (crd.SchemaMarker). Any marker implementing this will automatically
// be run after the rest of a given schema node has been generated.
// Markers that need to be run before any other markers can also
// implement ApplyFirst, but this is discouraged and may change
// in the future.
//
// All validation markers start with "+kubebuilder:validation", and
// have the same name as their type name.
//
// CRD Markers
//
// Markers that modify anything in the CRD itself *except* for the schema
// implement ApplyToCRD (crd.CRDMarker). They are expected to detect whether
// they should apply themselves to a specific version in the CRD (as passed to
// them), or to the root-level CRD for legacy cases. They are applied *after*
// the rest of the CRD is computed.
//
// Misc
//
// This package also defines the "+groupName" and "+versionName" package-level
// markers, for defining package<->group-version mappings.
package markers
40 vendor/sigs.k8s.io/controller-tools/pkg/crd/markers/package.go generated vendored Normal file
@@ -0,0 +1,40 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func init() {
	AllDefinitions = append(AllDefinitions,
		must(markers.MakeDefinition("groupName", markers.DescribesPackage, "")).
			WithHelp(markers.SimpleHelp("CRD", "specifies the API group name for this package.")),

		must(markers.MakeDefinition("versionName", markers.DescribesPackage, "")).
			WithHelp(markers.SimpleHelp("CRD", "overrides the API group version for this package (defaults to the package name).")),

		must(markers.MakeDefinition("kubebuilder:validation:Optional", markers.DescribesPackage, struct{}{})).
			WithHelp(markers.SimpleHelp("CRD validation", "specifies that all fields in this package are optional by default.")),

		must(markers.MakeDefinition("kubebuilder:validation:Required", markers.DescribesPackage, struct{}{})).
			WithHelp(markers.SimpleHelp("CRD validation", "specifies that all fields in this package are required by default.")),

		must(markers.MakeDefinition("kubebuilder:skip", markers.DescribesPackage, struct{}{})).
			WithHelp(markers.SimpleHelp("CRD", "don't consider this package as an API version.")),
	)
}
83 vendor/sigs.k8s.io/controller-tools/pkg/crd/markers/register.go generated vendored Normal file
@@ -0,0 +1,83 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"reflect"

	"sigs.k8s.io/controller-tools/pkg/markers"
)

type definitionWithHelp struct {
	*markers.Definition
	Help *markers.DefinitionHelp
}

func (d *definitionWithHelp) WithHelp(help *markers.DefinitionHelp) *definitionWithHelp {
	d.Help = help
	return d
}

func (d *definitionWithHelp) Register(reg *markers.Registry) error {
	if err := reg.Register(d.Definition); err != nil {
		return err
	}
	if d.Help != nil {
		reg.AddHelp(d.Definition, d.Help)
	}
	return nil
}

func must(def *markers.Definition, err error) *definitionWithHelp {
	return &definitionWithHelp{
		Definition: markers.Must(def, err),
	}
}

// AllDefinitions contains all marker definitions for this package.
var AllDefinitions []*definitionWithHelp

type hasHelp interface {
	Help() *markers.DefinitionHelp
}

// mustMakeAllWithPrefix converts each object into a marker definition using
// the object's type name with the prefix to form the marker name.
func mustMakeAllWithPrefix(prefix string, target markers.TargetType, objs ...interface{}) []*definitionWithHelp {
	defs := make([]*definitionWithHelp, len(objs))
	for i, obj := range objs {
		name := prefix + ":" + reflect.TypeOf(obj).Name()
		def, err := markers.MakeDefinition(name, target, obj)
		if err != nil {
			panic(err)
		}
		defs[i] = &definitionWithHelp{Definition: def, Help: obj.(hasHelp).Help()}
	}

	return defs
}

// Register registers all definitions for CRD generation to the given registry.
func Register(reg *markers.Registry) error {
	for _, def := range AllDefinitions {
		if err := def.Register(reg); err != nil {
			return err
		}
	}

	return nil
}
155 vendor/sigs.k8s.io/controller-tools/pkg/crd/markers/topology.go generated vendored Normal file
@@ -0,0 +1,155 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"fmt"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"sigs.k8s.io/controller-tools/pkg/markers"
)

// TopologyMarkers specify topology markers (i.e. markers that describe if a
// list behaves as an associative-list or a set, if a map is atomic or not).
var TopologyMarkers = []*definitionWithHelp{
	must(markers.MakeDefinition("listMapKey", markers.DescribesField, ListMapKey(""))).
		WithHelp(ListMapKey("").Help()),
	must(markers.MakeDefinition("listType", markers.DescribesField, ListType(""))).
		WithHelp(ListType("").Help()),
	must(markers.MakeDefinition("mapType", markers.DescribesField, MapType(""))).
		WithHelp(MapType("").Help()),
	must(markers.MakeDefinition("structType", markers.DescribesField, StructType(""))).
		WithHelp(StructType("").Help()),
}

func init() {
	AllDefinitions = append(AllDefinitions, TopologyMarkers...)
}

// +controllertools:marker:generateHelp:category="CRD processing"

// ListType specifies the type of data-structure that the list
// represents (map, set, atomic).
//
// Possible data-structure types of a list are:
//
// - "map": it needs to have a key field, which will be used to build an
//   associative list. A typical example is the pod container list,
//   which is indexed by the container name.
//
// - "set": Fields need to be "scalar", and there can be only one
//   occurrence of each.
//
// - "atomic": All the fields in the list are treated as a single value,
//   and are typically manipulated together by the same actor.
type ListType string

// +controllertools:marker:generateHelp:category="CRD processing"

// ListMapKey specifies the keys to map listTypes.
//
// It indicates the index of a map list. They can be repeated if multiple keys
// must be used. It can only be used when ListType is set to map, and the keys
// should be scalar types.
type ListMapKey string

// +controllertools:marker:generateHelp:category="CRD processing"

// MapType specifies the level of atomicity of the map;
// i.e. whether each item in the map is independent of the others,
// or all fields are treated as a single unit.
//
// Possible values:
//
// - "granular": items in the map are independent of each other,
//   and can be manipulated by different actors.
//   This is the default behavior.
//
// - "atomic": all fields are treated as one unit.
//   Any changes have to replace the entire map.
type MapType string

// +controllertools:marker:generateHelp:category="CRD processing"

// StructType specifies the level of atomicity of the struct;
// i.e. whether each field in the struct is independent of the others,
// or all fields are treated as a single unit.
//
// Possible values:
//
// - "granular": fields in the struct are independent of each other,
//   and can be manipulated by different actors.
//   This is the default behavior.
//
// - "atomic": all fields are treated as one unit.
//   Any changes have to replace the entire struct.
type StructType string

func (l ListType) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "array" {
		return fmt.Errorf("must apply listType to an array")
	}
	if l != "map" && l != "atomic" && l != "set" {
		return fmt.Errorf(`ListType must be either "map", "set" or "atomic"`)
	}
	p := string(l)
	schema.XListType = &p
	return nil
}

func (l ListType) ApplyFirst() {}

func (l ListMapKey) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "array" {
		return fmt.Errorf("must apply listMapKey to an array")
	}
	if schema.XListType == nil || *schema.XListType != "map" {
		return fmt.Errorf("must apply listMapKey to an associative-list")
	}
	schema.XListMapKeys = append(schema.XListMapKeys, string(l))
	return nil
}

func (m MapType) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "object" {
		return fmt.Errorf("must apply mapType to an object")
	}

	if m != "atomic" && m != "granular" {
		return fmt.Errorf(`MapType must be either "granular" or "atomic"`)
	}

	p := string(m)
	schema.XMapType = &p

	return nil
}

func (s StructType) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "object" && schema.Type != "" {
		return fmt.Errorf("must apply structType to an object; either explicitly set or defaulted through an empty schema type")
	}

	if s != "atomic" && s != "granular" {
		return fmt.Errorf(`StructType must be either "granular" or "atomic"`)
	}

	p := string(s)
	schema.XMapType = &p

	return nil
}
408
vendor/sigs.k8s.io/controller-tools/pkg/crd/markers/validation.go
generated
vendored
Normal file
@@ -0,0 +1,408 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"fmt"

	"encoding/json"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"

	"sigs.k8s.io/controller-tools/pkg/markers"
)

const (
	SchemalessName = "kubebuilder:validation:Schemaless"
)

// ValidationMarkers lists all available markers that affect CRD schema generation,
// except for the few that don't make sense as type-level markers (see FieldOnlyMarkers).
// All markers start with `+kubebuilder:validation:`, and continue with their type name.
// A copy is produced of all markers that describe types as well, for making types
// reusable and writing complex validations on slice items.
var ValidationMarkers = mustMakeAllWithPrefix("kubebuilder:validation", markers.DescribesField,

	// integer markers

	Maximum(0),
	Minimum(0),
	ExclusiveMaximum(false),
	ExclusiveMinimum(false),
	MultipleOf(0),
	MinProperties(0),
	MaxProperties(0),

	// string markers

	MaxLength(0),
	MinLength(0),
	Pattern(""),

	// slice markers

	MaxItems(0),
	MinItems(0),
	UniqueItems(false),

	// general markers

	Enum(nil),
	Format(""),
	Type(""),
	XPreserveUnknownFields{},
	XEmbeddedResource{},
)

// FieldOnlyMarkers list field-specific validation markers (i.e. those markers that don't make
// sense on a type, and thus aren't in ValidationMarkers).
var FieldOnlyMarkers = []*definitionWithHelp{
	must(markers.MakeDefinition("kubebuilder:validation:Required", markers.DescribesField, struct{}{})).
		WithHelp(markers.SimpleHelp("CRD validation", "specifies that this field is required, if fields are optional by default.")),
	must(markers.MakeDefinition("kubebuilder:validation:Optional", markers.DescribesField, struct{}{})).
		WithHelp(markers.SimpleHelp("CRD validation", "specifies that this field is optional, if fields are required by default.")),
	must(markers.MakeDefinition("optional", markers.DescribesField, struct{}{})).
		WithHelp(markers.SimpleHelp("CRD validation", "specifies that this field is optional, if fields are required by default.")),

	must(markers.MakeDefinition("nullable", markers.DescribesField, Nullable{})).
		WithHelp(Nullable{}.Help()),

	must(markers.MakeAnyTypeDefinition("kubebuilder:default", markers.DescribesField, Default{})).
		WithHelp(Default{}.Help()),

	must(markers.MakeDefinition("kubebuilder:validation:EmbeddedResource", markers.DescribesField, XEmbeddedResource{})).
		WithHelp(XEmbeddedResource{}.Help()),

	must(markers.MakeDefinition(SchemalessName, markers.DescribesField, Schemaless{})).
		WithHelp(Schemaless{}.Help()),
}

// ValidationIshMarkers are field-and-type markers that don't fall under the
// :validation: prefix, and/or don't have a name that directly matches their
// type.
var ValidationIshMarkers = []*definitionWithHelp{
	must(markers.MakeDefinition("kubebuilder:pruning:PreserveUnknownFields", markers.DescribesField, XPreserveUnknownFields{})).
		WithHelp(XPreserveUnknownFields{}.Help()),
	must(markers.MakeDefinition("kubebuilder:pruning:PreserveUnknownFields", markers.DescribesType, XPreserveUnknownFields{})).
		WithHelp(XPreserveUnknownFields{}.Help()),
}

func init() {
	AllDefinitions = append(AllDefinitions, ValidationMarkers...)

	for _, def := range ValidationMarkers {
		newDef := *def.Definition
		// copy both parts so we don't change the definition
		typDef := definitionWithHelp{
			Definition: &newDef,
			Help:       def.Help,
		}
		typDef.Target = markers.DescribesType
		AllDefinitions = append(AllDefinitions, &typDef)
	}

	AllDefinitions = append(AllDefinitions, FieldOnlyMarkers...)
	AllDefinitions = append(AllDefinitions, ValidationIshMarkers...)
}

// +controllertools:marker:generateHelp:category="CRD validation"
// Maximum specifies the maximum numeric value that this field can have.
type Maximum int

// +controllertools:marker:generateHelp:category="CRD validation"
// Minimum specifies the minimum numeric value that this field can have. Negative integers are supported.
type Minimum int

// +controllertools:marker:generateHelp:category="CRD validation"
// ExclusiveMinimum indicates that the minimum is "up to" but not including that value.
type ExclusiveMinimum bool

// +controllertools:marker:generateHelp:category="CRD validation"
// ExclusiveMaximum indicates that the maximum is "up to" but not including that value.
type ExclusiveMaximum bool

// +controllertools:marker:generateHelp:category="CRD validation"
// MultipleOf specifies that this field must have a numeric value that's a multiple of this one.
type MultipleOf int

// +controllertools:marker:generateHelp:category="CRD validation"
// MaxLength specifies the maximum length for this string.
type MaxLength int

// +controllertools:marker:generateHelp:category="CRD validation"
// MinLength specifies the minimum length for this string.
type MinLength int

// +controllertools:marker:generateHelp:category="CRD validation"
// Pattern specifies that this string must match the given regular expression.
type Pattern string

// +controllertools:marker:generateHelp:category="CRD validation"
// MaxItems specifies the maximum length for this list.
type MaxItems int

// +controllertools:marker:generateHelp:category="CRD validation"
// MinItems specifies the minimum length for this list.
type MinItems int

// +controllertools:marker:generateHelp:category="CRD validation"
// UniqueItems specifies that all items in this list must be unique.
type UniqueItems bool

// +controllertools:marker:generateHelp:category="CRD validation"
// MaxProperties restricts the number of keys in an object.
type MaxProperties int

// +controllertools:marker:generateHelp:category="CRD validation"
// MinProperties restricts the number of keys in an object.
type MinProperties int

// +controllertools:marker:generateHelp:category="CRD validation"
// Enum specifies that this (scalar) field is restricted to the *exact* values specified here.
type Enum []interface{}

// +controllertools:marker:generateHelp:category="CRD validation"
// Format specifies additional "complex" formatting for this field.
//
// For example, a date-time field would be marked as "type: string" and
// "format: date-time".
type Format string

// +controllertools:marker:generateHelp:category="CRD validation"
// Type overrides the type for this field (which defaults to the equivalent of the Go type).
//
// This generally must be paired with custom serialization. For example, the
// metav1.Time field would be marked as "type: string" and "format: date-time".
type Type string

// +controllertools:marker:generateHelp:category="CRD validation"
// Nullable marks this field as allowing the "null" value.
//
// This is often not necessary, but may be helpful with custom serialization.
type Nullable struct{}

// +controllertools:marker:generateHelp:category="CRD validation"
// Default sets the default value for this field.
//
// A default value will be accepted as any value valid for the
// field. Formatting for common types include: boolean: `true`, string:
// `Cluster`, numerical: `1.24`, array: `{1,2}`, object: `{policy:
// "delete"}`. Defaults should be defined in pruned form, and only best-effort
// validation will be performed. Full validation of a default requires
// submission of the containing CRD to an apiserver.
type Default struct {
	Value interface{}
}

// +controllertools:marker:generateHelp:category="CRD processing"
// PreserveUnknownFields stops the apiserver from pruning fields which are not specified.
//
// By default the apiserver drops unknown fields from the request payload
// during the decoding step. This marker stops the API server from doing so.
// It affects fields recursively, but switches back to normal pruning behaviour
// if nested properties or additionalProperties are specified in the schema.
// This can either be true or undefined. False
// is forbidden.
//
// NB: The kubebuilder:validation:XPreserveUnknownFields variant is deprecated
// in favor of the kubebuilder:pruning:PreserveUnknownFields variant. They function
// identically.
type XPreserveUnknownFields struct{}

// +controllertools:marker:generateHelp:category="CRD validation"
// EmbeddedResource marks a field as an embedded resource with apiVersion, kind and metadata fields.
//
// An embedded resource is a value that has apiVersion, kind and metadata fields.
// They are validated implicitly according to the semantics of the currently
// running apiserver. It is not necessary to add any additional schema for these
// fields, yet it is possible. This can be combined with PreserveUnknownFields.
type XEmbeddedResource struct{}

// +controllertools:marker:generateHelp:category="CRD validation"
// Schemaless marks a field as being a schemaless object.
//
// Schemaless objects are not introspected, so you must provide
// any type and validation information yourself. One use for this
// tag is for embedding fields that hold JSONSchema typed objects.
// Because this field disables all type checking, it is recommended
// to be used only as a last resort.
type Schemaless struct{}

func (m Maximum) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "integer" {
		return fmt.Errorf("must apply maximum to an integer")
	}
	val := float64(m)
	schema.Maximum = &val
	return nil
}
func (m Minimum) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "integer" {
		return fmt.Errorf("must apply minimum to an integer")
	}
	val := float64(m)
	schema.Minimum = &val
	return nil
}
func (m ExclusiveMaximum) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "integer" {
		return fmt.Errorf("must apply exclusivemaximum to an integer")
	}
	schema.ExclusiveMaximum = bool(m)
	return nil
}
func (m ExclusiveMinimum) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "integer" {
		return fmt.Errorf("must apply exclusiveminimum to an integer")
	}
	schema.ExclusiveMinimum = bool(m)
	return nil
}
func (m MultipleOf) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "integer" {
		return fmt.Errorf("must apply multipleof to an integer")
	}
	val := float64(m)
	schema.MultipleOf = &val
	return nil
}

func (m MaxLength) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "string" {
		return fmt.Errorf("must apply maxlength to a string")
	}
	val := int64(m)
	schema.MaxLength = &val
	return nil
}
func (m MinLength) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "string" {
		return fmt.Errorf("must apply minlength to a string")
	}
	val := int64(m)
	schema.MinLength = &val
	return nil
}
func (m Pattern) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "string" {
		return fmt.Errorf("must apply pattern to a string")
	}
	schema.Pattern = string(m)
	return nil
}

func (m MaxItems) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "array" {
		return fmt.Errorf("must apply maxitem to an array")
	}
	val := int64(m)
	schema.MaxItems = &val
	return nil
}
func (m MinItems) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "array" {
		return fmt.Errorf("must apply minitems to an array")
	}
	val := int64(m)
	schema.MinItems = &val
	return nil
}
func (m UniqueItems) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "array" {
		return fmt.Errorf("must apply uniqueitems to an array")
	}
	schema.UniqueItems = bool(m)
	return nil
}

func (m MinProperties) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "object" {
		return fmt.Errorf("must apply minproperties to an object")
	}
	val := int64(m)
	schema.MinProperties = &val
	return nil
}

func (m MaxProperties) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	if schema.Type != "object" {
		return fmt.Errorf("must apply maxproperties to an object")
	}
	val := int64(m)
	schema.MaxProperties = &val
	return nil
}

func (m Enum) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	// TODO(directxman12): this is a bit hacky -- we should
	// probably support AnyType better + using the schema structure
	vals := make([]apiext.JSON, len(m))
	for i, val := range m {
		// TODO(directxman12): check actual type with schema type?
		// if we're expecting a string, marshal the string properly...
		// NB(directxman12): we use json.Marshal to ensure we handle JSON escaping properly
		valMarshalled, err := json.Marshal(val)
		if err != nil {
			return err
		}
		vals[i] = apiext.JSON{Raw: valMarshalled}
	}
	schema.Enum = vals
	return nil
}
func (m Format) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	schema.Format = string(m)
	return nil
}

// NB(directxman12): we "typecheck" on target schema properties here,
// which means the "Type" marker *must* be applied first.
// TODO(directxman12): find a less hacky way to do this
// (we could preserve ordering of markers, but that feels bad in its own right).

func (m Type) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	schema.Type = string(m)
	return nil
}

func (m Type) ApplyFirst() {}

func (m Nullable) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	schema.Nullable = true
	return nil
}

// Defaults are only valid for CRDs created with the v1 API.
func (m Default) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	marshalledDefault, err := json.Marshal(m.Value)
	if err != nil {
		return err
	}
	schema.Default = &apiext.JSON{Raw: marshalledDefault}
	return nil
}

func (m XPreserveUnknownFields) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	defTrue := true
	schema.XPreserveUnknownFields = &defTrue
	return nil
}

func (m XEmbeddedResource) ApplyToSchema(schema *apiext.JSONSchemaProps) error {
	schema.XEmbeddedResource = true
	return nil
}
457
vendor/sigs.k8s.io/controller-tools/pkg/crd/markers/zz_generated.markerhelp.go
generated
vendored
Normal file
@@ -0,0 +1,457 @@
// +build !ignore_autogenerated

/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by helpgen. DO NOT EDIT.

package markers

import (
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func (Default) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "sets the default value for this field. ",
			Details: "A default value will be accepted as any value valid for the field. Formatting for common types include: boolean: `true`, string: `Cluster`, numerical: `1.24`, array: `{1,2}`, object: `{policy: \"delete\"}`. Defaults should be defined in pruned form, and only best-effort validation will be performed. Full validation of a default requires submission of the containing CRD to an apiserver.",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"Value": {
				Summary: "",
				Details: "",
			},
		},
	}
}

func (DeprecatedVersion) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD",
		DetailedHelp: markers.DetailedHelp{
			Summary: "marks this version as deprecated.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"Warning": {
				Summary: "message to be shown on the deprecated version",
				Details: "",
			},
		},
	}
}

func (Enum) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies that this (scalar) field is restricted to the *exact* values specified here.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (ExclusiveMaximum) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "indicates that the maximum is \"up to\" but not including that value.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (ExclusiveMinimum) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "indicates that the minimum is \"up to\" but not including that value.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (Format) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies additional \"complex\" formatting for this field. ",
			Details: "For example, a date-time field would be marked as \"type: string\" and \"format: date-time\".",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (ListMapKey) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD processing",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the keys to map listTypes. ",
			Details: "It indicates the index of a map list. They can be repeated if multiple keys must be used. It can only be used when ListType is set to map, and the keys should be scalar types.",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (ListType) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD processing",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the type of data-structure that the list represents (map, set, atomic). ",
			Details: "Possible data-structure types of a list are: \n - \"map\": it needs to have a key field, which will be used to build an associative list. A typical example is the pod container list, which is indexed by the container name. \n - \"set\": Fields need to be \"scalar\", and there can be only one occurrence of each. \n - \"atomic\": All the fields in the list are treated as a single value, and are typically manipulated together by the same actor.",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MapType) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD processing",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the level of atomicity of the map; i.e. whether each item in the map is independent of the others, or all fields are treated as a single unit. ",
			Details: "Possible values: \n - \"granular\": items in the map are independent of each other, and can be manipulated by different actors. This is the default behavior. \n - \"atomic\": all fields are treated as one unit. Any changes have to replace the entire map.",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MaxItems) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the maximum length for this list.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MaxLength) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the maximum length for this string.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MaxProperties) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "restricts the number of keys in an object",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (Maximum) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the maximum numeric value that this field can have.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MinItems) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the minimum length for this list.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MinLength) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the minimum length for this string.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MinProperties) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "restricts the number of keys in an object",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (Minimum) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies the minimum numeric value that this field can have. Negative integers are supported.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (MultipleOf) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies that this field must have a numeric value that's a multiple of this one.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (Nullable) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "marks this field as allowing the \"null\" value. ",
			Details: "This is often not necessary, but may be helpful with custom serialization.",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (Pattern) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD validation",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies that this string must match the given regular expression.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (PrintColumn) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD",
		DetailedHelp: markers.DetailedHelp{
			Summary: "adds a column to \"kubectl get\" output for this CRD.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"Name": {
				Summary: "specifies the name of the column.",
				Details: "",
			},
			"Type": {
				Summary: "indicates the type of the column. ",
				Details: "It may be any OpenAPI data type listed at https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types.",
			},
			"JSONPath": {
				Summary: "specifies the jsonpath expression used to extract the value of the column.",
				Details: "",
			},
			"Description": {
				Summary: "specifies the help/description for this column.",
				Details: "",
			},
			"Format": {
				Summary: "specifies the format of the column. ",
				Details: "It may be any OpenAPI data format corresponding to the type, listed at https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types.",
			},
			"Priority": {
				Summary: "indicates how important it is that this column be displayed. ",
				Details: "Lower priority (*higher* numbered) columns will be hidden if the terminal width is too small.",
			},
		},
	}
}

func (Resource) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "CRD",
		DetailedHelp: markers.DetailedHelp{
			Summary: "configures naming and scope for a CRD.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"Path": {
				Summary: "specifies the plural \"resource\" for this CRD. ",
				Details: "It generally corresponds to a plural, lower-cased version of the Kind. See https://book.kubebuilder.io/cronjob-tutorial/gvks.html.",
			},
			"ShortName": {
				Summary: "specifies aliases for this CRD. ",
				Details: "Short names are often used when people have to work with your resource over and over again. For instance, \"rs\" for \"replicaset\" or \"crd\" for customresourcedefinition.
|
||||
},
|
||||
"Categories": {
|
||||
Summary: "specifies which group aliases this resource is part of. ",
|
||||
Details: "Group aliases are used to work with groups of resources at once. The most common one is \"all\" which covers about a third of the base resources in Kubernetes, and is generally used for \"user-facing\" resources.",
|
||||
},
|
||||
"Singular": {
|
||||
Summary: "overrides the singular form of your resource. ",
|
||||
Details: "The singular form is otherwise defaulted off the plural (path).",
|
||||
},
|
||||
"Scope": {
|
||||
Summary: "overrides the scope of the CRD (Cluster vs Namespaced). ",
|
||||
Details: "Scope defaults to \"Namespaced\". Cluster-scoped (\"Cluster\") resources don't exist in namespaces.",
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (Schemaless) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD validation",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "marks a field as being a schemaless object. ",
|
||||
Details: "Schemaless objects are not introspected, so you must provide any type and validation information yourself. One use for this tag is for embedding fields that hold JSONSchema typed objects. Because this field disables all type checking, it is recommended to be used only as a last resort.",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (SkipVersion) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "removes the particular version of the CRD from the CRDs spec. ",
|
||||
Details: "This is useful if you need to skip generating and listing version entries for 'internal' resource versions, which typically exist if using the Kubernetes upstream conversion-gen tool.",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (StorageVersion) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "marks this version as the \"storage version\" for the CRD for conversion. ",
|
||||
Details: "When conversion is enabled for a CRD (i.e. it's not a trivial-versions/single-version CRD), one version is set as the \"storage version\" to be stored in etcd. Attempting to store any other version will result in conversion to the storage version via a conversion webhook.",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (StructType) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD processing",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "specifies the level of atomicity of the struct; i.e. whether each field in the struct is independent of the others, or all fields are treated as a single unit. ",
|
||||
Details: "Possible values: \n - \"granular\": fields in the struct are independent of each other, and can be manipulated by different actors. This is the default behavior. \n - \"atomic\": all fields are treated as one unit. Any changes have to replace the entire struct.",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (SubresourceScale) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "enables the \"/scale\" subresource on a CRD.",
|
||||
Details: "",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{
|
||||
"SpecPath": {
|
||||
Summary: "specifies the jsonpath to the replicas field for the scale's spec.",
|
||||
Details: "",
|
||||
},
|
||||
"StatusPath": {
|
||||
Summary: "specifies the jsonpath to the replicas field for the scale's status.",
|
||||
Details: "",
|
||||
},
|
||||
"SelectorPath": {
|
||||
Summary: "specifies the jsonpath to the pod label selector field for the scale's status. ",
|
||||
Details: "The selector field must be the *string* form (serialized form) of a selector. Setting a pod label selector is necessary for your type to work with the HorizontalPodAutoscaler.",
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func (SubresourceStatus) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "enables the \"/status\" subresource on a CRD.",
|
||||
Details: "",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (Type) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD validation",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "overrides the type for this field (which defaults to the equivalent of the Go type). ",
|
||||
Details: "This generally must be paired with custom serialization. For example, the metav1.Time field would be marked as \"type: string\" and \"format: date-time\".",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (UniqueItems) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD validation",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "specifies that all items in this list must be unique.",
|
||||
Details: "",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (UnservedVersion) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "does not serve this version. ",
|
||||
Details: "This is useful if you need to drop support for a version in favor of a newer version.",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (XEmbeddedResource) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD validation",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "EmbeddedResource marks a fields as an embedded resource with apiVersion, kind and metadata fields. ",
|
||||
Details: "An embedded resource is a value that has apiVersion, kind and metadata fields. They are validated implicitly according to the semantics of the currently running apiserver. It is not necessary to add any additional schema for these field, yet it is possible. This can be combined with PreserveUnknownFields.",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
||||
|
||||
func (XPreserveUnknownFields) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "CRD processing",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "PreserveUnknownFields stops the apiserver from pruning fields which are not specified. ",
|
||||
Details: "By default the apiserver drops unknown fields from the request payload during the decoding step. This marker stops the API server from doing so. It affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden. \n NB: The kubebuilder:validation:XPreserveUnknownFields variant is deprecated in favor of the kubebuilder:pruning:PreserveUnknownFields variant. They function identically.",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{},
|
||||
}
|
||||
}
|
240
vendor/sigs.k8s.io/controller-tools/pkg/crd/parser.go
generated
vendored
Normal file
@@ -0,0 +1,240 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package crd

import (
	"fmt"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"

	"sigs.k8s.io/controller-tools/pkg/loader"
	"sigs.k8s.io/controller-tools/pkg/markers"
)

// TypeIdent represents some type in a Package.
type TypeIdent struct {
	Package *loader.Package
	Name    string
}

func (t TypeIdent) String() string {
	return fmt.Sprintf("%q.%s", t.Package.ID, t.Name)
}

// PackageOverride overrides the loading of some package
// (potentially setting custom schemata, etc). It must
// call AddPackage if it wants to continue with the default
// loading behavior.
type PackageOverride func(p *Parser, pkg *loader.Package)

// Parser knows how to parse out CRD information and generate
// OpenAPI schemata from some collection of types and markers.
// Most methods on Parser cache their results automatically,
// and thus may be called any number of times.
type Parser struct {
	Collector *markers.Collector

	// Types contains the known TypeInfo for this parser.
	Types map[TypeIdent]*markers.TypeInfo
	// Schemata contains the known OpenAPI JSONSchemata for this parser.
	Schemata map[TypeIdent]apiext.JSONSchemaProps
	// GroupVersions contains the known group-versions of each package in this parser.
	GroupVersions map[*loader.Package]schema.GroupVersion
	// CustomResourceDefinitions contains the known CustomResourceDefinitions for types in this parser.
	CustomResourceDefinitions map[schema.GroupKind]apiext.CustomResourceDefinition
	// FlattenedSchemata contains fully flattened schemata for use in building
	// CustomResourceDefinition validation. Each schema has been flattened by the flattener,
	// and then embedded fields have been flattened with FlattenEmbedded.
	FlattenedSchemata map[TypeIdent]apiext.JSONSchemaProps

	// PackageOverrides indicates that the loading of any package with
	// the given path should be handled by the given overrider.
	PackageOverrides map[string]PackageOverride

	// Checker stores persistent partial type-checking/reference-traversal information.
	Checker *loader.TypeChecker
	// packages marks packages as loaded, to avoid re-loading them.
	packages map[*loader.Package]struct{}

	flattener *Flattener

	// AllowDangerousTypes controls the handling of non-recommended types such as float. If
	// false (the default), these types are not supported.
	// There is a continuum here:
	// 1. Types that are always supported.
	// 2. Types that are allowed by default, but not recommended (warning emitted when they are encountered as per PR #443).
	//    Possibly they are allowed by default for historical reasons and may even be "on their way out" at some point in the future.
	// 3. Types that are not allowed by default, not recommended, but there are some legitimate reasons to need them in certain corner cases.
	//    Possibly these types should also emit a warning as per PR #443 even when they are "switched on" (an integration point between
	//    this feature and #443 if desired). This is the category that this flag deals with.
	// 4. Types that are not allowed and will not be allowed, possibly because it just "doesn't make sense" or possibly
	//    because the implementation is too difficult/clunky to promote them to category 3.
	// TODO: Should we have a more formal mechanism for putting "type patterns" in each of the above categories?
	AllowDangerousTypes bool

	// GenerateEmbeddedObjectMeta specifies if any embedded ObjectMeta should be generated
	GenerateEmbeddedObjectMeta bool
}

func (p *Parser) init() {
	if p.packages == nil {
		p.packages = make(map[*loader.Package]struct{})
	}
	if p.flattener == nil {
		p.flattener = &Flattener{
			Parser: p,
		}
	}
	if p.Schemata == nil {
		p.Schemata = make(map[TypeIdent]apiext.JSONSchemaProps)
	}
	if p.Types == nil {
		p.Types = make(map[TypeIdent]*markers.TypeInfo)
	}
	if p.PackageOverrides == nil {
		p.PackageOverrides = make(map[string]PackageOverride)
	}
	if p.GroupVersions == nil {
		p.GroupVersions = make(map[*loader.Package]schema.GroupVersion)
	}
	if p.CustomResourceDefinitions == nil {
		p.CustomResourceDefinitions = make(map[schema.GroupKind]apiext.CustomResourceDefinition)
	}
	if p.FlattenedSchemata == nil {
		p.FlattenedSchemata = make(map[TypeIdent]apiext.JSONSchemaProps)
	}
}

// indexTypes loads all types in the package into Types.
func (p *Parser) indexTypes(pkg *loader.Package) {
	// autodetect
	pkgMarkers, err := markers.PackageMarkers(p.Collector, pkg)
	if err != nil {
		pkg.AddError(err)
	} else {
		if skipPkg := pkgMarkers.Get("kubebuilder:skip"); skipPkg != nil {
			return
		}
		if nameVal := pkgMarkers.Get("groupName"); nameVal != nil {
			versionVal := pkg.Name // a reasonable guess
			if versionMarker := pkgMarkers.Get("versionName"); versionMarker != nil {
				versionVal = versionMarker.(string)
			}

			p.GroupVersions[pkg] = schema.GroupVersion{
				Version: versionVal,
				Group:   nameVal.(string),
			}
		}
	}

	if err := markers.EachType(p.Collector, pkg, func(info *markers.TypeInfo) {
		ident := TypeIdent{
			Package: pkg,
			Name:    info.Name,
		}

		p.Types[ident] = info
	}); err != nil {
		pkg.AddError(err)
	}
}

// LookupType fetches type info from Types.
func (p *Parser) LookupType(pkg *loader.Package, name string) *markers.TypeInfo {
	return p.Types[TypeIdent{Package: pkg, Name: name}]
}

// NeedSchemaFor indicates that a schema should be generated for the given type.
func (p *Parser) NeedSchemaFor(typ TypeIdent) {
	p.init()

	p.NeedPackage(typ.Package)
	if _, knownSchema := p.Schemata[typ]; knownSchema {
		return
	}

	info, knownInfo := p.Types[typ]
	if !knownInfo {
		typ.Package.AddError(fmt.Errorf("unknown type %s", typ))
		return
	}

	// avoid tripping recursive schemata, like ManagedFields, by adding an empty WIP schema
	p.Schemata[typ] = apiext.JSONSchemaProps{}

	schemaCtx := newSchemaContext(typ.Package, p, p.AllowDangerousTypes)
	ctxForInfo := schemaCtx.ForInfo(info)

	pkgMarkers, err := markers.PackageMarkers(p.Collector, typ.Package)
	if err != nil {
		typ.Package.AddError(err)
	}
	ctxForInfo.PackageMarkers = pkgMarkers

	schema := infoToSchema(ctxForInfo)

	p.Schemata[typ] = *schema
}

func (p *Parser) NeedFlattenedSchemaFor(typ TypeIdent) {
	p.init()

	if _, knownSchema := p.FlattenedSchemata[typ]; knownSchema {
		return
	}

	p.NeedSchemaFor(typ)
	partialFlattened := p.flattener.FlattenType(typ)
	fullyFlattened := FlattenEmbedded(partialFlattened, typ.Package)

	p.FlattenedSchemata[typ] = *fullyFlattened
}

// NeedCRDFor lives off in spec.go

// AddPackage indicates that types and type-checking information is needed
// for the given package, *ignoring* overrides.
// Generally, consumers should call NeedPackage, while PackageOverrides should
// call AddPackage to continue with the normal loading procedure.
func (p *Parser) AddPackage(pkg *loader.Package) {
	p.init()
	if _, checked := p.packages[pkg]; checked {
		return
	}
	p.indexTypes(pkg)
	p.Checker.Check(pkg)
	p.packages[pkg] = struct{}{}
}

// NeedPackage indicates that types and type-checking information
// is needed for the given package.
func (p *Parser) NeedPackage(pkg *loader.Package) {
	p.init()
	if _, checked := p.packages[pkg]; checked {
		return
	}
	// overrides are going to be written without vendor. This is why we index by the actual
	// object when we can.
	if override, overridden := p.PackageOverrides[loader.NonVendorPath(pkg.PkgPath)]; overridden {
		override(p, pkg)
		p.packages[pkg] = struct{}{}
		return
	}
	p.AddPackage(pkg)
}
433
vendor/sigs.k8s.io/controller-tools/pkg/crd/schema.go
generated
vendored
Normal file
@@ -0,0 +1,433 @@
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package crd
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"go/ast"
|
||||
"go/types"
|
||||
"strings"
|
||||
|
||||
apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
|
||||
crdmarkers "sigs.k8s.io/controller-tools/pkg/crd/markers"
|
||||
|
||||
"sigs.k8s.io/controller-tools/pkg/loader"
|
||||
"sigs.k8s.io/controller-tools/pkg/markers"
|
||||
)
|
||||
|
||||
// Schema flattening is done in a recursive mapping method.
|
||||
// Start reading at infoToSchema.
|
||||
|
||||
const (
|
||||
// defPrefix is the prefix used to link to definitions in the OpenAPI schema.
|
||||
defPrefix = "#/definitions/"
|
||||
)
|
||||
|
||||
var (
|
||||
// byteType is the types.Type for byte (see the types documention
|
||||
// for why we need to look this up in the Universe), saved
|
||||
// for quick comparison.
|
||||
byteType = types.Universe.Lookup("byte").Type()
|
||||
)
|
||||
|
||||
// SchemaMarker is any marker that needs to modify the schema of the underlying type or field.
|
||||
type SchemaMarker interface {
|
||||
// ApplyToSchema is called after the rest of the schema for a given type
|
||||
// or field is generated, to modify the schema appropriately.
|
||||
ApplyToSchema(*apiext.JSONSchemaProps) error
|
||||
}
|
||||
|
||||
// applyFirstMarker is applied before any other markers. It's a bit of a hack.
|
||||
type applyFirstMarker interface {
|
||||
ApplyFirst()
|
||||
}
|
||||
|
||||
// schemaRequester knows how to marker that another schema (e.g. via an external reference) is necessary.
|
||||
type schemaRequester interface {
|
||||
NeedSchemaFor(typ TypeIdent)
|
||||
}
|
||||
|
||||
// schemaContext stores and provides information across a hierarchy of schema generation.
|
||||
type schemaContext struct {
|
||||
pkg *loader.Package
|
||||
info *markers.TypeInfo
|
||||
|
||||
schemaRequester schemaRequester
|
||||
PackageMarkers markers.MarkerValues
|
||||
|
||||
allowDangerousTypes bool
|
||||
}
|
||||
|
||||
// newSchemaContext constructs a new schemaContext for the given package and schema requester.
|
||||
// It must have type info added before use via ForInfo.
|
||||
func newSchemaContext(pkg *loader.Package, req schemaRequester, allowDangerousTypes bool) *schemaContext {
|
||||
pkg.NeedTypesInfo()
|
||||
return &schemaContext{
|
||||
pkg: pkg,
|
||||
schemaRequester: req,
|
||||
allowDangerousTypes: allowDangerousTypes,
|
||||
}
|
||||
}
|
||||
|
||||
// ForInfo produces a new schemaContext with containing the same information
|
||||
// as this one, except with the given type information.
|
||||
func (c *schemaContext) ForInfo(info *markers.TypeInfo) *schemaContext {
|
||||
return &schemaContext{
|
||||
pkg: c.pkg,
|
||||
info: info,
|
||||
schemaRequester: c.schemaRequester,
|
||||
allowDangerousTypes: c.allowDangerousTypes,
|
||||
}
|
||||
}
|
||||
|
||||
// requestSchema asks for the schema for a type in the package with the
|
||||
// given import path.
|
||||
func (c *schemaContext) requestSchema(pkgPath, typeName string) {
|
||||
pkg := c.pkg
|
||||
if pkgPath != "" {
|
||||
pkg = c.pkg.Imports()[pkgPath]
|
||||
}
|
||||
c.schemaRequester.NeedSchemaFor(TypeIdent{
|
||||
Package: pkg,
|
||||
Name: typeName,
|
||||
})
|
||||
}
|
||||
|
||||
// infoToSchema creates a schema for the type in the given set of type information.
|
||||
func infoToSchema(ctx *schemaContext) *apiext.JSONSchemaProps {
|
||||
return typeToSchema(ctx, ctx.info.RawSpec.Type)
|
||||
}
|
||||
|
||||
// applyMarkers applies schema markers to the given schema, respecting "apply first" markers.
|
||||
func applyMarkers(ctx *schemaContext, markerSet markers.MarkerValues, props *apiext.JSONSchemaProps, node ast.Node) {
|
||||
// apply "apply first" markers first...
|
||||
for _, markerValues := range markerSet {
|
||||
for _, markerValue := range markerValues {
|
||||
if _, isApplyFirst := markerValue.(applyFirstMarker); !isApplyFirst {
|
||||
continue
|
||||
}
|
||||
|
||||
schemaMarker, isSchemaMarker := markerValue.(SchemaMarker)
|
||||
if !isSchemaMarker {
|
||||
continue
|
||||
}
|
||||
|
||||
if err := schemaMarker.ApplyToSchema(props); err != nil {
|
||||
ctx.pkg.AddError(loader.ErrFromNode(err /* an okay guess */, node))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// ...then the rest of the markers
|
||||
for _, markerValues := range markerSet {
|
||||
for _, markerValue := range markerValues {
|
||||
if _, isApplyFirst := markerValue.(applyFirstMarker); isApplyFirst {
|
||||
// skip apply-first markers, which were already applied
|
||||
continue
|
||||
}
|
||||
|
||||
schemaMarker, isSchemaMarker := markerValue.(SchemaMarker)
|
||||
if !isSchemaMarker {
|
||||
continue
|
||||
}
|
||||
if err := schemaMarker.ApplyToSchema(props); err != nil {
|
||||
ctx.pkg.AddError(loader.ErrFromNode(err /* an okay guess */, node))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// typeToSchema creates a schema for the given AST type.
|
||||
func typeToSchema(ctx *schemaContext, rawType ast.Expr) *apiext.JSONSchemaProps {
|
||||
var props *apiext.JSONSchemaProps
|
||||
switch expr := rawType.(type) {
|
||||
case *ast.Ident:
|
||||
props = localNamedToSchema(ctx, expr)
|
||||
case *ast.SelectorExpr:
|
||||
props = namedToSchema(ctx, expr)
|
||||
case *ast.ArrayType:
|
||||
props = arrayToSchema(ctx, expr)
|
||||
case *ast.MapType:
|
||||
props = mapToSchema(ctx, expr)
|
||||
case *ast.StarExpr:
|
||||
props = typeToSchema(ctx, expr.X)
|
||||
case *ast.StructType:
|
||||
props = structToSchema(ctx, expr)
|
||||
default:
|
||||
ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("unsupported AST kind %T", expr), rawType))
|
||||
// NB(directxman12): we explicitly don't handle interfaces
|
||||
return &apiext.JSONSchemaProps{}
|
||||
}
|
||||
|
||||
props.Description = ctx.info.Doc
|
||||
|
||||
applyMarkers(ctx, ctx.info.Markers, props, rawType)
|
||||
|
||||
return props
|
||||
}
|
||||
|
||||
// qualifiedName constructs a JSONSchema-safe qualified name for a type
|
||||
// (`<typeName>` or `<safePkgPath>~0<typeName>`, where `<safePkgPath>`
|
||||
// is the package path with `/` replaced by `~1`, according to JSONPointer
|
||||
// escapes).
|
||||
func qualifiedName(pkgName, typeName string) string {
|
||||
if pkgName != "" {
|
||||
return strings.Replace(pkgName, "/", "~1", -1) + "~0" + typeName
|
||||
}
|
||||
return typeName
|
||||
}
|
||||
|
||||
// TypeRefLink creates a definition link for the given type and package.
|
||||
func TypeRefLink(pkgName, typeName string) string {
|
||||
return defPrefix + qualifiedName(pkgName, typeName)
|
||||
}
|
||||
|
||||
// localNamedToSchema creates a schema (ref) for a *potentially* local type reference
|
||||
// (could be external from a dot-import).
|
||||
func localNamedToSchema(ctx *schemaContext, ident *ast.Ident) *apiext.JSONSchemaProps {
|
||||
typeInfo := ctx.pkg.TypesInfo.TypeOf(ident)
|
||||
if typeInfo == types.Typ[types.Invalid] {
|
||||
ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("unknown type %s", ident.Name), ident))
|
||||
return &apiext.JSONSchemaProps{}
|
||||
}
|
||||
if basicInfo, isBasic := typeInfo.(*types.Basic); isBasic {
|
||||
typ, fmt, err := builtinToType(basicInfo, ctx.allowDangerousTypes)
|
||||
if err != nil {
|
||||
ctx.pkg.AddError(loader.ErrFromNode(err, ident))
|
||||
}
|
||||
return &apiext.JSONSchemaProps{
|
||||
Type: typ,
|
||||
Format: fmt,
|
||||
}
|
||||
}
|
||||
// NB(directxman12): if there are dot imports, this might be an external reference,
|
||||
// so use typechecking info to get the actual object
|
||||
typeNameInfo := typeInfo.(*types.Named).Obj()
|
||||
pkg := typeNameInfo.Pkg()
|
||||
pkgPath := loader.NonVendorPath(pkg.Path())
|
||||
if pkg == ctx.pkg.Types {
|
||||
pkgPath = ""
|
||||
}
|
||||
ctx.requestSchema(pkgPath, typeNameInfo.Name())
|
||||
link := TypeRefLink(pkgPath, typeNameInfo.Name())
|
||||
return &apiext.JSONSchemaProps{
|
||||
Ref: &link,
|
||||
}
|
||||
}
|
||||
|
||||
// namedSchema creates a schema (ref) for an explicitly external type reference.
|
||||
func namedToSchema(ctx *schemaContext, named *ast.SelectorExpr) *apiext.JSONSchemaProps {
|
||||
typeInfoRaw := ctx.pkg.TypesInfo.TypeOf(named)
|
||||
if typeInfoRaw == types.Typ[types.Invalid] {
|
||||
ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("unknown type %v.%s", named.X, named.Sel.Name), named))
|
||||
return &apiext.JSONSchemaProps{}
|
||||
}
|
||||
typeInfo := typeInfoRaw.(*types.Named)
|
||||
typeNameInfo := typeInfo.Obj()
|
||||
nonVendorPath := loader.NonVendorPath(typeNameInfo.Pkg().Path())
|
||||
ctx.requestSchema(nonVendorPath, typeNameInfo.Name())
|
||||
link := TypeRefLink(nonVendorPath, typeNameInfo.Name())
|
||||
return &apiext.JSONSchemaProps{
|
||||
Ref: &link,
|
||||
}
|
||||
// NB(directxman12): we special-case things like resource.Quantity during the "collapse" phase.
|
||||
}
|
||||
|
||||
// arrayToSchema creates a schema for the items of the given array, dealing appropriately
|
||||
// with the special `[]byte` type (according to OpenAPI standards).
|
||||
func arrayToSchema(ctx *schemaContext, array *ast.ArrayType) *apiext.JSONSchemaProps {
|
||||
eltType := ctx.pkg.TypesInfo.TypeOf(array.Elt)
|
||||
if eltType == byteType && array.Len == nil {
|
||||
// byte slices are represented as base64-encoded strings
|
||||
// (the format is defined in OpenAPI v3, but not JSON Schema)
|
||||
return &apiext.JSONSchemaProps{
|
||||
Type: "string",
|
||||
Format: "byte",
|
||||
}
|
||||
}
|
||||
// TODO(directxman12): backwards-compat would require access to markers from base info
|
||||
items := typeToSchema(ctx.ForInfo(&markers.TypeInfo{}), array.Elt)
|
||||
|
||||
return &apiext.JSONSchemaProps{
|
||||
Type: "array",
|
||||
Items: &apiext.JSONSchemaPropsOrArray{Schema: items},
|
||||
}
|
||||
}
|
||||
|
||||
// mapToSchema creates a schema for items of the given map. Key types must eventually resolve
// to string (other types aren't allowed by JSON, and thus the kubernetes API standards).
func mapToSchema(ctx *schemaContext, mapType *ast.MapType) *apiext.JSONSchemaProps {
	keyInfo := ctx.pkg.TypesInfo.TypeOf(mapType.Key)
	// check that we've got a type that actually corresponds to a string
	for keyInfo != nil {
		switch typedKey := keyInfo.(type) {
		case *types.Basic:
			if typedKey.Info()&types.IsString == 0 {
				ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("map keys must be strings, not %s", keyInfo.String()), mapType.Key))
				return &apiext.JSONSchemaProps{}
			}
			keyInfo = nil // stop iterating
		case *types.Named:
			keyInfo = typedKey.Underlying()
		default:
			ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("map keys must be strings, not %s", keyInfo.String()), mapType.Key))
			return &apiext.JSONSchemaProps{}
		}
	}

	// TODO(directxman12): backwards-compat would require access to markers from base info
	var valSchema *apiext.JSONSchemaProps
	switch val := mapType.Value.(type) {
	case *ast.Ident:
		valSchema = localNamedToSchema(ctx.ForInfo(&markers.TypeInfo{}), val)
	case *ast.SelectorExpr:
		valSchema = namedToSchema(ctx.ForInfo(&markers.TypeInfo{}), val)
	case *ast.ArrayType:
		valSchema = arrayToSchema(ctx.ForInfo(&markers.TypeInfo{}), val)
		if valSchema.Type == "array" && valSchema.Items.Schema.Type != "string" {
			ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("map values must be a named type, not %T", mapType.Value), mapType.Value))
			return &apiext.JSONSchemaProps{}
		}
	case *ast.StarExpr:
		valSchema = typeToSchema(ctx.ForInfo(&markers.TypeInfo{}), val)
	default:
		ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("map values must be a named type, not %T", mapType.Value), mapType.Value))
		return &apiext.JSONSchemaProps{}
	}

	return &apiext.JSONSchemaProps{
		Type: "object",
		AdditionalProperties: &apiext.JSONSchemaPropsOrBool{
			Schema: valSchema,
			Allows: true, /* set automatically by serialization, but useful for testing */
		},
	}
}

// structToSchema creates a schema for the given struct. Embedded fields are placed in AllOf,
// and can be flattened later with a Flattener.
func structToSchema(ctx *schemaContext, structType *ast.StructType) *apiext.JSONSchemaProps {
	props := &apiext.JSONSchemaProps{
		Type:       "object",
		Properties: make(map[string]apiext.JSONSchemaProps),
	}

	if ctx.info.RawSpec.Type != structType {
		ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("encountered non-top-level struct (possibly embedded), those aren't allowed"), structType))
		return props
	}

	for _, field := range ctx.info.Fields {
		jsonTag, hasTag := field.Tag.Lookup("json")
		if !hasTag {
			// if the field doesn't have a JSON tag, it doesn't belong in output (and shouldn't exist in a serialized type)
			ctx.pkg.AddError(loader.ErrFromNode(fmt.Errorf("encountered struct field %q without JSON tag in type %q", field.Name, ctx.info.Name), field.RawField))
			continue
		}
		jsonOpts := strings.Split(jsonTag, ",")
		if len(jsonOpts) == 1 && jsonOpts[0] == "-" {
			// skipped fields have the tag "-" (note that "-," means the field is named "-")
			continue
		}

		inline := false
		omitEmpty := false
		for _, opt := range jsonOpts[1:] {
			switch opt {
			case "inline":
				inline = true
			case "omitempty":
				omitEmpty = true
			}
		}
		fieldName := jsonOpts[0]
		inline = inline || fieldName == "" // anonymous fields are inline fields in YAML/JSON

		// if no default required mode is set, default to required
		defaultMode := "required"
		if ctx.PackageMarkers.Get("kubebuilder:validation:Optional") != nil {
			defaultMode = "optional"
		}

		switch defaultMode {
		// if this package isn't set to optional default...
		case "required":
			// ...everything that's not inline, omitempty, or explicitly optional is required
			if !inline && !omitEmpty && field.Markers.Get("kubebuilder:validation:Optional") == nil && field.Markers.Get("optional") == nil {
				props.Required = append(props.Required, fieldName)
			}

		// if this package isn't set to required default...
		case "optional":
			// ...everything that isn't explicitly required is optional
			if field.Markers.Get("kubebuilder:validation:Required") != nil {
				props.Required = append(props.Required, fieldName)
			}
		}

		var propSchema *apiext.JSONSchemaProps
		if field.Markers.Get(crdmarkers.SchemalessName) != nil {
			propSchema = &apiext.JSONSchemaProps{}
		} else {
			propSchema = typeToSchema(ctx.ForInfo(&markers.TypeInfo{}), field.RawField.Type)
		}
		propSchema.Description = field.Doc

		applyMarkers(ctx, field.Markers, propSchema, field.RawField)

		if inline {
			props.AllOf = append(props.AllOf, *propSchema)
			continue
		}

		props.Properties[fieldName] = *propSchema
	}

	return props
}

// builtinToType converts builtin basic types to their equivalent JSON schema form.
// It *only* handles types allowed by the kubernetes API standards. Floats are not
// allowed unless allowDangerousTypes is true.
func builtinToType(basic *types.Basic, allowDangerousTypes bool) (typ string, format string, err error) {
	// NB(directxman12): formats from OpenAPI v3 are slightly different than those defined
	// in JSONSchema. This'll use the OpenAPI v3 ones, since they're useful for bounding our
	// non-string types.
	basicInfo := basic.Info()
	switch {
	case basicInfo&types.IsBoolean != 0:
		typ = "boolean"
	case basicInfo&types.IsString != 0:
		typ = "string"
	case basicInfo&types.IsInteger != 0:
		typ = "integer"
	case basicInfo&types.IsFloat != 0 && allowDangerousTypes:
		typ = "number"
	default:
		// NB(directxman12): floats are *NOT* allowed in kubernetes APIs
		return "", "", fmt.Errorf("unsupported type %q", basic.String())
	}

	switch basic.Kind() {
	case types.Int32, types.Uint32:
		format = "int32"
	case types.Int64, types.Uint64:
		format = "int64"
	}

	return typ, format, nil
}
131 vendor/sigs.k8s.io/controller-tools/pkg/crd/schema_visitor.go generated vendored Normal file
@@ -0,0 +1,131 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package crd

import (
	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// SchemaVisitor walks the nodes of a schema.
type SchemaVisitor interface {
	// Visit is called for each schema node. If it returns a visitor,
	// the visitor will be called on each direct child node, and then
	// this visitor will be called again with `nil` to indicate that
	// all children have been visited. If a nil visitor is returned,
	// children are not visited.
	//
	// It is *NOT* safe to save references to the given schema.
	// Make deepcopies if you need to keep things around beyond
	// the lifetime of the call.
	Visit(schema *apiext.JSONSchemaProps) SchemaVisitor
}

// EditSchema walks the given schema using the given visitor. Actual
// pointers to each schema node are passed to the visitor, so any changes
// made by the visitor will be reflected to the passed-in schema.
func EditSchema(schema *apiext.JSONSchemaProps, visitor SchemaVisitor) {
	walker := schemaWalker{visitor: visitor}
	walker.walkSchema(schema)
}

// schemaWalker knows how to walk the schema, saving modifications
// made by the given visitor.
type schemaWalker struct {
	visitor SchemaVisitor
}

// walkSchema walks the given schema, saving modifications made by the visitor
// (this is as simple as passing a pointer in most cases, but special care
// needs to be taken to persist with maps). It also visits referenced
// schemata, dealing with circular references appropriately. The returned
// visitor will be used to visit all "children" of the current schema, followed
// by a nil schema with the returned visitor to mark completion. If a nil visitor
// is returned, traversal will not continue into the children of the current schema.
func (w schemaWalker) walkSchema(schema *apiext.JSONSchemaProps) {
	// Walk a potential chain of schema references, keeping track of seen
	// references to avoid circular references
	subVisitor := w.visitor
	seenRefs := map[string]bool{}
	if schema.Ref != nil {
		seenRefs[*schema.Ref] = true
	}
	for {
		subVisitor = subVisitor.Visit(schema)
		if subVisitor == nil {
			return
		}
		// mark completion of the visitor
		defer subVisitor.Visit(nil)

		// Break if schema is not a reference or a cycle is detected
		if schema.Ref == nil || len(*schema.Ref) == 0 || seenRefs[*schema.Ref] {
			break
		}
		seenRefs[*schema.Ref] = true
	}

	// walk sub-schemata
	subWalker := schemaWalker{visitor: subVisitor}
	if schema.Items != nil {
		subWalker.walkPtr(schema.Items.Schema)
		subWalker.walkSlice(schema.Items.JSONSchemas)
	}
	subWalker.walkSlice(schema.AllOf)
	subWalker.walkSlice(schema.OneOf)
	subWalker.walkSlice(schema.AnyOf)
	subWalker.walkPtr(schema.Not)
	subWalker.walkMap(schema.Properties)
	if schema.AdditionalProperties != nil {
		subWalker.walkPtr(schema.AdditionalProperties.Schema)
	}
	subWalker.walkMap(schema.PatternProperties)
	for name, dep := range schema.Dependencies {
		subWalker.walkPtr(dep.Schema)
		schema.Dependencies[name] = dep
	}
	if schema.AdditionalItems != nil {
		subWalker.walkPtr(schema.AdditionalItems.Schema)
	}
	subWalker.walkMap(schema.Definitions)
}

// walkMap walks over values of the given map, saving changes to them.
func (w schemaWalker) walkMap(defs map[string]apiext.JSONSchemaProps) {
	for name, def := range defs {
		// taking a reference to the iteration variable is fine here
		// because we immediately preserve the result back into the map
		//nolint:gosec
		w.walkSchema(&def)
		// make sure the edits actually go through since we can't
		// take a reference to the value in the map
		defs[name] = def
	}
}

// walkSlice walks over items of the given slice.
func (w schemaWalker) walkSlice(defs []apiext.JSONSchemaProps) {
	for i := range defs {
		w.walkSchema(&defs[i])
	}
}

// walkPtr walks over the contents of the given pointer, if it's not nil.
func (w schemaWalker) walkPtr(def *apiext.JSONSchemaProps) {
	if def == nil {
		return
	}
	w.walkSchema(def)
}
174 vendor/sigs.k8s.io/controller-tools/pkg/crd/spec.go generated vendored Normal file
@@ -0,0 +1,174 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package crd

import (
	"fmt"
	"sort"
	"strings"

	"github.com/gobuffalo/flect"

	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"

	"sigs.k8s.io/controller-tools/pkg/loader"
)

// SpecMarker is a marker that knows how to apply itself to a particular
// version in a CRD.
type SpecMarker interface {
	// ApplyToCRD applies this marker to the given CRD, in the given version
	// within that CRD. It's called after everything else in the CRD is populated.
	ApplyToCRD(crd *apiext.CustomResourceDefinitionSpec, version string) error
}

// NeedCRDFor requests the full CRD for the given group-kind. It requires
// that the packages containing the Go structs for that CRD have already
// been loaded with NeedPackage.
func (p *Parser) NeedCRDFor(groupKind schema.GroupKind, maxDescLen *int) {
	p.init()

	if _, exists := p.CustomResourceDefinitions[groupKind]; exists {
		return
	}

	var packages []*loader.Package
	for pkg, gv := range p.GroupVersions {
		if gv.Group != groupKind.Group {
			continue
		}
		packages = append(packages, pkg)
	}

	defaultPlural := strings.ToLower(flect.Pluralize(groupKind.Kind))
	crd := apiext.CustomResourceDefinition{
		TypeMeta: metav1.TypeMeta{
			APIVersion: apiext.SchemeGroupVersion.String(),
			Kind:       "CustomResourceDefinition",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name: defaultPlural + "." + groupKind.Group,
		},
		Spec: apiext.CustomResourceDefinitionSpec{
			Group: groupKind.Group,
			Names: apiext.CustomResourceDefinitionNames{
				Kind:     groupKind.Kind,
				ListKind: groupKind.Kind + "List",
				Plural:   defaultPlural,
				Singular: strings.ToLower(groupKind.Kind),
			},
			Scope: apiext.NamespaceScoped,
		},
	}

	for _, pkg := range packages {
		typeIdent := TypeIdent{Package: pkg, Name: groupKind.Kind}
		typeInfo := p.Types[typeIdent]
		if typeInfo == nil {
			continue
		}
		p.NeedFlattenedSchemaFor(typeIdent)
		fullSchema := p.FlattenedSchemata[typeIdent]
		fullSchema = *fullSchema.DeepCopy() // don't mutate the cache (we might be truncating description, etc)
		if maxDescLen != nil {
			TruncateDescription(&fullSchema, *maxDescLen)
		}
		ver := apiext.CustomResourceDefinitionVersion{
			Name:   p.GroupVersions[pkg].Version,
			Served: true,
			Schema: &apiext.CustomResourceValidation{
				OpenAPIV3Schema: &fullSchema, // fine to take a reference since we deepcopy above
			},
		}
		crd.Spec.Versions = append(crd.Spec.Versions, ver)
	}

	// markers are applied *after* initial generation of objects
	for _, pkg := range packages {
		typeIdent := TypeIdent{Package: pkg, Name: groupKind.Kind}
		typeInfo := p.Types[typeIdent]
		if typeInfo == nil {
			continue
		}
		ver := p.GroupVersions[pkg].Version

		for _, markerVals := range typeInfo.Markers {
			for _, val := range markerVals {
				crdMarker, isCrdMarker := val.(SpecMarker)
				if !isCrdMarker {
					continue
				}
				if err := crdMarker.ApplyToCRD(&crd.Spec, ver); err != nil {
					pkg.AddError(loader.ErrFromNode(err /* an okay guess */, typeInfo.RawSpec))
				}
			}
		}
	}

	// fix the name if the plural was changed (this is the form the name *has* to take, so no harm in changing it).
	crd.Name = crd.Spec.Names.Plural + "." + groupKind.Group

	// nothing to actually write
	if len(crd.Spec.Versions) == 0 {
		return
	}

	// it is necessary to make sure the order of CRD versions in crd.Spec.Versions is stable.
	// Otherwise, the "default" version may point to different CRD versions across different runs.
	sort.Slice(crd.Spec.Versions, func(i, j int) bool { return crd.Spec.Versions[i].Name < crd.Spec.Versions[j].Name })

	// make sure we have *a* storage version
	// (default it if we only have one, otherwise, bail)
	if len(crd.Spec.Versions) == 1 {
		crd.Spec.Versions[0].Storage = true
	}

	hasStorage := false
	for _, ver := range crd.Spec.Versions {
		if ver.Storage {
			hasStorage = true
			break
		}
	}
	if !hasStorage {
		// just add the error to the first relevant package for this CRD,
		// since there's no specific error location
		packages[0].AddError(fmt.Errorf("CRD for %s has no storage version", groupKind))
	}

	served := false
	for _, ver := range crd.Spec.Versions {
		if ver.Served {
			served = true
			break
		}
	}
	if !served {
		// just add the error to the first relevant package for this CRD,
		// since there's no specific error location
		packages[0].AddError(fmt.Errorf("CRD for %s with version(s) %v does not serve any version", groupKind, crd.Spec.Versions))
	}

	// NB(directxman12): CRD's status doesn't have omitempty markers, which means things
	// get serialized as null, which causes the validator to freak out. Manually set
	// these to empty till we get a better solution.
	crd.Status.Conditions = []apiext.CustomResourceDefinitionCondition{}
	crd.Status.StoredVersions = []string{}

	p.CustomResourceDefinitions[groupKind] = crd
}
61 vendor/sigs.k8s.io/controller-tools/pkg/crd/zz_generated.markerhelp.go generated vendored Normal file
@@ -0,0 +1,61 @@
// +build !ignore_autogenerated

/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by helpgen. DO NOT EDIT.

package crd

import (
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func (Generator) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "generates CustomResourceDefinition objects.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"TrivialVersions": {
				Summary: "indicates that we should produce a single-version CRD. ",
				Details: "Single \"trivial-version\" CRDs are compatible with older (pre 1.13) Kubernetes API servers. The storage version's schema will be used as the CRD's schema. \n Only works with the v1beta1 CRD version.",
			},
			"PreserveUnknownFields": {
				Summary: "indicates whether or not we should turn off pruning. ",
				Details: "Left unspecified, it'll default to true when only a v1beta1 CRD is generated (to preserve compatibility with older versions of this tool), or false otherwise. \n It's required to be false for v1 CRDs.",
			},
			"AllowDangerousTypes": {
				Summary: "allows types which are usually omitted from CRD generation because they are not recommended. ",
				Details: "Currently the following additional types are allowed when this is true: float32 float64 \n Left unspecified, the default is false",
			},
			"MaxDescLen": {
				Summary: "specifies the maximum description length for fields in CRD's OpenAPI schema. ",
				Details: "0 indicates drop the description for all fields completely. n indicates limit the description to at most n characters and truncate the description to closest sentence boundary if it exceeds n characters.",
			},
			"CRDVersions": {
				Summary: "specifies the target API versions of the CRD type itself to generate. Defaults to v1. ",
				Details: "The first version listed will be assumed to be the \"default\" version and will not get a version suffix in the output filename. \n You'll need to use \"v1\" to get support for features like defaulting, along with an API server that supports it (Kubernetes 1.16+).",
			},
			"GenerateEmbeddedObjectMeta": {
				Summary: "specifies if any embedded ObjectMeta in the CRD should be generated",
				Details: "",
			},
		},
	}
}
23 vendor/sigs.k8s.io/controller-tools/pkg/deepcopy/doc.go generated vendored Normal file
@@ -0,0 +1,23 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package deepcopy generates DeepCopy, DeepCopyInto, and DeepCopyObject
// implementations for types.
//
// It's ported from k8s.io/code-generator's / k8s.io/gengo's deepcopy-gen,
// but it's scoped specifically to runtime.Object and skips support for
// deepcopying interfaces, which aren't handled in CRDs anyway.
package deepcopy
304 vendor/sigs.k8s.io/controller-tools/pkg/deepcopy/gen.go generated vendored Normal file
@@ -0,0 +1,304 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package deepcopy

import (
	"bytes"
	"fmt"
	"go/ast"
	"go/format"
	"io"
	"sort"
	"strings"

	"sigs.k8s.io/controller-tools/pkg/genall"
	"sigs.k8s.io/controller-tools/pkg/loader"
	"sigs.k8s.io/controller-tools/pkg/markers"
)

// NB(directxman12): markers.LoadRoots ignores autogenerated code via a build tag,
// so any time we check for existing deepcopy functions, we only see manually written ones.

const (
	runtimeObjPath = "k8s.io/apimachinery/pkg/runtime.Object"
)

var (
	enablePkgMarker  = markers.Must(markers.MakeDefinition("kubebuilder:object:generate", markers.DescribesPackage, false))
	enableTypeMarker = markers.Must(markers.MakeDefinition("kubebuilder:object:generate", markers.DescribesType, false))
	isObjectMarker   = markers.Must(markers.MakeDefinition("kubebuilder:object:root", markers.DescribesType, false))

	legacyEnablePkgMarker  = markers.Must(markers.MakeDefinition("k8s:deepcopy-gen", markers.DescribesPackage, markers.RawArguments(nil)))
	legacyEnableTypeMarker = markers.Must(markers.MakeDefinition("k8s:deepcopy-gen", markers.DescribesType, markers.RawArguments(nil)))
	legacyIsObjectMarker   = markers.Must(markers.MakeDefinition("k8s:deepcopy-gen:interfaces", markers.DescribesType, ""))
)

// +controllertools:marker:generateHelp

// Generator generates code containing DeepCopy, DeepCopyInto, and
// DeepCopyObject method implementations.
type Generator struct {
	// HeaderFile specifies the header text (e.g. license) to prepend to generated files.
	HeaderFile string `marker:",optional"`
	// Year specifies the year to substitute for " YEAR" in the header file.
	Year string `marker:",optional"`
}

func (Generator) CheckFilter() loader.NodeFilter {
	return func(node ast.Node) bool {
		// ignore interfaces
		_, isIface := node.(*ast.InterfaceType)
		return !isIface
	}
}

func (Generator) RegisterMarkers(into *markers.Registry) error {
	if err := markers.RegisterAll(into,
		enablePkgMarker, legacyEnablePkgMarker, enableTypeMarker,
		legacyEnableTypeMarker, isObjectMarker, legacyIsObjectMarker); err != nil {
		return err
	}
	into.AddHelp(enablePkgMarker,
		markers.SimpleHelp("object", "enables or disables object interface & deepcopy implementation generation for this package"))
	into.AddHelp(enableTypeMarker,
		markers.SimpleHelp("object", "overrides enabling or disabling deepcopy generation for this type"))
	into.AddHelp(isObjectMarker,
		markers.SimpleHelp("object", "enables object interface implementation generation for this type"))

	into.AddHelp(legacyEnablePkgMarker,
		markers.DeprecatedHelp(enablePkgMarker.Name, "object", "enables or disables object interface & deepcopy implementation generation for this package"))
	into.AddHelp(legacyEnableTypeMarker,
		markers.DeprecatedHelp(enableTypeMarker.Name, "object", "overrides enabling or disabling deepcopy generation for this type"))
	into.AddHelp(legacyIsObjectMarker,
		markers.DeprecatedHelp(isObjectMarker.Name, "object", "enables object interface implementation generation for this type"))
	return nil
}

func enabledOnPackage(col *markers.Collector, pkg *loader.Package) (bool, error) {
	pkgMarkers, err := markers.PackageMarkers(col, pkg)
	if err != nil {
		return false, err
	}
	pkgMarker := pkgMarkers.Get(enablePkgMarker.Name)
	if pkgMarker != nil {
		return pkgMarker.(bool), nil
	}
	legacyMarker := pkgMarkers.Get(legacyEnablePkgMarker.Name)
	if legacyMarker != nil {
		legacyMarkerVal := string(legacyMarker.(markers.RawArguments))
		firstArg := strings.Split(legacyMarkerVal, ",")[0]
		return firstArg == "package", nil
	}

	return false, nil
}

func enabledOnType(allTypes bool, info *markers.TypeInfo) bool {
	if typeMarker := info.Markers.Get(enableTypeMarker.Name); typeMarker != nil {
		return typeMarker.(bool)
	}
	legacyMarker := info.Markers.Get(legacyEnableTypeMarker.Name)
	if legacyMarker != nil {
		legacyMarkerVal := string(legacyMarker.(markers.RawArguments))
		return legacyMarkerVal == "true"
	}
	return allTypes || genObjectInterface(info)
}

func genObjectInterface(info *markers.TypeInfo) bool {
	objectEnabled := info.Markers.Get(isObjectMarker.Name)
	if objectEnabled != nil {
		return objectEnabled.(bool)
	}

	for _, legacyEnabled := range info.Markers[legacyIsObjectMarker.Name] {
		if legacyEnabled == runtimeObjPath {
			return true
		}
	}
	return false
}

func (d Generator) Generate(ctx *genall.GenerationContext) error {
	var headerText string

	if d.HeaderFile != "" {
		headerBytes, err := ctx.ReadFile(d.HeaderFile)
		if err != nil {
			return err
		}
		headerText = string(headerBytes)
	}
	headerText = strings.ReplaceAll(headerText, " YEAR", " "+d.Year)

	objGenCtx := ObjectGenCtx{
		Collector:  ctx.Collector,
		Checker:    ctx.Checker,
		HeaderText: headerText,
	}

	for _, root := range ctx.Roots {
		outContents := objGenCtx.generateForPackage(root)
		if outContents == nil {
			continue
		}

		writeOut(ctx, root, outContents)
	}

	return nil
}

// ObjectGenCtx contains the common info for generating deepcopy implementations.
// It mostly exists so that generating for a package can be easily tested without
// requiring a full set of output rules, etc.
type ObjectGenCtx struct {
	Collector  *markers.Collector
	Checker    *loader.TypeChecker
	HeaderText string
}

// writeHeader writes out the build tag, package declaration, and imports
func writeHeader(pkg *loader.Package, out io.Writer, packageName string, imports *importsList, headerText string) {
	// NB(directxman12): blank line after build tags to distinguish them from comments
	_, err := fmt.Fprintf(out, `// +build !ignore_autogenerated

%[3]s

// Code generated by controller-gen. DO NOT EDIT.

package %[1]s

import (
%[2]s
)

`, packageName, strings.Join(imports.ImportSpecs(), "\n"), headerText)
	if err != nil {
		pkg.AddError(err)
	}
}

// generateForPackage generates DeepCopy and runtime.Object implementations for
// types in the given package, writing the formatted result to given writer.
// May return nil if source could not be generated.
func (ctx *ObjectGenCtx) generateForPackage(root *loader.Package) []byte {
	allTypes, err := enabledOnPackage(ctx.Collector, root)
	if err != nil {
		root.AddError(err)
		return nil
	}

	ctx.Checker.Check(root)

	root.NeedTypesInfo()

	byType := make(map[string][]byte)
	imports := &importsList{
		byPath:  make(map[string]string),
		byAlias: make(map[string]string),
		pkg:     root,
	}
	// avoid confusing aliases by "reserving" the root package's name as an alias
	imports.byAlias[root.Name] = ""

	if err := markers.EachType(ctx.Collector, root, func(info *markers.TypeInfo) {
		outContent := new(bytes.Buffer)

		// copy when enabled for all types and not disabled, or enabled
		// specifically on this type
		if !enabledOnType(allTypes, info) {
			return
		}

		// avoid copying non-exported types, etc
		if !shouldBeCopied(root, info) {
			return
		}

		copyCtx := &copyMethodMaker{
			pkg:         root,
			importsList: imports,
			codeWriter:  &codeWriter{out: outContent},
		}

		copyCtx.GenerateMethodsFor(root, info)

		outBytes := outContent.Bytes()
		if len(outBytes) > 0 {
			byType[info.Name] = outBytes
		}
	}); err != nil {
		root.AddError(err)
		return nil
	}

	if len(byType) == 0 {
		return nil
	}

	outContent := new(bytes.Buffer)
	writeHeader(root, outContent, root.Name, imports, ctx.HeaderText)
	writeMethods(root, outContent, byType)

	outBytes := outContent.Bytes()
	formattedBytes, err := format.Source(outBytes)
	if err != nil {
		root.AddError(err)
		// we still write the invalid source to disk to figure out what went wrong
	} else {
		outBytes = formattedBytes
	}

	return outBytes
}

// writeMethods writes each method to the file, sorted by type name.
func writeMethods(pkg *loader.Package, out io.Writer, byType map[string][]byte) {
	sortedNames := make([]string, 0, len(byType))
	for name := range byType {
		sortedNames = append(sortedNames, name)
	}
	sort.Strings(sortedNames)

	for _, name := range sortedNames {
		_, err := out.Write(byType[name])
		if err != nil {
			pkg.AddError(err)
		}
	}
}

// writeOut outputs the given code, after gofmt-ing it. If we couldn't gofmt,
// we write the unformatted code for debugging purposes.
func writeOut(ctx *genall.GenerationContext, root *loader.Package, outBytes []byte) {
	outputFile, err := ctx.Open(root, "zz_generated.deepcopy.go")
	if err != nil {
		root.AddError(err)
		return
	}
	defer outputFile.Close()
	n, err := outputFile.Write(outBytes)
	if err != nil {
		root.AddError(err)
		return
	}
	if n < len(outBytes) {
		root.AddError(io.ErrShortWrite)
	}
}
|
829 vendor/sigs.k8s.io/controller-tools/pkg/deepcopy/traverse.go generated vendored Normal file
@@ -0,0 +1,829 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package deepcopy

import (
	"fmt"
	"go/ast"
	"go/types"
	"io"
	"path"
	"strings"
	"unicode"
	"unicode/utf8"

	"sigs.k8s.io/controller-tools/pkg/loader"
	"sigs.k8s.io/controller-tools/pkg/markers"
)
// NB(directxman12): This code is a bit of a byzantine mess.
// I've tried to clean it up a bit from the original in deepcopy-gen,
// but parts remain a bit convoluted.  Exercise caution when changing.
// It's perhaps a tad over-commented now, but better safe than sorry.
// It also seriously needs auditing for sanity -- there's parts where we
// copy the original deepcopy-gen's output just to be safe, but some of that
// could be simplified away if we're careful.

// codeWriter assists in writing out Go code lines and blocks to a writer.
type codeWriter struct {
	out io.Writer
}

// Line writes a single line.
func (c *codeWriter) Line(line string) {
	fmt.Fprintln(c.out, line)
}

// Linef writes a single line with formatting (as per fmt.Sprintf).
func (c *codeWriter) Linef(line string, args ...interface{}) {
	fmt.Fprintf(c.out, line+"\n", args...)
}

// If writes an if statement with the given setup/condition clause, executing
// the given function to write the contents of the block.
func (c *codeWriter) If(setup string, block func()) {
	c.Linef("if %s {", setup)
	block()
	c.Line("}")
}

// IfElse writes if and else statements with the given setup/condition clause, executing
// the given functions to write the contents of the blocks.
func (c *codeWriter) IfElse(setup string, ifBlock func(), elseBlock func()) {
	c.Linef("if %s {", setup)
	ifBlock()
	c.Line("} else {")
	elseBlock()
	c.Line("}")
}

// For writes a for statement with the given setup/condition clause, executing
// the given function to write the contents of the block.
func (c *codeWriter) For(setup string, block func()) {
	c.Linef("for %s {", setup)
	block()
	c.Line("}")
}
// importsList keeps track of required imports, automatically assigning aliases
// to import statements.
type importsList struct {
	byPath  map[string]string
	byAlias map[string]string

	pkg *loader.Package
}

// NeedImport marks that the given package is needed in the list of imports,
// returning the ident (import alias) that should be used to reference the package.
func (l *importsList) NeedImport(importPath string) string {
	// we get an actual path from Package, which might include vendored
	// packages if running on a package in vendor.
	if ind := strings.LastIndex(importPath, "/vendor/"); ind != -1 {
		importPath = importPath[ind+8: /* len("/vendor/") */]
	}

	// check to see if we've already assigned an alias, and just return that.
	alias, exists := l.byPath[importPath]
	if exists {
		return alias
	}

	// otherwise, calculate an import alias by joining path parts till we get something unique
	restPath, nextWord := path.Split(importPath)

	for otherPath, exists := "", true; exists && otherPath != importPath; otherPath, exists = l.byAlias[alias] {
		if restPath == "" {
			// do something else to disambiguate if we've run out of parts and
			// still have duplicates, somehow
			alias += "x"
		}

		// can't have a first digit, per Go identifier rules, so just skip them
		for firstRune, runeLen := utf8.DecodeRuneInString(nextWord); unicode.IsDigit(firstRune); firstRune, runeLen = utf8.DecodeRuneInString(nextWord) {
			nextWord = nextWord[runeLen:]
		}

		// make a valid identifier by replacing "bad" characters with underscores
		nextWord = strings.Map(func(r rune) rune {
			if unicode.IsLetter(r) || unicode.IsDigit(r) || r == '_' {
				return r
			}
			return '_'
		}, nextWord)

		alias = nextWord + alias
		if len(restPath) > 0 {
			restPath, nextWord = path.Split(restPath[:len(restPath)-1] /* chop off final slash */)
		}
	}

	l.byPath[importPath] = alias
	l.byAlias[alias] = importPath
	return alias
}

// ImportSpecs returns a string form of each import spec
// (i.e. `alias "path/to/import"`).  Aliases are only present
// when they don't match the package name.
func (l *importsList) ImportSpecs() []string {
	res := make([]string, 0, len(l.byPath))
	for importPath, alias := range l.byPath {
		pkg := l.pkg.Imports()[importPath]
		if pkg != nil && pkg.Name == alias {
			// don't print if alias is the same as package name
			// (we've already taken care of duplicates).
			res = append(res, fmt.Sprintf("%q", importPath))
		} else {
			res = append(res, fmt.Sprintf("%s %q", alias, importPath))
		}
	}
	return res
}
// namingInfo holds package and syntax for referencing a field, type,
// etc.  It's used to allow lazily marking import usage.
// You should generally retrieve the syntax using Syntax.
type namingInfo struct {
	// typeInfo is the type being named.
	typeInfo     types.Type
	nameOverride string
}

// Syntax calculates the code representation of the given type or name,
// and marks that it is used (potentially marking an import as used).
func (n *namingInfo) Syntax(basePkg *loader.Package, imports *importsList) string {
	if n.nameOverride != "" {
		return n.nameOverride
	}

	// NB(directxman12): typeInfo.String gets us most of the way there,
	// but fails (for us) on named imports, since it uses the full package path.
	switch typeInfo := n.typeInfo.(type) {
	case *types.Named:
		// register that we need an import for this type,
		// so we can get the appropriate alias to use.
		typeName := typeInfo.Obj()
		otherPkg := typeName.Pkg()
		if otherPkg == basePkg.Types {
			// local import
			return typeName.Name()
		}
		alias := imports.NeedImport(loader.NonVendorPath(otherPkg.Path()))
		return alias + "." + typeName.Name()
	case *types.Basic:
		return typeInfo.String()
	case *types.Pointer:
		return "*" + (&namingInfo{typeInfo: typeInfo.Elem()}).Syntax(basePkg, imports)
	case *types.Slice:
		return "[]" + (&namingInfo{typeInfo: typeInfo.Elem()}).Syntax(basePkg, imports)
	case *types.Map:
		return fmt.Sprintf(
			"map[%s]%s",
			(&namingInfo{typeInfo: typeInfo.Key()}).Syntax(basePkg, imports),
			(&namingInfo{typeInfo: typeInfo.Elem()}).Syntax(basePkg, imports))
	default:
		basePkg.AddError(fmt.Errorf("name requested for invalid type: %s", typeInfo))
		return typeInfo.String()
	}
}
// copyMethodMaker makes DeepCopy (and related) methods for Go types,
// writing them to its codeWriter.
type copyMethodMaker struct {
	pkg *loader.Package
	*importsList
	*codeWriter
}

// GenerateMethodsFor makes DeepCopy, DeepCopyInto, and DeepCopyObject methods
// for the given type, when appropriate
func (c *copyMethodMaker) GenerateMethodsFor(root *loader.Package, info *markers.TypeInfo) {
	typeInfo := root.TypesInfo.TypeOf(info.RawSpec.Name)
	if typeInfo == types.Typ[types.Invalid] {
		root.AddError(loader.ErrFromNode(fmt.Errorf("unknown type: %s", info.Name), info.RawSpec))
	}

	// figure out if we need to use a pointer receiver -- most types get a pointer receiver,
	// except those that are aliases to types that are already pass-by-reference (pointers,
	// interfaces, maps, slices).
	ptrReceiver := usePtrReceiver(typeInfo)

	hasManualDeepCopyInto := hasDeepCopyIntoMethod(root, typeInfo)
	hasManualDeepCopy, deepCopyOnPtr := hasDeepCopyMethod(root, typeInfo)

	// only generate each method if it hasn't been implemented.
	if !hasManualDeepCopyInto {
		c.Line("// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.")
		if ptrReceiver {
			c.Linef("func (in *%s) DeepCopyInto(out *%s) {", info.Name, info.Name)
		} else {
			c.Linef("func (in %s) DeepCopyInto(out *%s) {", info.Name, info.Name)
			c.Line("{in := &in") // add an extra block so that we can redefine `in` without type issues
		}

		// just wrap the existing deepcopy if present
		if hasManualDeepCopy {
			if deepCopyOnPtr {
				c.Line("clone := in.DeepCopy()")
				c.Line("*out = *clone")
			} else {
				c.Line("*out = in.DeepCopy()")
			}
		} else {
			c.genDeepCopyIntoBlock(&namingInfo{nameOverride: info.Name}, typeInfo)
		}

		if !ptrReceiver {
			c.Line("}") // close our extra "in redefinition" block
		}
		c.Line("}")
	}

	if !hasManualDeepCopy {
		// these are both straightforward, so we just template them out.
		if ptrReceiver {
			c.Linef(ptrDeepCopy, info.Name)
		} else {
			c.Linef(bareDeepCopy, info.Name)
		}

		// maybe also generate DeepCopyObject, if asked.
		if genObjectInterface(info) {
			// we always need runtime.Object for DeepCopyObject
			runtimeAlias := c.NeedImport("k8s.io/apimachinery/pkg/runtime")
			if ptrReceiver {
				c.Linef(ptrDeepCopyObj, info.Name, runtimeAlias)
			} else {
				c.Linef(bareDeepCopyObj, info.Name, runtimeAlias)
			}
		}
	}
}
// genDeepCopyIntoBlock generates a DeepCopyInto block for the given type.  The
// block is *not* wrapped in curly braces.
func (c *copyMethodMaker) genDeepCopyIntoBlock(actualName *namingInfo, typeInfo types.Type) {
	// we calculate *how* we should copy mostly based on the "eventual" type of
	// a given type (i.e. the type that results from following all aliases)
	last := eventualUnderlyingType(typeInfo)

	// we might hit a type that has a manual deepcopy method written on non-root types
	// (this case is handled for root types in GenerateMethodsFor).
	// In that case (when we're not dealing with a pointer, since those need special handling
	// to match 1-to-1 with k8s deepcopy-gen), just use that.
	if _, isPtr := last.(*types.Pointer); !isPtr && hasAnyDeepCopyMethod(c.pkg, typeInfo) {
		c.Line("*out = in.DeepCopy()")
		return
	}

	switch last := last.(type) {
	case *types.Basic:
		switch last.Kind() {
		case types.Invalid, types.UnsafePointer:
			c.pkg.AddError(fmt.Errorf("invalid type: %s", last))
		default:
			// basic types themselves can be "shallow" copied, so all we need
			// to do is check if our *actual* type (not the underlying one) has
			// a custom method implemented.
			if hasMethod, _ := hasDeepCopyMethod(c.pkg, typeInfo); hasMethod {
				c.Line("*out = in.DeepCopy()")
			}
			c.Line("*out = *in")
		}
	case *types.Map:
		c.genMapDeepCopy(actualName, last)
	case *types.Slice:
		c.genSliceDeepCopy(actualName, last)
	case *types.Struct:
		c.genStructDeepCopy(actualName, last)
	case *types.Pointer:
		c.genPointerDeepCopy(actualName, last)
	case *types.Named:
		// handled via the above loop, should never happen
		c.pkg.AddError(fmt.Errorf("interface type %s encountered directly, invalid condition", last))
	default:
		c.pkg.AddError(fmt.Errorf("invalid type: %s", last))
	}
}
// genMapDeepCopy generates DeepCopy code for the given named type whose eventual
// type is the given map type.
func (c *copyMethodMaker) genMapDeepCopy(actualName *namingInfo, mapType *types.Map) {
	// maps *must* have shallow-copiable types, since we just iterate
	// through the keys, only trying to deepcopy the values.
	if !fineToShallowCopy(mapType.Key()) {
		c.pkg.AddError(fmt.Errorf("invalid map key type: %s", mapType.Key()))
		return
	}

	// make our actual type (not the underlying one)...
	c.Linef("*out = make(%[1]s, len(*in))", actualName.Syntax(c.pkg, c.importsList))

	// ...and copy each element appropriately
	c.For("key, val := range *in", func() {
		// check if we have manually written methods,
		// in which case we'll just try and use those
		hasDeepCopy, copyOnPtr := hasDeepCopyMethod(c.pkg, mapType.Elem())
		hasDeepCopyInto := hasDeepCopyIntoMethod(c.pkg, mapType.Elem())
		switch {
		case hasDeepCopyInto || hasDeepCopy:
			// use the manually-written methods
			_, fieldIsPtr := mapType.Elem().(*types.Pointer) // is "out" actually a pointer
			inIsPtr := resultWillBePointer(mapType.Elem(), hasDeepCopy, copyOnPtr) // does copying "in" produce a pointer
			if hasDeepCopy {
				// If we're calling DeepCopy, check if its receiver needs a pointer
				inIsPtr = copyOnPtr
			}
			if inIsPtr == fieldIsPtr {
				c.Line("(*out)[key] = val.DeepCopy()")
			} else if fieldIsPtr {
				c.Line("{") // use a block because we use `x` as a temporary
				c.Line("x := val.DeepCopy()")
				c.Line("(*out)[key] = &x")
				c.Line("}")
			} else {
				c.Line("(*out)[key] = *val.DeepCopy()")
			}
		case fineToShallowCopy(mapType.Elem()):
			// just shallow copy types for which it's safe to do so
			c.Line("(*out)[key] = val")
		default:
			// otherwise, we've got some kind-specific actions,
			// based on the element's eventual type.

			underlyingElem := eventualUnderlyingType(mapType.Elem())

			// if it passes by reference, let the main switch handle it
			if passesByReference(underlyingElem) {
				c.Linef("var outVal %[1]s", (&namingInfo{typeInfo: underlyingElem}).Syntax(c.pkg, c.importsList))
				c.IfElse("val == nil", func() {
					c.Line("(*out)[key] = nil")
				}, func() {
					c.Line("in, out := &val, &outVal")
					c.genDeepCopyIntoBlock(&namingInfo{typeInfo: mapType.Elem()}, mapType.Elem())
				})
				c.Line("(*out)[key] = outVal")

				return
			}

			// otherwise...
			switch underlyingElem := underlyingElem.(type) {
			case *types.Struct:
				// structs will have deepcopy generated for them, so use that
				c.Line("(*out)[key] = *val.DeepCopy()")
			default:
				c.pkg.AddError(fmt.Errorf("invalid map value type: %s", underlyingElem))
				return
			}
		}
	})
}
// genSliceDeepCopy generates DeepCopy code for the given named type whose
// underlying type is the given slice.
func (c *copyMethodMaker) genSliceDeepCopy(actualName *namingInfo, sliceType *types.Slice) {
	underlyingElem := eventualUnderlyingType(sliceType.Elem())

	// make the actual type (not the underlying)
	c.Linef("*out = make(%[1]s, len(*in))", actualName.Syntax(c.pkg, c.importsList))

	// check if we need to do anything special, or just copy each element appropriately
	switch {
	case hasAnyDeepCopyMethod(c.pkg, sliceType.Elem()):
		// just use deepcopy if it's present (deepcopyinto will be filled in by our code)
		c.For("i := range *in", func() {
			c.Line("(*in)[i].DeepCopyInto(&(*out)[i])")
		})
	case fineToShallowCopy(underlyingElem):
		// shallow copy if ok
		c.Line("copy(*out, *in)")
	default:
		// copy each element appropriately
		c.For("i := range *in", func() {
			// fall back to normal code for reference types or those with custom logic
			if passesByReference(underlyingElem) || hasAnyDeepCopyMethod(c.pkg, sliceType.Elem()) {
				c.If("(*in)[i] != nil", func() {
					c.Line("in, out := &(*in)[i], &(*out)[i]")
					c.genDeepCopyIntoBlock(&namingInfo{typeInfo: sliceType.Elem()}, sliceType.Elem())
				})
				return
			}

			switch underlyingElem.(type) {
			case *types.Struct:
				// structs will always have deepcopy
				c.Linef("(*in)[i].DeepCopyInto(&(*out)[i])")
			default:
				c.pkg.AddError(fmt.Errorf("invalid slice element type: %s", underlyingElem))
			}
		})
	}
}
// genStructDeepCopy generates DeepCopy code for the given named type whose
// underlying type is the given struct.
func (c *copyMethodMaker) genStructDeepCopy(_ *namingInfo, structType *types.Struct) {
	c.Line("*out = *in")

	for i := 0; i < structType.NumFields(); i++ {
		field := structType.Field(i)

		// if we have a manual deepcopy, use that
		hasDeepCopy, copyOnPtr := hasDeepCopyMethod(c.pkg, field.Type())
		hasDeepCopyInto := hasDeepCopyIntoMethod(c.pkg, field.Type())
		if hasDeepCopyInto || hasDeepCopy {
			// NB(directxman12): yes, I know this is kind-of weird that we
			// have all this special-casing here, but it's nice for testing
			// purposes to be 1-to-1 with deepcopy-gen, which does all sorts of
			// stuff like this (I'm pretty sure I found some codepaths that
			// never execute there, because they're pretty clearly invalid
			// syntax).

			_, fieldIsPtr := field.Type().(*types.Pointer)
			inIsPtr := resultWillBePointer(field.Type(), hasDeepCopy, copyOnPtr)
			if fieldIsPtr {
				// we'll need an if block to check for nilness
				// we'll let genDeepCopyIntoBlock handle the details, we just needed the setup
				c.If(fmt.Sprintf("in.%s != nil", field.Name()), func() {
					c.Linef("in, out := &in.%[1]s, &out.%[1]s", field.Name())
					c.genDeepCopyIntoBlock(&namingInfo{typeInfo: field.Type()}, field.Type())
				})
			} else {
				// special-case for compatibility with deepcopy-gen
				if inIsPtr == fieldIsPtr {
					c.Linef("out.%[1]s = in.%[1]s.DeepCopy()", field.Name())
				} else {
					c.Linef("in.%[1]s.DeepCopyInto(&out.%[1]s)", field.Name())
				}
			}
			continue
		}

		// pass-by-reference fields get delegated to the main type
		underlyingField := eventualUnderlyingType(field.Type())
		if passesByReference(underlyingField) {
			c.If(fmt.Sprintf("in.%s != nil", field.Name()), func() {
				c.Linef("in, out := &in.%[1]s, &out.%[1]s", field.Name())
				c.genDeepCopyIntoBlock(&namingInfo{typeInfo: field.Type()}, field.Type())
			})
			continue
		}

		// otherwise...
		switch underlyingField := underlyingField.(type) {
		case *types.Basic:
			switch underlyingField.Kind() {
			case types.Invalid, types.UnsafePointer:
				c.pkg.AddError(loader.ErrFromNode(fmt.Errorf("invalid field type: %s", underlyingField), field))
				return
			default:
				// nothing to do, initial assignment copied this
			}
		case *types.Struct:
			if fineToShallowCopy(field.Type()) {
				c.Linef("out.%[1]s = in.%[1]s", field.Name())
			} else {
				c.Linef("in.%[1]s.DeepCopyInto(&out.%[1]s)", field.Name())
			}
		default:
			c.pkg.AddError(loader.ErrFromNode(fmt.Errorf("invalid field type: %s", underlyingField), field))
			return
		}
	}
}
// genPointerDeepCopy generates DeepCopy code for the given named type whose
// underlying type is the given pointer.
func (c *copyMethodMaker) genPointerDeepCopy(_ *namingInfo, pointerType *types.Pointer) {
	underlyingElem := eventualUnderlyingType(pointerType.Elem())

	// if we have a manually written deepcopy, just use that
	hasDeepCopy, copyOnPtr := hasDeepCopyMethod(c.pkg, pointerType.Elem())
	hasDeepCopyInto := hasDeepCopyIntoMethod(c.pkg, pointerType.Elem())
	if hasDeepCopyInto || hasDeepCopy {
		outNeedsPtr := resultWillBePointer(pointerType.Elem(), hasDeepCopy, copyOnPtr)
		if hasDeepCopy {
			outNeedsPtr = copyOnPtr
		}
		if outNeedsPtr {
			c.Line("*out = (*in).DeepCopy()")
		} else {
			c.Line("x := (*in).DeepCopy()")
			c.Line("*out = &x")
		}
		return
	}

	// shallow-copiable types are pretty easy
	if fineToShallowCopy(underlyingElem) {
		c.Linef("*out = new(%[1]s)", (&namingInfo{typeInfo: pointerType.Elem()}).Syntax(c.pkg, c.importsList))
		c.Line("**out = **in")
		return
	}

	// pass-by-reference types get delegated to the main switch
	if passesByReference(underlyingElem) {
		c.Linef("*out = new(%s)", (&namingInfo{typeInfo: underlyingElem}).Syntax(c.pkg, c.importsList))
		c.If("**in != nil", func() {
			c.Line("in, out := *in, *out")
			c.genDeepCopyIntoBlock(&namingInfo{typeInfo: underlyingElem}, eventualUnderlyingType(underlyingElem))
		})
		return
	}

	// otherwise...
	switch underlyingElem := underlyingElem.(type) {
	case *types.Struct:
		c.Linef("*out = new(%[1]s)", (&namingInfo{typeInfo: pointerType.Elem()}).Syntax(c.pkg, c.importsList))
		c.Line("(*in).DeepCopyInto(*out)")
	default:
		c.pkg.AddError(fmt.Errorf("invalid pointer element type: %s", underlyingElem))
		return
	}
}
// usePtrReceiver checks if we need a pointer receiver on methods for the given type.
// Pass-by-reference types don't get pointer receivers.
func usePtrReceiver(typeInfo types.Type) bool {
	switch typeInfo.(type) {
	case *types.Pointer:
		return false
	case *types.Map:
		return false
	case *types.Slice:
		return false
	case *types.Named:
		return usePtrReceiver(typeInfo.Underlying())
	default:
		return true
	}
}

func resultWillBePointer(typeInfo types.Type, hasDeepCopy, deepCopyOnPtr bool) bool {
	// if we have a manual deepcopy, we can just check what that returns
	if hasDeepCopy {
		return deepCopyOnPtr
	}

	// otherwise, we'll need to check its type
	switch typeInfo := typeInfo.(type) {
	case *types.Pointer:
		// NB(directxman12): we don't have to worry about the elem having a deepcopy,
		// since hasManualDeepCopy would've caught that.

		// we'll be calling on the elem, so check that
		return resultWillBePointer(typeInfo.Elem(), false, false)
	case *types.Map:
		return false
	case *types.Slice:
		return false
	case *types.Named:
		return resultWillBePointer(typeInfo.Underlying(), false, false)
	default:
		return true
	}
}
// shouldBeCopied checks if we're supposed to make deepcopy methods for the given type.
//
// This is the case if it's exported *and* either:
// - has a partial manual DeepCopy implementation (in which case we fill in the rest)
// - aliases to a non-basic type eventually
// - is a struct
func shouldBeCopied(pkg *loader.Package, info *markers.TypeInfo) bool {
	if !ast.IsExported(info.Name) {
		return false
	}

	typeInfo := pkg.TypesInfo.TypeOf(info.RawSpec.Name)
	if typeInfo == types.Typ[types.Invalid] {
		pkg.AddError(loader.ErrFromNode(fmt.Errorf("unknown type: %s", info.Name), info.RawSpec))
		return false
	}

	// according to gengo, everything named is an alias, except for an alias to a pointer,
	// which is just a pointer, afaict.  Just roll with it.
	if asPtr, isPtr := typeInfo.(*types.Named).Underlying().(*types.Pointer); isPtr {
		typeInfo = asPtr
	}

	lastType := typeInfo
	if _, isNamed := typeInfo.(*types.Named); isNamed {
		// if it has a manual deepcopy or deepcopyinto, we're fine
		if hasAnyDeepCopyMethod(pkg, typeInfo) {
			return true
		}

		for underlyingType := typeInfo.Underlying(); underlyingType != lastType; lastType, underlyingType = underlyingType, underlyingType.Underlying() {
			// if it has a manual deepcopy or deepcopyinto, we're fine
			if hasAnyDeepCopyMethod(pkg, underlyingType) {
				return true
			}

			// aliases to other things besides basics need copy methods
			// (basics can be straight-up shallow-copied)
			if _, isBasic := underlyingType.(*types.Basic); !isBasic {
				return true
			}
		}
	}

	// structs are the only thing that's not a basic that's copiable by default
	_, isStruct := lastType.(*types.Struct)
	return isStruct
}
// hasDeepCopyMethod checks if this type has a manual DeepCopy method and if
// the method has a pointer receiver.
func hasDeepCopyMethod(pkg *loader.Package, typeInfo types.Type) (bool, bool) {
	deepCopyMethod, ind, _ := types.LookupFieldOrMethod(typeInfo, true /* check pointers too */, pkg.Types, "DeepCopy")
	if len(ind) != 1 {
		// ignore embedded methods
		return false, false
	}
	if deepCopyMethod == nil {
		return false, false
	}

	methodSig := deepCopyMethod.Type().(*types.Signature)
	if methodSig.Params() != nil && methodSig.Params().Len() != 0 {
		return false, false
	}
	if methodSig.Results() == nil || methodSig.Results().Len() != 1 {
		return false, false
	}

	recvAsPtr, recvIsPtr := methodSig.Recv().Type().(*types.Pointer)
	if recvIsPtr {
		// NB(directxman12): the pointer type returned here isn't comparable even though they
		// have the same underlying type, for some reason (probably that
		// LookupFieldOrMethod calls types.NewPointer for us), so check the
		// underlying values.

		resultPtr, resultIsPtr := methodSig.Results().At(0).Type().(*types.Pointer)
		if !resultIsPtr {
			// pointer vs non-pointer are different types
			return false, false
		}

		if recvAsPtr.Elem() != resultPtr.Elem() {
			return false, false
		}
	} else if methodSig.Results().At(0).Type() != methodSig.Recv().Type() {
		return false, false
	}

	return true, recvIsPtr
}
// hasDeepCopyIntoMethod checks if this type has a manual DeepCopyInto method.
func hasDeepCopyIntoMethod(pkg *loader.Package, typeInfo types.Type) bool {
	deepCopyMethod, ind, _ := types.LookupFieldOrMethod(typeInfo, true /* check pointers too */, pkg.Types, "DeepCopyInto")
	if len(ind) != 1 {
		// ignore embedded methods
		return false
	}
	if deepCopyMethod == nil {
		return false
	}

	methodSig := deepCopyMethod.Type().(*types.Signature)
	if methodSig.Params() == nil || methodSig.Params().Len() != 1 {
		return false
	}
	paramPtr, isPtr := methodSig.Params().At(0).Type().(*types.Pointer)
	if !isPtr {
		return false
	}
	if methodSig.Results() != nil && methodSig.Results().Len() != 0 {
		return false
	}

	if recvPtr, recvIsPtr := methodSig.Recv().Type().(*types.Pointer); recvIsPtr {
		// NB(directxman12): the pointer type returned here isn't comparable even though they
		// have the same underlying type, for some reason (probably that
		// LookupFieldOrMethod calls types.NewPointer for us), so check the
		// underlying values.
		return paramPtr.Elem() == recvPtr.Elem()
	}
	return methodSig.Recv().Type() == paramPtr.Elem()
}
// hasAnyDeepCopyMethod checks if the given type has DeepCopy or DeepCopyInto
// (either of which implies the other will exist eventually).
func hasAnyDeepCopyMethod(pkg *loader.Package, typeInfo types.Type) bool {
	hasDeepCopy, _ := hasDeepCopyMethod(pkg, typeInfo)
	return hasDeepCopy || hasDeepCopyIntoMethod(pkg, typeInfo)
}

// eventualUnderlyingType gets the "final" type in a sequence of named aliases.
// It's effectively a shortcut for calling Underlying in a loop.
func eventualUnderlyingType(typeInfo types.Type) types.Type {
	last := typeInfo
	for underlying := typeInfo.Underlying(); underlying != last; last, underlying = underlying, underlying.Underlying() {
		// get the actual underlying type
	}
	return last
}
// fineToShallowCopy checks if shallow-copying a type is equivalent to deepcopy-ing it.
func fineToShallowCopy(typeInfo types.Type) bool {
	switch typeInfo := typeInfo.(type) {
	case *types.Basic:
		// basic types (int, string, etc) are always fine to shallow-copy,
		// except for Invalid and UnsafePointer, which can't be copied at all.
		switch typeInfo.Kind() {
		case types.Invalid, types.UnsafePointer:
			return false
		default:
			return true
		}
	case *types.Named:
		// aliases are fine to shallow-copy as long as they resolve to a shallow-copyable type
		return fineToShallowCopy(typeInfo.Underlying())
	case *types.Struct:
		// structs are fine to shallow-copy if they have all shallow-copyable fields
		for i := 0; i < typeInfo.NumFields(); i++ {
			field := typeInfo.Field(i)
			if !fineToShallowCopy(field.Type()) {
				return false
			}
		}
		return true
	default:
		return false
	}
}
// passesByReference checks if the given type passes by reference
// (except for interfaces, which are handled separately).
func passesByReference(typeInfo types.Type) bool {
	switch typeInfo.(type) {
	case *types.Slice:
		return true
	case *types.Map:
		return true
	case *types.Pointer:
		return true
	default:
		return false
	}
}
var (
	// ptrDeepCopy is a DeepCopy for a type with an existing DeepCopyInto and a pointer receiver.
	ptrDeepCopy = `
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new %[1]s.
func (in *%[1]s) DeepCopy() *%[1]s {
	if in == nil { return nil }
	out := new(%[1]s)
	in.DeepCopyInto(out)
	return out
}
`

	// bareDeepCopy is a DeepCopy for a type with an existing DeepCopyInto and a non-pointer receiver.
	bareDeepCopy = `
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new %[1]s.
func (in %[1]s) DeepCopy() %[1]s {
	if in == nil { return nil }
	out := new(%[1]s)
	in.DeepCopyInto(out)
	return *out
}
`

	// ptrDeepCopyObj is a DeepCopyObject for a type with an existing DeepCopyInto and a pointer receiver.
	ptrDeepCopyObj = `
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in *%[1]s) DeepCopyObject() %[2]s.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}
`
	// bareDeepCopyObj is a DeepCopyObject for a type with an existing DeepCopyInto and a non-pointer receiver.
	bareDeepCopyObj = `
// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
func (in %[1]s) DeepCopyObject() %[2]s.Object {
	return in.DeepCopy()
}
`
)
45
vendor/sigs.k8s.io/controller-tools/pkg/deepcopy/zz_generated.markerhelp.go
generated
vendored
Normal file
@@ -0,0 +1,45 @@
|
||||
// +build !ignore_autogenerated
|
||||
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Code generated by helpgen. DO NOT EDIT.
|
||||
|
||||
package deepcopy
|
||||
|
||||
import (
|
||||
"sigs.k8s.io/controller-tools/pkg/markers"
|
||||
)
|
||||
|
||||
func (Generator) Help() *markers.DefinitionHelp {
|
||||
return &markers.DefinitionHelp{
|
||||
Category: "",
|
||||
DetailedHelp: markers.DetailedHelp{
|
||||
Summary: "generates code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.",
|
||||
Details: "",
|
||||
},
|
||||
FieldHelp: map[string]markers.DetailedHelp{
|
||||
"HeaderFile": {
|
||||
Summary: "specifies the header text (e.g. license) to prepend to generated files.",
|
||||
Details: "",
|
||||
},
|
||||
"Year": {
|
||||
Summary: "specifies the year to substitute for \"YEAR\" in the header file.",
|
||||
Details: "",
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
58
vendor/sigs.k8s.io/controller-tools/pkg/genall/doc.go
generated
vendored
Normal file
@@ -0,0 +1,58 @@
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Package genall defines entrypoints for generation tools to hook into and
|
||||
// share the same set of parsing, typechecking, and marker information.
|
||||
//
|
||||
// Generators
|
||||
//
|
||||
// Each Generator knows how to register its markers into a central Registry,
|
||||
// and then how to generate output using a Collector and some root packages.
|
||||
// Each generator can be considered to be the output type of a marker, for easy
|
||||
// command line parsing.
|
||||
//
|
||||
// Output and Input
|
||||
//
|
||||
// Generators output artifacts via an OutputRule. OutputRules know how to
|
||||
// write output for different package-associated (code) files, as well as
|
||||
// config files. Each OutputRule should also be considered to be the output
|
||||
// type as a marker, for easy command-line parsing.
|
||||
//
|
||||
// OutputRules groups together an OutputRule per generator, plus a default
|
||||
// output rule for any not explicitly specified.
|
||||
//
|
||||
// OutputRules are defined for stdout, file writing, and sending to /dev/null
|
||||
// (useful for doing "type-checking" without actually saving the results).
|
||||
//
|
||||
// InputRule defines custom input loading, but it's shared across all
|
||||
// Generators. There's currently only a filesystem implementation.
|
||||
//
|
||||
// Runtime and Context
|
||||
//
|
||||
// Runtime maps together Generators, and constructs "contexts" which provide
|
||||
// the common collector and roots, plus the output rule for that generator, and
|
||||
// a handle for reading files (like boilerplate headers).
|
||||
//
|
||||
// It will run all associated generators, printing errors and automatically
|
||||
// skipping type-checking errors (since those are commonly caused by the
|
||||
// partial type-checking of loader.TypeChecker).
|
||||
//
|
||||
// Options
|
||||
//
|
||||
// The FromOptions (and associated helpers) function makes it easy to use generators
|
||||
// and output rules as markers that can be parsed from the command line, producing
|
||||
// a registry from command line args.
|
||||
package genall
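The per-generator output rule with a default fallback described above can be modeled in miniature. This is an illustrative sketch only: the real `OutputRules` keys rules by `*Generator` (see `genall.go` below), not by a string name as here:

```go
package main

import "fmt"

// OutputRule stands in for genall's output-rule interface; a string is
// enough to show the lookup behavior.
type OutputRule string

// OutputRules maps generators to their output rules, with a Default used
// for any generator not explicitly listed.
type OutputRules struct {
	ByGenerator map[string]OutputRule
	Default     OutputRule
}

// ForGenerator returns the rule for the named generator, falling back to
// the default when none was configured.
func (r OutputRules) ForGenerator(name string) OutputRule {
	if rule, ok := r.ByGenerator[name]; ok {
		return rule
	}
	return r.Default
}

func main() {
	rules := OutputRules{
		ByGenerator: map[string]OutputRule{"crd": "dir:config/crd"},
		Default:     "stdout",
	}
	fmt.Println(rules.ForGenerator("crd"))      // dir:config/crd
	fmt.Println(rules.ForGenerator("deepcopy")) // stdout
}
```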
|
215
vendor/sigs.k8s.io/controller-tools/pkg/genall/genall.go
generated
vendored
Normal file
@@ -0,0 +1,215 @@
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package genall
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
|
||||
"golang.org/x/tools/go/packages"
|
||||
"sigs.k8s.io/yaml"
|
||||
|
||||
"sigs.k8s.io/controller-tools/pkg/loader"
|
||||
"sigs.k8s.io/controller-tools/pkg/markers"
|
||||
)
|
||||
|
||||
// Generators are a list of Generators.
|
||||
// NB(directxman12): this is a pointer so that we can uniquely identify each
|
||||
// instance of a generator, even if it's not hashable. Different *instances*
|
||||
// of a generator are treated differently.
|
||||
type Generators []*Generator
|
||||
|
||||
// RegisterMarkers registers all markers defined by each of the Generators in
|
||||
// this list into the given registry.
|
||||
func (g Generators) RegisterMarkers(reg *markers.Registry) error {
|
||||
for _, gen := range g {
|
||||
if err := (*gen).RegisterMarkers(reg); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// CheckFilters returns the set of NodeFilters for all Generators that
|
||||
// implement NeedsTypeChecking.
|
||||
func (g Generators) CheckFilters() []loader.NodeFilter {
|
||||
var filters []loader.NodeFilter
|
||||
for _, gen := range g {
|
||||
withFilter, needsChecking := (*gen).(NeedsTypeChecking)
|
||||
if !needsChecking {
|
||||
continue
|
||||
}
|
||||
filters = append(filters, withFilter.CheckFilter())
|
||||
}
|
||||
return filters
|
||||
}
|
||||
|
||||
// NeedsTypeChecking indicates that a particular generator needs & has opinions
|
||||
// on typechecking. If this is not implemented, a generator will be given a
|
||||
// context with a nil typechecker.
|
||||
type NeedsTypeChecking interface {
|
||||
// CheckFilter indicates the loader.NodeFilter (if any) that should be used
|
||||
// to prune out unused types/packages when type-checking (nodes for which
|
||||
// the filter returns true are considered "interesting"). This filter acts
|
||||
// as a baseline -- all types that pass through this filter will be checked,
|
||||
// but more than that may also be checked due to other generators' filters.
|
||||
CheckFilter() loader.NodeFilter
|
||||
}
|
||||
|
||||
// Generator knows how to register some set of markers, and then produce
|
||||
// output artifacts based on loaded code containing those markers,
|
||||
// sharing common loaded data.
|
||||
type Generator interface {
|
||||
// RegisterMarkers registers all markers needed by this Generator
|
||||
// into the given registry.
|
||||
RegisterMarkers(into *markers.Registry) error
|
||||
// Generate generates artifacts produced by this marker.
|
||||
// It's called *after* RegisterMarkers has been called.
|
||||
Generate(*GenerationContext) error
|
||||
}
|
||||
|
||||
// HasHelp is some Generator, OutputRule, etc with a help method.
|
||||
type HasHelp interface {
|
||||
// Help returns help for this generator.
|
||||
Help() *markers.DefinitionHelp
|
||||
}
|
||||
|
||||
// Runtime collects generators, loaded program data (Collector, root Packages),
|
||||
// and I/O rules, running them together.
|
||||
type Runtime struct {
|
||||
// Generators are the Generators to be run by this Runtime.
|
||||
Generators Generators
|
||||
// GenerationContext is the base generation context that's copied
|
||||
// to produce the context for each Generator.
|
||||
GenerationContext
|
||||
// OutputRules defines how to output artifacts for each Generator.
|
||||
OutputRules OutputRules
|
||||
}
|
||||
|
||||
// GenerationContext defines the common information needed for each Generator
|
||||
// to run.
|
||||
type GenerationContext struct {
|
||||
// Collector is the shared marker collector.
|
||||
Collector *markers.Collector
|
||||
// Roots are the base packages to be processed.
|
||||
Roots []*loader.Package
|
||||
// Checker is the shared partial type-checker.
|
||||
Checker *loader.TypeChecker
|
||||
// OutputRule describes how to output artifacts.
|
||||
OutputRule
|
||||
// InputRule describes how to load associated boilerplate artifacts.
|
||||
// It should *not* be used to load source files.
|
||||
InputRule
|
||||
}
|
||||
|
||||
// WriteYAML writes the given objects out, serialized as YAML, using the
|
||||
// context's OutputRule. Objects are written as separate documents, separated
|
||||
// from each other by `---` (as per the YAML spec).
|
||||
func (g GenerationContext) WriteYAML(itemPath string, objs ...interface{}) error {
|
||||
out, err := g.Open(nil, itemPath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer out.Close()
|
||||
|
||||
for _, obj := range objs {
|
||||
yamlContent, err := yaml.Marshal(obj)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
n, err := out.Write(append([]byte("\n---\n"), yamlContent...))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if n < len(yamlContent) {
|
||||
return io.ErrShortWrite
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ReadFile reads the given boilerplate artifact using the context's InputRule.
|
||||
func (g GenerationContext) ReadFile(path string) ([]byte, error) {
|
||||
file, err := g.OpenForRead(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer file.Close()
|
||||
return ioutil.ReadAll(file)
|
||||
}
|
||||
|
||||
// ForRoots produces a Runtime to run the given generators against the
|
||||
// given packages. It outputs to /dev/null by default.
|
||||
func (g Generators) ForRoots(rootPaths ...string) (*Runtime, error) {
|
||||
roots, err := loader.LoadRoots(rootPaths...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
rt := &Runtime{
|
||||
Generators: g,
|
||||
GenerationContext: GenerationContext{
|
||||
Collector: &markers.Collector{
|
||||
Registry: &markers.Registry{},
|
||||
},
|
||||
Roots: roots,
|
||||
InputRule: InputFromFileSystem,
|
||||
Checker: &loader.TypeChecker{
|
||||
NodeFilters: g.CheckFilters(),
|
||||
},
|
||||
},
|
||||
OutputRules: OutputRules{Default: OutputToNothing},
|
||||
}
|
||||
if err := rt.Generators.RegisterMarkers(rt.Collector.Registry); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return rt, nil
|
||||
}
|
||||
|
||||
// Run runs the Generators in this Runtime against its packages, printing
|
||||
// errors (except type errors, which common result from using TypeChecker with
|
||||
// filters), returning true if errors were found.
|
||||
func (r *Runtime) Run() bool {
|
||||
// TODO(directxman12): we could make this parallel,
|
||||
// but we'd need to ensure all underlying machinery is threadsafe
|
||||
if len(r.Generators) == 0 {
|
||||
fmt.Fprintln(os.Stderr, "no generators to run")
|
||||
return true
|
||||
}
|
||||
|
||||
hadErrs := false
|
||||
for _, gen := range r.Generators {
|
||||
ctx := r.GenerationContext // make a shallow copy
|
||||
ctx.OutputRule = r.OutputRules.ForGenerator(gen)
|
||||
|
||||
// don't pass a typechecker to generators that don't provide a filter
|
||||
// to avoid accidents
|
||||
if _, needsChecking := (*gen).(NeedsTypeChecking); !needsChecking {
|
||||
ctx.Checker = nil
|
||||
}
|
||||
|
||||
if err := (*gen).Generate(&ctx); err != nil {
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
hadErrs = true
|
||||
}
|
||||
}
|
||||
|
||||
// skip TypeErrors -- they're probably just from partial typechecking in crd-gen
|
||||
return loader.PrintErrors(r.Roots, packages.TypeError) || hadErrs
|
||||
}
|
23
vendor/sigs.k8s.io/controller-tools/pkg/genall/help/doc.go
generated
vendored
Normal file
@@ -0,0 +1,23 @@
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Package help contains utilities for actually writing out marker help.
|
||||
//
|
||||
// Namely, it contains a series of structs (and helpers for producing them)
|
||||
// that represent a merged view of marker definition and help that can be used
|
||||
// for consumption by the pretty subpackage (for terminal help) or serialized
|
||||
// as JSON (e.g. for generating HTML help).
|
||||
package help
|
30
vendor/sigs.k8s.io/controller-tools/pkg/genall/help/pretty/doc.go
generated
vendored
Normal file
@@ -0,0 +1,30 @@
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Package pretty contains utilities for formatting terminal help output,
|
||||
// and a use of those to display marker help.
|
||||
//
|
||||
// Terminal Output
|
||||
//
|
||||
// The Span interface and Table struct allow you to construct tables with
|
||||
// colored formatting, without causing ANSI formatting characters to mess up width
|
||||
// calculations.
|
||||
//
|
||||
// Marker Help
|
||||
//
|
||||
// The MarkersSummary prints a summary table for marker help, while the MarkersDetails
|
||||
// prints out more detailed information, with explanations of the individual marker fields.
|
||||
package pretty
|
171
vendor/sigs.k8s.io/controller-tools/pkg/genall/help/pretty/help.go
generated
vendored
Normal file
@@ -0,0 +1,171 @@
|
||||
package pretty
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
|
||||
"sigs.k8s.io/controller-tools/pkg/genall/help"
|
||||
|
||||
"github.com/fatih/color"
|
||||
)
|
||||
|
||||
var (
|
||||
headingStyle = Decoration(*color.New(color.Bold, color.Underline))
|
||||
markerNameStyle = Decoration(*color.New(color.Bold))
|
||||
fieldSummaryStyle = Decoration(*color.New(color.FgGreen, color.Italic))
|
||||
markerTargetStyle = Decoration(*color.New(color.Faint))
|
||||
fieldDetailStyle = Decoration(*color.New(color.Italic, color.FgGreen))
|
||||
deprecatedStyle = Decoration(*color.New(color.CrossedOut))
|
||||
)
|
||||
|
||||
// MarkersSummary returns a condensed summary of help for the given markers.
|
||||
func MarkersSummary(groupName string, markers []help.MarkerDoc) Span {
|
||||
out := new(SpanWriter)
|
||||
|
||||
out.Print(Text("\n"))
|
||||
out.Print(headingStyle.Containing(Text(groupName)))
|
||||
out.Print(Text("\n\n"))
|
||||
|
||||
table := &Table{Sizing: &TableCalculator{Padding: 2}}
|
||||
for _, marker := range markers {
|
||||
table.StartRow()
|
||||
table.Column(MarkerSyntaxHelp(marker))
|
||||
table.Column(markerTargetStyle.Containing(Text(marker.Target)))
|
||||
|
||||
summary := new(SpanWriter)
|
||||
if marker.DeprecatedInFavorOf != nil && len(*marker.DeprecatedInFavorOf) > 0 {
|
||||
summary.Print(markerNameStyle.Containing(Text("(use ")))
|
||||
summary.Print(markerNameStyle.Containing(Text(*marker.DeprecatedInFavorOf)))
|
||||
summary.Print(markerNameStyle.Containing(Text(") ")))
|
||||
}
|
||||
summary.Print(Text(marker.Summary))
|
||||
table.Column(summary)
|
||||
|
||||
table.EndRow()
|
||||
}
|
||||
out.Print(table)
|
||||
|
||||
out.Print(Text("\n"))
|
||||
|
||||
return out
|
||||
}
|
||||
|
||||
// MarkersDetails returns detailed help for the given markers, including detailed field help.
|
||||
func MarkersDetails(fullDetail bool, groupName string, markers []help.MarkerDoc) Span {
|
||||
out := new(SpanWriter)
|
||||
|
||||
out.Print(Line(headingStyle.Containing(Text(groupName))))
|
||||
out.Print(Newlines(2))
|
||||
|
||||
for _, marker := range markers {
|
||||
out.Print(Line(markerName(marker)))
|
||||
out.Print(Text(" "))
|
||||
out.Print(markerTargetStyle.Containing(Text(marker.Target)))
|
||||
|
||||
summary := new(SpanWriter)
|
||||
if marker.DeprecatedInFavorOf != nil && len(*marker.DeprecatedInFavorOf) > 0 {
|
||||
summary.Print(markerNameStyle.Containing(Text("(use ")))
|
||||
summary.Print(markerNameStyle.Containing(Text(*marker.DeprecatedInFavorOf)))
|
||||
summary.Print(markerNameStyle.Containing(Text(") ")))
|
||||
}
|
||||
summary.Print(Text(marker.Summary))
|
||||
|
||||
if !marker.AnonymousField() {
|
||||
out.Print(Indented(1, Line(summary)))
|
||||
if len(marker.Details) > 0 && fullDetail {
|
||||
out.Print(Indented(1, Line(Text(marker.Details))))
|
||||
}
|
||||
}
|
||||
|
||||
if marker.AnonymousField() {
|
||||
out.Print(Indented(1, Line(fieldDetailStyle.Containing(FieldSyntaxHelp(marker.Fields[0])))))
|
||||
out.Print(Text(" "))
|
||||
out.Print(summary)
|
||||
if len(marker.Details) > 0 && fullDetail {
|
||||
out.Print(Indented(2, Line(Text(marker.Details))))
|
||||
}
|
||||
out.Print(Newlines(1))
|
||||
} else if !marker.Empty() {
|
||||
out.Print(Newlines(1))
|
||||
if fullDetail {
|
||||
for _, arg := range marker.Fields {
|
||||
out.Print(Indented(1, Line(fieldDetailStyle.Containing(FieldSyntaxHelp(arg)))))
|
||||
out.Print(Indented(2, Line(Text(arg.Summary))))
|
||||
if len(arg.Details) > 0 && fullDetail {
|
||||
out.Print(Indented(2, Line(Text(arg.Details))))
|
||||
out.Print(Newlines(1))
|
||||
}
|
||||
}
|
||||
out.Print(Newlines(1))
|
||||
} else {
|
||||
table := &Table{Sizing: &TableCalculator{Padding: 2}}
|
||||
for _, arg := range marker.Fields {
|
||||
table.StartRow()
|
||||
table.Column(fieldDetailStyle.Containing(FieldSyntaxHelp(arg)))
|
||||
table.Column(Text(arg.Summary))
|
||||
table.EndRow()
|
||||
}
|
||||
|
||||
out.Print(Indented(1, table))
|
||||
}
|
||||
} else {
|
||||
out.Print(Newlines(1))
|
||||
}
|
||||
}
|
||||
|
||||
return out
|
||||
}
|
||||
|
||||
func FieldSyntaxHelp(arg help.FieldHelp) Span {
|
||||
return fieldSyntaxHelp(arg, "")
|
||||
}
|
||||
|
||||
// fieldSyntaxHelp prints the syntax help for a particular marker argument.
|
||||
func fieldSyntaxHelp(arg help.FieldHelp, sep string) Span {
|
||||
if arg.Optional {
|
||||
return FromWriter(func(out io.Writer) error {
|
||||
_, err := fmt.Fprintf(out, "[%s%s=<%s>]", sep, arg.Name, arg.TypeString())
|
||||
return err
|
||||
})
|
||||
}
|
||||
return FromWriter(func(out io.Writer) error {
|
||||
_, err := fmt.Fprintf(out, "%s%s=<%s>", sep, arg.Name, arg.TypeString())
|
||||
return err
|
||||
})
|
||||
}
|
||||
|
||||
// markerName returns a span containing just the appropriately-formatted marker name.
|
||||
func markerName(def help.MarkerDoc) Span {
|
||||
if def.DeprecatedInFavorOf != nil {
|
||||
return deprecatedStyle.Containing(Text("+" + def.Name))
|
||||
}
|
||||
return markerNameStyle.Containing(Text("+" + def.Name))
|
||||
}
|
||||
|
||||
// MarkerSyntaxHelp assembles syntax help for a given marker.
|
||||
func MarkerSyntaxHelp(def help.MarkerDoc) Span {
|
||||
out := new(SpanWriter)
|
||||
|
||||
out.Print(markerName(def))
|
||||
|
||||
if def.Empty() {
|
||||
return out
|
||||
}
|
||||
|
||||
sep := ":"
|
||||
if def.AnonymousField() {
|
||||
sep = ""
|
||||
}
|
||||
|
||||
fieldStyle := fieldSummaryStyle
|
||||
if def.DeprecatedInFavorOf != nil {
|
||||
fieldStyle = deprecatedStyle
|
||||
}
|
||||
|
||||
for _, arg := range def.Fields {
|
||||
out.Print(fieldStyle.Containing(fieldSyntaxHelp(arg, sep)))
|
||||
sep = ","
|
||||
}
|
||||
|
||||
return out
|
||||
}
|
304
vendor/sigs.k8s.io/controller-tools/pkg/genall/help/pretty/print.go
generated
vendored
Normal file
@@ -0,0 +1,304 @@
|
||||
package pretty
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
|
||||
"github.com/fatih/color"
|
||||
)
|
||||
|
||||
// NB(directxman12): this isn't particularly elegant, but it's also
|
||||
// sufficiently simple as to be maintained here. Man (roff) would've
|
||||
// probably worked, but it's not necessarily on Windows by default.
|
||||
|
||||
// Span is a chunk of content that is writable to an output, but knows how to
|
||||
// calculate its apparent visual "width" on the terminal (not to be confused
|
||||
// with the raw length, which may include zero-width coloring sequences).
|
||||
type Span interface {
|
||||
// VisualLength reports the "width" as perceived by the user on the terminal
|
||||
// (i.e. widest line, ignoring ANSI escape characters).
|
||||
VisualLength() int
|
||||
// WriteTo writes the full span contents to the given writer.
|
||||
WriteTo(io.Writer) error
|
||||
}
|
||||
|
||||
// Table is a Span that writes its data in table form, with sizing controlled
|
||||
// by the given table calculator. Rows are started with StartRow, followed by
|
||||
// some calls to Column, followed by a call to EndRow. Once all rows are
|
||||
// added, the table can be used as a Span.
|
||||
type Table struct {
|
||||
Sizing *TableCalculator
|
||||
|
||||
cellsByRow [][]Span
|
||||
colSizes []int
|
||||
}
|
||||
|
||||
// StartRow starts a new row.
|
||||
// It must eventually be followed by EndRow.
|
||||
func (t *Table) StartRow() {
|
||||
t.cellsByRow = append(t.cellsByRow, []Span(nil))
|
||||
}
|
||||
|
||||
// EndRow ends the currently started row.
|
||||
func (t *Table) EndRow() {
|
||||
lastRow := t.cellsByRow[len(t.cellsByRow)-1]
|
||||
sizes := make([]int, len(lastRow))
|
||||
for i, cell := range lastRow {
|
||||
sizes[i] = cell.VisualLength()
|
||||
}
|
||||
t.Sizing.AddRowSizes(sizes...)
|
||||
}
|
||||
|
||||
// Column adds the given span as a new column to the current row.
|
||||
func (t *Table) Column(contents Span) {
|
||||
currentRowInd := len(t.cellsByRow) - 1
|
||||
t.cellsByRow[currentRowInd] = append(t.cellsByRow[currentRowInd], contents)
|
||||
}
|
||||
|
||||
// SkipRow prints a span without having it contribute to the table calculation.
|
||||
func (t *Table) SkipRow(contents Span) {
|
||||
t.cellsByRow = append(t.cellsByRow, []Span{contents})
|
||||
}
|
||||
|
||||
func (t *Table) WriteTo(out io.Writer) error {
|
||||
if t.colSizes == nil {
|
||||
t.colSizes = t.Sizing.ColumnWidths()
|
||||
}
|
||||
|
||||
for _, cells := range t.cellsByRow {
|
||||
currentPosition := 0
|
||||
for colInd, cell := range cells {
|
||||
colSize := t.colSizes[colInd]
|
||||
diff := colSize - cell.VisualLength()
|
||||
|
||||
if err := cell.WriteTo(out); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if diff > 0 {
|
||||
if err := writePadding(out, columnPadding, diff); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
currentPosition += colSize
|
||||
}
|
||||
|
||||
if _, err := fmt.Fprint(out, "\n"); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (t *Table) VisualLength() int {
|
||||
if t.colSizes == nil {
|
||||
t.colSizes = t.Sizing.ColumnWidths()
|
||||
}
|
||||
|
||||
res := 0
|
||||
for _, colSize := range t.colSizes {
|
||||
res += colSize
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
// Text is a span that simply contains raw text. It's a good starting point.
|
||||
type Text string
|
||||
|
||||
func (t Text) VisualLength() int { return len(t) }
|
||||
func (t Text) WriteTo(w io.Writer) error {
|
||||
_, err := w.Write([]byte(t))
|
||||
return err
|
||||
}
|
||||
|
||||
// indented is a span that indents all lines by the given number of tabs.
|
||||
type indented struct {
|
||||
Amount int
|
||||
Content Span
|
||||
}
|
||||
|
||||
func (i *indented) VisualLength() int { return i.Content.VisualLength() }
|
||||
func (i *indented) WriteTo(w io.Writer) error {
|
||||
var out bytes.Buffer
|
||||
if err := i.Content.WriteTo(&out); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
lines := bytes.Split(out.Bytes(), []byte("\n"))
|
||||
for lineInd, line := range lines {
|
||||
if lineInd != 0 {
|
||||
if _, err := w.Write([]byte("\n")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if len(line) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
if err := writePadding(w, indentPadding, i.Amount); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := w.Write(line); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Indented returns a span that indents all lines by the given number of tabs.
|
||||
func Indented(amt int, content Span) Span {
|
||||
return &indented{Amount: amt, Content: content}
|
||||
}
|
||||
|
||||
// fromWriter is a span that takes content from a function expecting a Writer.
|
||||
type fromWriter struct {
|
||||
cache []byte
|
||||
cacheError error
|
||||
run func(io.Writer) error
|
||||
}
|
||||
|
||||
func (f *fromWriter) VisualLength() int {
|
||||
if f.cache == nil {
|
||||
var buf bytes.Buffer
|
||||
if err := f.run(&buf); err != nil {
|
||||
f.cacheError = err
|
||||
}
|
||||
f.cache = buf.Bytes()
|
||||
}
|
||||
return len(f.cache)
|
||||
}
|
||||
func (f *fromWriter) WriteTo(w io.Writer) error {
|
||||
if f.cache != nil {
|
||||
if f.cacheError != nil {
|
||||
return f.cacheError
|
||||
}
|
||||
_, err := w.Write(f.cache)
|
||||
return err
|
||||
}
|
||||
return f.run(w)
|
||||
}
|
||||
|
||||
// FromWriter returns a span that takes content from a function expecting a Writer.
|
||||
func FromWriter(run func(io.Writer) error) Span {
|
||||
return &fromWriter{run: run}
|
||||
}
|
||||
|
||||
// Decoration represents a terminal decoration.
|
||||
type Decoration color.Color
|
||||
|
||||
// Containing returns a Span that has the given decoration applied.
|
||||
func (d Decoration) Containing(contents Span) Span {
|
||||
return &decorated{
|
||||
Contents: contents,
|
||||
Attributes: color.Color(d),
|
||||
}
|
||||
}
|
||||
|
||||
// decorated is a span that has some terminal decoration applied.
|
||||
type decorated struct {
|
||||
Contents Span
|
||||
Attributes color.Color
|
||||
}
|
||||
|
||||
func (d *decorated) VisualLength() int { return d.Contents.VisualLength() }
|
||||
func (d *decorated) WriteTo(w io.Writer) error {
|
||||
oldOut := color.Output
|
||||
color.Output = w
|
||||
defer func() { color.Output = oldOut }()
|
||||
|
||||
d.Attributes.Set()
|
||||
	defer color.Unset()
	return d.Contents.WriteTo(w)
}

// SpanWriter is a span that contains multiple sub-spans.
type SpanWriter struct {
	contents []Span
}

func (m *SpanWriter) VisualLength() int {
	res := 0
	for _, span := range m.contents {
		res += span.VisualLength()
	}
	return res
}

func (m *SpanWriter) WriteTo(w io.Writer) error {
	for _, span := range m.contents {
		if err := span.WriteTo(w); err != nil {
			return err
		}
	}
	return nil
}

// Print adds a new span to this SpanWriter.
func (m *SpanWriter) Print(s Span) {
	m.contents = append(m.contents, s)
}

// lines is a span that adds some newlines, optionally followed by some content.
type lines struct {
	content      Span
	amountBefore int
}

func (l *lines) VisualLength() int {
	if l.content == nil {
		return 0
	}
	return l.content.VisualLength()
}

func (l *lines) WriteTo(w io.Writer) error {
	if err := writePadding(w, linesPadding, l.amountBefore); err != nil {
		return err
	}
	if l.content != nil {
		if err := l.content.WriteTo(w); err != nil {
			return err
		}
	}
	return nil
}

// Newlines returns a span just containing some newlines.
func Newlines(amt int) Span {
	return &lines{amountBefore: amt}
}

// Line returns a span that emits a newline, followed by the given content.
func Line(content Span) Span {
	return &lines{amountBefore: 1, content: content}
}

var (
	columnPadding = []byte(" ")
	indentPadding = []byte("\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t")
	linesPadding  = []byte("\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n")
)

// writePadding writes out padding of the given type in the given amount to the writer.
// Each byte in the padding buffer contributes 1 to the amount -- the padding being
// a buffer is just for efficiency.
func writePadding(out io.Writer, typ []byte, amt int) error {
	if amt <= len(typ) {
		_, err := out.Write(typ[:amt])
		return err
	}

	num := amt / len(typ)
	rem := amt % len(typ)
	for i := 0; i < num; i++ {
		if _, err := out.Write(typ); err != nil {
			return err
		}
	}

	if _, err := out.Write(typ[:rem]); err != nil {
		return err
	}
	return nil
}
64
vendor/sigs.k8s.io/controller-tools/pkg/genall/help/pretty/table.go
generated
vendored
Normal file
@@ -0,0 +1,64 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package pretty

// TableCalculator calculates column widths (with optional padding)
// for a table based on the maximum required column width.
type TableCalculator struct {
	cellSizesByCol [][]int

	Padding  int
	MaxWidth int
}

// AddRowSizes registers a new row with cells of the given sizes.
func (c *TableCalculator) AddRowSizes(cellSizes ...int) {
	if len(cellSizes) > len(c.cellSizesByCol) {
		for range cellSizes[len(c.cellSizesByCol):] {
			c.cellSizesByCol = append(c.cellSizesByCol, []int(nil))
		}
	}
	for i, size := range cellSizes {
		c.cellSizesByCol[i] = append(c.cellSizesByCol[i], size)
	}
}

// ColumnWidths calculates the appropriate column sizes given the
// previously registered rows.
func (c *TableCalculator) ColumnWidths() []int {
	maxColWidths := make([]int, len(c.cellSizesByCol))

	for colInd, cellSizes := range c.cellSizesByCol {
		max := 0
		for _, cellSize := range cellSizes {
			if max < cellSize {
				max = cellSize
			}
		}
		maxColWidths[colInd] = max
	}

	actualMaxWidth := c.MaxWidth - c.Padding
	for i, width := range maxColWidths {
		if actualMaxWidth > 0 && width > actualMaxWidth {
			maxColWidths[i] = actualMaxWidth
		}
		maxColWidths[i] += c.Padding
	}

	return maxColWidths
}
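The width calculation above is easy to check in isolation. This standalone sketch re-implements the same logic (row sizes and padding values are illustrative): each column's width is the maximum cell size seen in that column, capped by MaxWidth when set, plus Padding.

```go
package main

import "fmt"

// TableCalculator mirrors the vendored type: it records the cell widths
// seen per column and derives padded column widths from the maxima.
type TableCalculator struct {
	cellSizesByCol [][]int
	Padding        int
	MaxWidth       int
}

// AddRowSizes registers one row's cell sizes, growing the column list as needed.
func (c *TableCalculator) AddRowSizes(cellSizes ...int) {
	for len(c.cellSizesByCol) < len(cellSizes) {
		c.cellSizesByCol = append(c.cellSizesByCol, nil)
	}
	for i, size := range cellSizes {
		c.cellSizesByCol[i] = append(c.cellSizesByCol[i], size)
	}
}

// ColumnWidths returns, per column, the maximum registered size
// (capped by MaxWidth when positive) plus Padding.
func (c *TableCalculator) ColumnWidths() []int {
	widths := make([]int, len(c.cellSizesByCol))
	for col, sizes := range c.cellSizesByCol {
		for _, s := range sizes {
			if s > widths[col] {
				widths[col] = s
			}
		}
	}
	actualMax := c.MaxWidth - c.Padding
	for i := range widths {
		if actualMax > 0 && widths[i] > actualMax {
			widths[i] = actualMax
		}
		widths[i] += c.Padding
	}
	return widths
}

func main() {
	calc := &TableCalculator{Padding: 2}
	calc.AddRowSizes(5, 3) // e.g. a header row's cell widths
	calc.AddRowSizes(8, 1) // a wider first cell pushes column 0 out
	fmt.Println(calc.ColumnWidths()) // [10 5]: per-column max plus padding
}
```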
106
vendor/sigs.k8s.io/controller-tools/pkg/genall/help/sort.go
generated
vendored
Normal file
@@ -0,0 +1,106 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package help

import (
	"strings"

	"sigs.k8s.io/controller-tools/pkg/markers"
)

// SortGroup knows how to sort and group marker definitions.
type SortGroup interface {
	// Less is equivalent to the Less function from sort, and is used to sort the markers.
	Less(*markers.Definition, *markers.Definition) bool
	// Group returns the "group" that a given marker belongs to.
	Group(*markers.Definition, *markers.DefinitionHelp) string
}

var (
	// SortByCategory sorts the markers by name and groups them by their help category.
	SortByCategory = sortByCategory{}

	// SortByOption sorts by the generator that the option belongs to.
	SortByOption = optionsSort{}
)

type sortByCategory struct{}

func (sortByCategory) Group(_ *markers.Definition, help *markers.DefinitionHelp) string {
	if help == nil {
		return ""
	}
	return help.Category
}

func (sortByCategory) Less(i, j *markers.Definition) bool {
	return i.Name < j.Name
}

type optionsSort struct{}

func (optionsSort) Less(i, j *markers.Definition) bool {
	iParts := strings.Split(i.Name, ":")
	jParts := strings.Split(j.Name, ":")

	iGen := ""
	iRule := ""
	jGen := ""
	jRule := ""

	switch len(iParts) {
	case 1:
		iGen = iParts[0]
	// two means a default output rule, so ignore
	case 2:
		iRule = iParts[1]
	case 3:
		iGen = iParts[1]
		iRule = iParts[2]
	}
	switch len(jParts) {
	case 1:
		jGen = jParts[0]
	// two means a default output rule, so ignore
	case 2:
		jRule = jParts[1]
	case 3:
		jGen = jParts[1]
		jRule = jParts[2]
	}

	if iGen != jGen {
		return iGen > jGen
	}

	return iRule < jRule
}

func (optionsSort) Group(def *markers.Definition, _ *markers.DefinitionHelp) string {
	parts := strings.Split(def.Name, ":")

	switch len(parts) {
	case 1:
		if parts[0] == "paths" {
			return "generic"
		}
		return "generators"
	case 2:
		return "output rules (optionally as output:<generator>:...)"
	default:
		// three means a marker-specific output rule, ignore
		return ""
	}
}
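The grouping rule in optionsSort.Group hinges purely on the number of colon-separated parts in the marker name. This standalone sketch replicates that branch (marker names such as "crd" are illustrative inputs, not a claim about which generators exist):

```go
package main

import (
	"fmt"
	"strings"
)

// group mirrors optionsSort.Group: "paths" is generic, other bare names
// are generators, two-part names are output rules, and three-part names
// (marker-specific output rules) fall into the unnamed group.
func group(name string) string {
	parts := strings.Split(name, ":")
	switch len(parts) {
	case 1:
		if parts[0] == "paths" {
			return "generic"
		}
		return "generators"
	case 2:
		return "output rules (optionally as output:<generator>:...)"
	default:
		return ""
	}
}

func main() {
	fmt.Println(group("paths"))         // generic
	fmt.Println(group("crd"))           // generators
	fmt.Println(group("output:stdout")) // the output-rules group
}
```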
215
vendor/sigs.k8s.io/controller-tools/pkg/genall/help/types.go
generated
vendored
Normal file
@@ -0,0 +1,215 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package help

import (
	"sort"
	"strings"

	"sigs.k8s.io/controller-tools/pkg/markers"
)

// DetailedHelp contains both a summary and further details.
type DetailedHelp struct {
	// Summary contains a one-line description.
	Summary string `json:"summary"`
	// Details contains further information.
	Details string `json:"details,omitempty"`
}

// Argument is the type data for a marker argument.
type Argument struct {
	// Type is the data type of the argument (string, bool, int, slice, any, raw, invalid).
	Type string `json:"type"`
	// Optional marks this argument as optional.
	Optional bool `json:"optional"`
	// ItemType contains the type of the slice item, if this is a slice.
	ItemType *Argument `json:"itemType,omitempty"`
}

func (a Argument) typeString(out *strings.Builder) {
	if a.Type == "slice" {
		out.WriteString("[]")
		a.ItemType.typeString(out)
		return
	}

	out.WriteString(a.Type)
}

// TypeString returns a string roughly equivalent
// (but not identical) to the underlying Go type that
// this argument would parse to. It's mainly useful
// for user-friendly formatting of this argument (e.g.
// help strings).
func (a Argument) TypeString() string {
	out := &strings.Builder{}
	a.typeString(out)
	return out.String()
}

// FieldHelp contains information required to print documentation for a marker field.
type FieldHelp struct {
	// Name is the field name.
	Name string `json:"name"`
	// Argument is the type of the field.
	Argument `json:",inline"`

	// DetailedHelp contains the textual help for the field.
	DetailedHelp `json:",inline"`
}

// MarkerDoc contains information required to print documentation for a marker.
type MarkerDoc struct {
	// definition

	// Name is the name of the marker.
	Name string `json:"name"`
	// Target is the target (field, package, type) of the marker.
	Target string `json:"target"`

	// help

	// DetailedHelp is the textual help for the marker.
	DetailedHelp `json:",inline"`
	// Category is the general "category" that this marker belongs to.
	Category string `json:"category"`
	// DeprecatedInFavorOf marks that this marker shouldn't be used when
	// non-nil. If also non-empty, another marker should be used instead.
	DeprecatedInFavorOf *string `json:"deprecatedInFavorOf,omitempty"`
	// Fields is the type and help data for each field of this marker.
	Fields []FieldHelp `json:"fields,omitempty"`
}

// Empty checks if this marker has any arguments, returning true if not.
func (m MarkerDoc) Empty() bool {
	return len(m.Fields) == 0
}

// AnonymousField checks if this is a single-valued marker
// (as opposed to having named fields).
func (m MarkerDoc) AnonymousField() bool {
	return len(m.Fields) == 1 && m.Fields[0].Name == ""
}

// ForArgument returns the equivalent documentation for a marker argument.
func ForArgument(argRaw markers.Argument) Argument {
	res := Argument{
		Optional: argRaw.Optional,
	}

	if argRaw.ItemType != nil {
		itemType := ForArgument(*argRaw.ItemType)
		res.ItemType = &itemType
	}

	switch argRaw.Type {
	case markers.IntType:
		res.Type = "int"
	case markers.StringType:
		res.Type = "string"
	case markers.BoolType:
		res.Type = "bool"
	case markers.AnyType:
		res.Type = "any"
	case markers.SliceType:
		res.Type = "slice"
	case markers.RawType:
		res.Type = "raw"
	case markers.InvalidType:
		res.Type = "invalid"
	}

	return res
}

// ForDefinition returns the equivalent marker documentation for a given marker definition and separate help.
func ForDefinition(defn *markers.Definition, maybeHelp *markers.DefinitionHelp) MarkerDoc {
	var help markers.DefinitionHelp
	if maybeHelp != nil {
		help = *maybeHelp
	}

	res := MarkerDoc{
		Name:                defn.Name,
		Category:            help.Category,
		DeprecatedInFavorOf: help.DeprecatedInFavorOf,
		Target:              defn.Target.String(),
		DetailedHelp:        DetailedHelp{Summary: help.Summary, Details: help.Details},
	}

	helpByField := help.FieldsHelp(defn)

	// TODO(directxman12): deterministic ordering
	for fieldName, fieldHelpRaw := range helpByField {
		fieldInfo := defn.Fields[fieldName]
		fieldHelp := FieldHelp{
			Name:         fieldName,
			DetailedHelp: DetailedHelp{Summary: fieldHelpRaw.Summary, Details: fieldHelpRaw.Details},
			Argument:     ForArgument(fieldInfo),
		}

		res.Fields = append(res.Fields, fieldHelp)
	}

	sort.Slice(res.Fields, func(i, j int) bool { return res.Fields[i].Name < res.Fields[j].Name })

	return res
}

// CategoryDoc contains help information for all markers in a Category.
type CategoryDoc struct {
	Category string      `json:"category"`
	Markers  []MarkerDoc `json:"markers"`
}

// ByCategory returns the marker help for markers in the given
// registry, grouped and sorted according to the given method.
func ByCategory(reg *markers.Registry, sorter SortGroup) []CategoryDoc {
	groupedMarkers := make(map[string][]*markers.Definition)

	for _, marker := range reg.AllDefinitions() {
		group := sorter.Group(marker, reg.HelpFor(marker))
		groupedMarkers[group] = append(groupedMarkers[group], marker)
	}
	allGroups := make([]string, 0, len(groupedMarkers))
	for groupName := range groupedMarkers {
		allGroups = append(allGroups, groupName)
	}

	sort.Strings(allGroups)

	res := make([]CategoryDoc, len(allGroups))
	for i, groupName := range allGroups {
		markers := groupedMarkers[groupName]
		sort.Slice(markers, func(i, j int) bool {
			return sorter.Less(markers[i], markers[j])
		})

		markerDocs := make([]MarkerDoc, len(markers))
		for i, marker := range markers {
			markerDocs[i] = ForDefinition(marker, reg.HelpFor(marker))
		}

		res[i] = CategoryDoc{
			Category: groupName,
			Markers:  markerDocs,
		}
	}

	return res
}
37
vendor/sigs.k8s.io/controller-tools/pkg/genall/input.go
generated
vendored
Normal file
@@ -0,0 +1,37 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package genall

import (
	"io"
	"os"
)

// InputRule describes how to load non-code boilerplate artifacts.
// It's not used for loading code.
type InputRule interface {
	// OpenForRead opens the given non-code artifact for reading.
	OpenForRead(path string) (io.ReadCloser, error)
}

type inputFromFileSystem struct{}

func (inputFromFileSystem) OpenForRead(path string) (io.ReadCloser, error) {
	return os.Open(path)
}

// InputFromFileSystem reads from the filesystem as normal.
var InputFromFileSystem = inputFromFileSystem{}
192
vendor/sigs.k8s.io/controller-tools/pkg/genall/options.go
generated
vendored
Normal file
@@ -0,0 +1,192 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package genall

import (
	"fmt"
	"strings"

	"sigs.k8s.io/controller-tools/pkg/markers"
)

var (
	InputPathsMarker = markers.Must(markers.MakeDefinition("paths", markers.DescribesPackage, InputPaths(nil)))
)

// +controllertools:marker:generateHelp:category=""

// InputPaths represents paths and go-style path patterns to use as package roots.
type InputPaths []string

// RegisterOptionsMarkers registers "mandatory" options markers for FromOptions into the given registry.
// At this point, that's just InputPaths.
func RegisterOptionsMarkers(into *markers.Registry) error {
	if err := into.Register(InputPathsMarker); err != nil {
		return err
	}
	// NB(directxman12): we make this optional so we don't have a bootstrap problem with helpgen
	if helpGiver, hasHelp := ((interface{})(InputPaths(nil))).(HasHelp); hasHelp {
		into.AddHelp(InputPathsMarker, helpGiver.Help())
	}
	return nil
}

// RegistryFromOptions produces just the marker registry that would be used by FromOptions, without
// attempting to produce a full Runtime. This can be useful if you want to display help without
// trying to load roots.
func RegistryFromOptions(optionsRegistry *markers.Registry, options []string) (*markers.Registry, error) {
	protoRt, err := protoFromOptions(optionsRegistry, options)
	if err != nil {
		return nil, err
	}
	reg := &markers.Registry{}
	if err := protoRt.Generators.RegisterMarkers(reg); err != nil {
		return nil, err
	}
	return reg, nil
}

// FromOptions parses the options from markers stored in the given registry out into a runtime.
// The markers in the registry must be either
//
// a) Generators
// b) OutputRules
// c) InputPaths
//
// The paths specified in InputPaths are loaded as package roots, and then combined with
// the generators and the specified output rules to produce a runtime that can be run or
// further modified. No default generators are used if none are specified -- you can check
// the output and rerun for that.
func FromOptions(optionsRegistry *markers.Registry, options []string) (*Runtime, error) {
	protoRt, err := protoFromOptions(optionsRegistry, options)
	if err != nil {
		return nil, err
	}

	// make the runtime
	genRuntime, err := protoRt.Generators.ForRoots(protoRt.Paths...)
	if err != nil {
		return nil, err
	}

	// attempt to figure out what the user wants without a lot of verbose specificity:
	// if the user specifies a default rule, assume that they probably want to fall back
	// to that. Otherwise, assume that they just wanted to customize one option from the
	// set, and leave the rest in the standard configuration.
	if protoRt.OutputRules.Default != nil {
		genRuntime.OutputRules = protoRt.OutputRules
		return genRuntime, nil
	}

	outRules := DirectoryPerGenerator("config", protoRt.GeneratorsByName)
	for gen, rule := range protoRt.OutputRules.ByGenerator {
		outRules.ByGenerator[gen] = rule
	}

	genRuntime.OutputRules = outRules
	return genRuntime, nil
}

// protoFromOptions returns a proto-Runtime from the given options registry and
// options set. This can then be used to construct an actual Runtime. See the
// FromOptions function for more details about how the options work.
func protoFromOptions(optionsRegistry *markers.Registry, options []string) (protoRuntime, error) {
	var gens Generators
	rules := OutputRules{
		ByGenerator: make(map[*Generator]OutputRule),
	}
	var paths []string

	// collect the generators first, so that we can key the output on the actual
	// generator, which matters if there's settings in the gen object and it's not a pointer.
	outputByGen := make(map[string]OutputRule)
	gensByName := make(map[string]*Generator)

	for _, rawOpt := range options {
		if rawOpt[0] != '+' {
			rawOpt = "+" + rawOpt // add a `+` to make it acceptable for usage with the registry
		}
		defn := optionsRegistry.Lookup(rawOpt, markers.DescribesPackage)
		if defn == nil {
			return protoRuntime{}, fmt.Errorf("unknown option %q", rawOpt[1:])
		}

		val, err := defn.Parse(rawOpt)
		if err != nil {
			return protoRuntime{}, fmt.Errorf("unable to parse option %q: %w", rawOpt[1:], err)
		}

		switch val := val.(type) {
		case Generator:
			gens = append(gens, &val)
			gensByName[defn.Name] = &val
		case OutputRule:
			_, genName := splitOutputRuleOption(defn.Name)
			if genName == "" {
				// it's a default rule
				rules.Default = val
				continue
			}

			outputByGen[genName] = val
			continue
		case InputPaths:
			paths = append(paths, val...)
		default:
			return protoRuntime{}, fmt.Errorf("unknown option marker %q", defn.Name)
		}
	}

	// actually associate the rules now that we know the generators
	for genName, outputRule := range outputByGen {
		gen, knownGen := gensByName[genName]
		if !knownGen {
			return protoRuntime{}, fmt.Errorf("non-invoked generator %q", genName)
		}

		rules.ByGenerator[gen] = outputRule
	}

	return protoRuntime{
		Paths:            paths,
		Generators:       Generators(gens),
		OutputRules:      rules,
		GeneratorsByName: gensByName,
	}, nil
}

// protoRuntime represents the raw pieces needed to compose a runtime, as
// parsed from some options.
type protoRuntime struct {
	Paths            []string
	Generators       Generators
	OutputRules      OutputRules
	GeneratorsByName map[string]*Generator
}

// splitOutputRuleOption splits a marker name of "output:<generator>:<rule>" or "output:<rule>"
// into its component rule and generator name.
func splitOutputRuleOption(name string) (ruleName string, genName string) {
	parts := strings.SplitN(name, ":", 3)
	if len(parts) == 3 {
		// output:<generator>:<rule>
		return parts[2], parts[1]
	}
	// output:<rule>
	return parts[1], ""
}
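The two accepted output-option shapes can be demonstrated standalone. This sketch mirrors splitOutputRuleOption (the option strings "output:crd:dir" and "output:stdout" are illustrative inputs): a two-part name yields a default rule with an empty generator name, while a three-part name scopes the rule to one generator.

```go
package main

import (
	"fmt"
	"strings"
)

// splitOutputRuleOption mirrors the vendored helper: "output:<rule>" means
// a default rule (empty generator name), while "output:<generator>:<rule>"
// scopes the rule to a single generator.
func splitOutputRuleOption(name string) (ruleName, genName string) {
	parts := strings.SplitN(name, ":", 3)
	if len(parts) == 3 {
		return parts[2], parts[1]
	}
	return parts[1], ""
}

func main() {
	rule, gen := splitOutputRuleOption("output:crd:dir")
	fmt.Println(rule, gen) // dir crd

	rule, gen = splitOutputRuleOption("output:stdout")
	fmt.Println(rule, gen == "") // stdout true (a default rule)
}
```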
160
vendor/sigs.k8s.io/controller-tools/pkg/genall/output.go
generated
vendored
Normal file
@@ -0,0 +1,160 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package genall

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
	"path/filepath"

	"sigs.k8s.io/controller-tools/pkg/loader"
)

// nopCloser is a WriteCloser whose Close
// is a no-op.
type nopCloser struct {
	io.Writer
}

func (n nopCloser) Close() error {
	return nil
}

// DirectoryPerGenerator produces output rules mapping output to a different subdirectory
// of the given base directory for each generator (with each subdirectory specified as
// the key in the input map).
func DirectoryPerGenerator(base string, generators map[string]*Generator) OutputRules {
	rules := OutputRules{
		Default:     OutputArtifacts{Config: OutputToDirectory(base)},
		ByGenerator: make(map[*Generator]OutputRule, len(generators)),
	}

	for name, gen := range generators {
		rules.ByGenerator[gen] = OutputArtifacts{
			Config: OutputToDirectory(filepath.Join(base, name)),
		}
	}

	return rules
}

// OutputRules defines how to output artifacts on a per-generator basis.
type OutputRules struct {
	// Default is the output rule used when no specific per-generator overrides match.
	Default OutputRule
	// ByGenerator contains specific per-generator overrides.
	// NB(directxman12): this is a pointer to avoid issues if a given Generator becomes unhashable
	// (interface values compare by "dereferencing" their internal pointer first, whereas pointers
	// compare by the actual pointer itself).
	ByGenerator map[*Generator]OutputRule
}

// ForGenerator returns the output rule that should be used
// by the given Generator.
func (o OutputRules) ForGenerator(gen *Generator) OutputRule {
	if forGen, specific := o.ByGenerator[gen]; specific {
		return forGen
	}
	return o.Default
}

// OutputRule defines how to output artifacts from a generator.
type OutputRule interface {
	// Open opens the given artifact path for writing. If a package is passed,
	// the artifact is considered to be used as part of the package (e.g.
	// generated code), while a nil package indicates that the artifact is
	// config (or something else not involved in Go compilation).
	Open(pkg *loader.Package, path string) (io.WriteCloser, error)
}

// OutputToNothing skips outputting anything.
var OutputToNothing = outputToNothing{}

// +controllertools:marker:generateHelp:category=""

// outputToNothing skips outputting anything.
type outputToNothing struct{}

func (o outputToNothing) Open(_ *loader.Package, _ string) (io.WriteCloser, error) {
	return nopCloser{ioutil.Discard}, nil
}

// +controllertools:marker:generateHelp:category=""

// OutputToDirectory outputs each artifact to the given directory, regardless
// of if it's package-associated or not.
type OutputToDirectory string

func (o OutputToDirectory) Open(_ *loader.Package, itemPath string) (io.WriteCloser, error) {
	// ensure the directory exists
	if err := os.MkdirAll(string(o), os.ModePerm); err != nil {
		return nil, err
	}
	path := filepath.Join(string(o), itemPath)
	return os.Create(path)
}

// OutputToStdout outputs everything to standard-out, with no separation.
//
// Generally useful for single-artifact outputs.
var OutputToStdout = outputToStdout{}

// +controllertools:marker:generateHelp:category=""

// outputToStdout outputs everything to standard-out, with no separation.
//
// Generally useful for single-artifact outputs.
type outputToStdout struct{}

func (o outputToStdout) Open(_ *loader.Package, itemPath string) (io.WriteCloser, error) {
	return nopCloser{os.Stdout}, nil
}

// +controllertools:marker:generateHelp:category=""

// OutputArtifacts outputs artifacts to different locations, depending on
// whether they're package-associated or not.
//
// Non-package associated artifacts
// are output to the Config directory, while package-associated ones are output
// to their package's source files' directory, unless an alternate path is
// specified in Code.
type OutputArtifacts struct {
	// Config points to the directory to which to write configuration.
	Config OutputToDirectory
	// Code overrides the directory in which to write new code (defaults to where the existing code lives).
	Code OutputToDirectory `marker:",optional"`
}

func (o OutputArtifacts) Open(pkg *loader.Package, itemPath string) (io.WriteCloser, error) {
	if pkg == nil {
		return o.Config.Open(pkg, itemPath)
	}

	if o.Code != "" {
		return o.Code.Open(pkg, itemPath)
	}

	if len(pkg.CompiledGoFiles) == 0 {
		return nil, fmt.Errorf("cannot output to a package with no path on disk")
	}
	outDir := filepath.Dir(pkg.CompiledGoFiles[0])
	outPath := filepath.Join(outDir, itemPath)
	return os.Create(outPath)
}
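The per-generator override pattern in OutputRules.ForGenerator is simple to show in miniature. This standalone sketch substitutes strings for the vendored OutputRule interface (the `generator` type and rule names are illustrative); the point is that overrides are keyed by generator pointer identity and fall back to the default:

```go
package main

import "fmt"

// generator stands in for the vendored *Generator key; overrides are
// keyed by pointer identity, which stays valid even if the underlying
// struct is not hashable by value.
type generator struct{ name string }

// outputRules mirrors the vendored fallback: a per-generator override
// wins, otherwise the default rule applies.
type outputRules struct {
	defaultRule string
	byGenerator map[*generator]string
}

func (o outputRules) forGenerator(gen *generator) string {
	if rule, ok := o.byGenerator[gen]; ok {
		return rule
	}
	return o.defaultRule
}

func main() {
	crd := &generator{name: "crd"}
	rbac := &generator{name: "rbac"}
	rules := outputRules{
		defaultRule: "config",
		byGenerator: map[*generator]string{crd: "config/crd"},
	}
	fmt.Println(rules.forGenerator(crd))  // config/crd (the override)
	fmt.Println(rules.forGenerator(rbac)) // config (falls back to default)
}
```

Keying by pointer rather than by value is the same design note the vendored code makes: map keys must be hashable, and pointer keys sidestep that requirement for the pointed-to type.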
89
vendor/sigs.k8s.io/controller-tools/pkg/genall/zz_generated.markerhelp.go
generated
vendored
Normal file
@@ -0,0 +1,89 @@
// +build !ignore_autogenerated

/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by helpgen. DO NOT EDIT.

package genall

import (
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func (InputPaths) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "represents paths and go-style path patterns to use as package roots.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (OutputArtifacts) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "outputs artifacts to different locations, depending on whether they're package-associated or not. ",
			Details: "Non-package associated artifacts are output to the Config directory, while package-associated ones are output to their package's source files' directory, unless an alternate path is specified in Code.",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"Config": {
				Summary: "points to the directory to which to write configuration.",
				Details: "",
			},
			"Code": {
				Summary: "overrides the directory in which to write new code (defaults to where the existing code lives).",
				Details: "",
			},
		},
	}
}

func (OutputToDirectory) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "outputs each artifact to the given directory, regardless of if it's package-associated or not.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (outputToNothing) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "skips outputting anything.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}

func (outputToStdout) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "outputs everything to standard-out, with no separation. ",
			Details: "Generally useful for single-artifact outputs.",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}
60
vendor/sigs.k8s.io/controller-tools/pkg/loader/doc.go
generated
vendored
Normal file
@@ -0,0 +1,60 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package loader defines helpers for loading packages from sources. It wraps
// go/packages, allowing incremental loading of source code and manual control
// over which packages get type-checked. This allows for faster loading in
// cases where you don't actually care about certain imports.
//
// Because it uses go/packages, it's modules-aware, and works in both modules-
// and non-modules environments.
//
// Loading
//
// The main entrypoint for loading is LoadRoots, which traverses the package
// graph starting at the given patterns (file, package, path, or ...-wildcard,
// as one might pass to go list). Packages beyond the roots can be accessed
// via the Imports() method. Packages are initially loaded with export data
// paths, filenames, and imports.
//
// Packages are suitable for comparison, as each unique package only ever has
// one *Package object returned.
//
// Syntax and TypeChecking
//
// ASTs and type-checking information can be loaded with NeedSyntax and
// NeedTypesInfo, respectively. Both are idempotent -- repeated calls will
// simply re-use the cached contents. Note that NeedTypesInfo will *only* type
// check the current package -- if you want to type-check imports as well,
// you'll need to type-check them first.
//
// Reference Pruning and Recursive Checking
//
// In order to type-check using only the packages you care about, you can use a
// TypeChecker. TypeChecker will visit each top-level type declaration,
// collect (optionally filtered) references, and type-check referenced
// packages.
//
// Errors
//
// Errors can be added to each package. Use ErrFromNode to create an error
// from an AST node. Errors can then be printed (complete with file and
// position information) using PrintErrors, optionally filtered by error type.
// It's generally a good idea to filter out TypeErrors when doing incomplete
// type-checking with TypeChecker. You can use MaybeErrList to return multiple
// errors if you need to return an error instead of adding it to a package.
// AddError will later unroll it into individual errors.
package loader
|
67
vendor/sigs.k8s.io/controller-tools/pkg/loader/errors.go
generated
vendored
Normal file
@@ -0,0 +1,67 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package loader

import (
	"fmt"
	"go/token"
)

// PositionedError represents some error with an associated position.
// The position is tied to some external token.FileSet.
type PositionedError struct {
	Pos token.Pos
	error
}

// Node is the intersection of go/ast.Node and go/types.Var.
type Node interface {
	Pos() token.Pos // position of first character belonging to the node
}

// ErrFromNode returns the given error, with additional information
// attaching it to the given AST node. It will automatically map
// over error lists.
func ErrFromNode(err error, node Node) error {
	if asList, isList := err.(ErrList); isList {
		resList := make(ErrList, len(asList))
		for i, baseErr := range asList {
			resList[i] = ErrFromNode(baseErr, node)
		}
		return resList
	}
	return PositionedError{
		Pos:   node.Pos(),
		error: err,
	}
}

// MaybeErrList constructs an ErrList if the given list of
// errors has any errors, otherwise returning nil.
func MaybeErrList(errs []error) error {
	if len(errs) == 0 {
		return nil
	}
	return ErrList(errs)
}

// ErrList is a list of errors aggregated together into a single error.
type ErrList []error

func (l ErrList) Error() string {
	return fmt.Sprintf("%v", []error(l))
}
|
360
vendor/sigs.k8s.io/controller-tools/pkg/loader/loader.go
generated
vendored
Normal file
@@ -0,0 +1,360 @@
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package loader
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"go/ast"
|
||||
"go/parser"
|
||||
"go/scanner"
|
||||
"go/token"
|
||||
"go/types"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"sync"
|
||||
|
||||
"golang.org/x/tools/go/packages"
|
||||
)
|
||||
|
||||
// Much of this is strongly inspired by the contents of go/packages,
// except that it allows for lazy loading of syntax and type-checking
// information to speed up cases where full traversal isn't needed.

// PrintErrors prints errors associated with all packages
// in the given package graph, starting at the given root
// packages and traversing through all imports. It will skip
// any errors of the kinds specified in filterKinds. It will
// return true if any errors were printed.
|
||||
func PrintErrors(pkgs []*Package, filterKinds ...packages.ErrorKind) bool {
|
||||
pkgsRaw := make([]*packages.Package, len(pkgs))
|
||||
for i, pkg := range pkgs {
|
||||
pkgsRaw[i] = pkg.Package
|
||||
}
|
||||
toSkip := make(map[packages.ErrorKind]struct{})
|
||||
for _, errKind := range filterKinds {
|
||||
toSkip[errKind] = struct{}{}
|
||||
}
|
||||
hadErrors := false
|
||||
packages.Visit(pkgsRaw, nil, func(pkgRaw *packages.Package) {
|
||||
for _, err := range pkgRaw.Errors {
|
||||
if _, skip := toSkip[err.Kind]; skip {
|
||||
continue
|
||||
}
|
||||
hadErrors = true
|
||||
fmt.Fprintln(os.Stderr, err)
|
||||
}
|
||||
})
|
||||
return hadErrors
|
||||
}
|
||||
|
||||
// Package is a single, unique Go package that can be
|
||||
// lazily parsed and type-checked. Packages should not
|
||||
// be constructed directly -- instead, use LoadRoots.
|
||||
// For a given call to LoadRoots, only a single instance
|
||||
// of each package exists, and thus they may be used as keys
|
||||
// and for comparison.
|
||||
type Package struct {
|
||||
*packages.Package
|
||||
|
||||
imports map[string]*Package
|
||||
|
||||
loader *loader
|
||||
sync.Mutex
|
||||
}
|
||||
|
||||
// Imports returns the imports for the given package, indexed by
|
||||
// package path (*not* name in any particular file).
|
||||
func (p *Package) Imports() map[string]*Package {
|
||||
if p.imports == nil {
|
||||
p.imports = p.loader.packagesFor(p.Package.Imports)
|
||||
}
|
||||
|
||||
return p.imports
|
||||
}
|
||||
|
||||
// NeedTypesInfo indicates that type-checking information is needed for this package.
|
||||
// Actual type-checking information can be accessed via the Types and TypesInfo fields.
|
||||
func (p *Package) NeedTypesInfo() {
|
||||
if p.TypesInfo != nil {
|
||||
return
|
||||
}
|
||||
p.NeedSyntax()
|
||||
p.loader.typeCheck(p)
|
||||
}
|
||||
|
||||
// NeedSyntax indicates that a parsed AST is needed for this package.
|
||||
// Actual ASTs can be accessed via the Syntax field.
|
||||
func (p *Package) NeedSyntax() {
|
||||
if p.Syntax != nil {
|
||||
return
|
||||
}
|
||||
out := make([]*ast.File, len(p.CompiledGoFiles))
|
||||
var wg sync.WaitGroup
|
||||
wg.Add(len(p.CompiledGoFiles))
|
||||
for i, filename := range p.CompiledGoFiles {
|
||||
go func(i int, filename string) {
|
||||
defer wg.Done()
|
||||
src, err := ioutil.ReadFile(filename)
|
||||
if err != nil {
|
||||
p.AddError(err)
|
||||
return
|
||||
}
|
||||
out[i], err = p.loader.parseFile(filename, src)
|
||||
if err != nil {
|
||||
p.AddError(err)
|
||||
return
|
||||
}
|
||||
}(i, filename)
|
||||
}
|
||||
wg.Wait()
|
||||
for _, file := range out {
|
||||
if file == nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
p.Syntax = out
|
||||
}
|
||||
|
||||
// AddError adds an error to the errors associated with the given package.
|
||||
func (p *Package) AddError(err error) {
|
||||
switch typedErr := err.(type) {
|
||||
case *os.PathError:
|
||||
// file-reading errors
|
||||
p.Errors = append(p.Errors, packages.Error{
|
||||
Pos: typedErr.Path + ":1",
|
||||
Msg: typedErr.Err.Error(),
|
||||
Kind: packages.ParseError,
|
||||
})
|
||||
case scanner.ErrorList:
|
||||
// parsing/scanning errors
|
||||
for _, subErr := range typedErr {
|
||||
p.Errors = append(p.Errors, packages.Error{
|
||||
Pos: subErr.Pos.String(),
|
||||
Msg: subErr.Msg,
|
||||
Kind: packages.ParseError,
|
||||
})
|
||||
}
|
||||
case types.Error:
|
||||
// type-checking errors
|
||||
p.Errors = append(p.Errors, packages.Error{
|
||||
Pos: typedErr.Fset.Position(typedErr.Pos).String(),
|
||||
Msg: typedErr.Msg,
|
||||
Kind: packages.TypeError,
|
||||
})
|
||||
case ErrList:
|
||||
for _, subErr := range typedErr {
|
||||
p.AddError(subErr)
|
||||
}
|
||||
case PositionedError:
|
||||
p.Errors = append(p.Errors, packages.Error{
|
||||
Pos: p.loader.cfg.Fset.Position(typedErr.Pos).String(),
|
||||
Msg: typedErr.Error(),
|
||||
Kind: packages.UnknownError,
|
||||
})
|
||||
default:
|
||||
// should only happen for external errors, like ref checking
|
||||
p.Errors = append(p.Errors, packages.Error{
|
||||
Pos: p.ID + ":-",
|
||||
Msg: err.Error(),
|
||||
Kind: packages.UnknownError,
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// loader loads packages and their imports. Loaded packages will have
|
||||
// type size, imports, and exports file information populated. Additional
|
||||
// information, like ASTs and type-checking information, can be accessed
|
||||
// via methods on individual packages.
|
||||
type loader struct {
|
||||
// Roots are the loaded "root" packages in the package graph loaded via
|
||||
// LoadRoots.
|
||||
Roots []*Package
|
||||
|
||||
// cfg contains the package loading config (initialized on demand)
|
||||
cfg *packages.Config
|
||||
// packages contains the cache of Packages indexed by the underlying
|
||||
// package.Package, so that we don't ever produce two Packages with
|
||||
// the same underlying packages.Package.
|
||||
packages map[*packages.Package]*Package
|
||||
packagesMu sync.Mutex
|
||||
}
|
||||
|
||||
// packageFor returns a wrapped Package for the given packages.Package,
|
||||
// ensuring that there's a one-to-one mapping between the two.
|
||||
// It's *not* threadsafe -- use packagesFor for that.
|
||||
func (l *loader) packageFor(pkgRaw *packages.Package) *Package {
|
||||
if l.packages[pkgRaw] == nil {
|
||||
l.packages[pkgRaw] = &Package{
|
||||
Package: pkgRaw,
|
||||
loader: l,
|
||||
}
|
||||
}
|
||||
return l.packages[pkgRaw]
|
||||
}
|
||||
|
||||
// packagesFor returns a map of Package objects for each packages.Package in the input
|
||||
// map, ensuring that there's a one-to-one mapping between package.Package and Package
|
||||
// (as per packageFor).
|
||||
func (l *loader) packagesFor(pkgsRaw map[string]*packages.Package) map[string]*Package {
|
||||
l.packagesMu.Lock()
|
||||
defer l.packagesMu.Unlock()
|
||||
|
||||
out := make(map[string]*Package, len(pkgsRaw))
|
||||
for name, rawPkg := range pkgsRaw {
|
||||
out[name] = l.packageFor(rawPkg)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// typeCheck type-checks the given package.
|
||||
func (l *loader) typeCheck(pkg *Package) {
|
||||
// don't conflict with typeCheckFromExportData
|
||||
|
||||
pkg.TypesInfo = &types.Info{
|
||||
Types: make(map[ast.Expr]types.TypeAndValue),
|
||||
Defs: make(map[*ast.Ident]types.Object),
|
||||
Uses: make(map[*ast.Ident]types.Object),
|
||||
Implicits: make(map[ast.Node]types.Object),
|
||||
Scopes: make(map[ast.Node]*types.Scope),
|
||||
Selections: make(map[*ast.SelectorExpr]*types.Selection),
|
||||
}
|
||||
|
||||
pkg.Fset = l.cfg.Fset
|
||||
pkg.Types = types.NewPackage(pkg.PkgPath, pkg.Name)
|
||||
|
||||
importer := importerFunc(func(path string) (*types.Package, error) {
|
||||
if path == "unsafe" {
|
||||
return types.Unsafe, nil
|
||||
}
|
||||
|
||||
// The imports map is keyed by import path.
|
||||
importedPkg := pkg.Imports()[path]
|
||||
if importedPkg == nil {
|
||||
return nil, fmt.Errorf("package %q possibly creates an import loop", path)
|
||||
}
|
||||
|
||||
// it's possible to have a call to check in parallel to a call to this
|
||||
// if one package in the package graph gets its dependency filtered out,
|
||||
// but another doesn't (so one wants a "dummy" package here, and another
|
||||
// wants the full check).
|
||||
//
|
||||
// Thus, we need to lock here (at least for the time being) to avoid
|
||||
// races between the above write to `pkg.Types` and this checking of
|
||||
// importedPkg.Types.
|
||||
importedPkg.Lock()
|
||||
defer importedPkg.Unlock()
|
||||
|
||||
if importedPkg.Types != nil && importedPkg.Types.Complete() {
|
||||
return importedPkg.Types, nil
|
||||
}
|
||||
|
||||
// if we haven't already loaded typecheck data, we don't care about this package's types
|
||||
return types.NewPackage(importedPkg.PkgPath, importedPkg.Name), nil
|
||||
})
|
||||
|
||||
var errs []error
|
||||
|
||||
// type-check
|
||||
checkConfig := &types.Config{
|
||||
Importer: importer,
|
||||
|
||||
IgnoreFuncBodies: true, // we only need decl-level info
|
||||
|
||||
Error: func(err error) {
|
||||
errs = append(errs, err)
|
||||
},
|
||||
|
||||
Sizes: pkg.TypesSizes,
|
||||
}
|
||||
if err := types.NewChecker(checkConfig, l.cfg.Fset, pkg.Types, pkg.TypesInfo).Files(pkg.Syntax); err != nil {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
|
||||
// make sure that if a given sub-import is ill-typed, we mark this package as ill-typed as well.
|
||||
illTyped := len(errs) > 0
|
||||
if !illTyped {
|
||||
for _, importedPkg := range pkg.Imports() {
|
||||
if importedPkg.IllTyped {
|
||||
illTyped = true
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
pkg.IllTyped = illTyped
|
||||
|
||||
// publish errors to the package error list.
|
||||
for _, err := range errs {
|
||||
pkg.AddError(err)
|
||||
}
|
||||
}
|
||||
|
||||
// parseFile parses the given file, including comments.
|
||||
func (l *loader) parseFile(filename string, src []byte) (*ast.File, error) {
|
||||
// skip function bodies
|
||||
file, err := parser.ParseFile(l.cfg.Fset, filename, src, parser.AllErrors|parser.ParseComments)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return file, nil
|
||||
}
|
||||
|
||||
// LoadRoots loads the given "root" packages by path, transitively loading
// all imports as well.
|
||||
//
|
||||
// Loaded packages will have type size, imports, and exports file information
|
||||
// populated. Additional information, like ASTs and type-checking information,
|
||||
// can be accessed via methods on individual packages.
|
||||
func LoadRoots(roots ...string) ([]*Package, error) {
|
||||
return LoadRootsWithConfig(&packages.Config{}, roots...)
|
||||
}
|
||||
|
||||
// LoadRootsWithConfig functions like LoadRoots, except that it allows passing
|
||||
// a custom loading config. The config will be modified to suit the needs of
|
||||
// the loader.
|
||||
//
|
||||
// This is generally only useful for use in testing when you need to modify
|
||||
// loading settings to load from a fake location.
|
||||
func LoadRootsWithConfig(cfg *packages.Config, roots ...string) ([]*Package, error) {
|
||||
l := &loader{
|
||||
cfg: cfg,
|
||||
packages: make(map[*packages.Package]*Package),
|
||||
}
|
||||
l.cfg.Mode |= packages.LoadImports | packages.NeedTypesSizes
|
||||
if l.cfg.Fset == nil {
|
||||
l.cfg.Fset = token.NewFileSet()
|
||||
}
|
||||
// put our build flags first so that callers can override them
|
||||
l.cfg.BuildFlags = append([]string{"-tags", "ignore_autogenerated"}, l.cfg.BuildFlags...)
|
||||
|
||||
rawPkgs, err := packages.Load(l.cfg, roots...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for _, rawPkg := range rawPkgs {
|
||||
l.Roots = append(l.Roots, l.packageFor(rawPkg))
|
||||
}
|
||||
|
||||
return l.Roots, nil
|
||||
}
|
||||
|
||||
// importFunc is an implementation of the single-method
|
||||
// types.Importer interface based on a function value.
|
||||
type importerFunc func(path string) (*types.Package, error)
|
||||
|
||||
func (f importerFunc) Import(path string) (*types.Package, error) { return f(path) }
|
32
vendor/sigs.k8s.io/controller-tools/pkg/loader/paths.go
generated
vendored
Normal file
@@ -0,0 +1,32 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package loader

import (
	"strings"
)

// NonVendorPath returns a package path that does not include anything before the
// last vendor directory. This is useful for when using vendor directories,
// and using go/types.Package.Path(), which returns the full path including vendor.
//
// If you're using this, make sure you really need it -- it's better to index by
// the actual Package object when you can.
func NonVendorPath(rawPath string) string {
	parts := strings.Split(rawPath, "/vendor/")
	return parts[len(parts)-1]
}
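A quick usage sketch of the helper above, re-declared so the snippet stands alone (the example import paths are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// NonVendorPath strips everything up to and including the *last*
// "/vendor/" segment, as in the helper above.
func NonVendorPath(rawPath string) string {
	parts := strings.Split(rawPath, "/vendor/")
	return parts[len(parts)-1]
}

func main() {
	fmt.Println(NonVendorPath("k8s.io/kubernetes/vendor/github.com/pkg/errors")) // github.com/pkg/errors
	fmt.Println(NonVendorPath("github.com/pkg/errors"))                          // unchanged
}
```

Splitting on the separator and taking the final element is what makes the *last* vendor directory win for nested vendoring (`a/vendor/b/vendor/c` yields `c`).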
|
268
vendor/sigs.k8s.io/controller-tools/pkg/loader/refs.go
generated
vendored
Normal file
@@ -0,0 +1,268 @@
|
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package loader
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"go/ast"
|
||||
"strconv"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// NB(directxman12): most of this is done by the typechecker,
// but it's a bit slow/heavyweight for what we want -- we want
// to resolve external imports *only* if we actually need them.

// Basically, what we do is:
// 1. Map imports to names
// 2. Find all explicit external references (`name.type`)
// 3. Find all referenced packages by merging explicit references and dot imports
// 4. Only type-check those packages
// 5. Ignore type-checking errors from the missing packages, because we won't ever
//    touch unloaded types (they're probably used in ignored fields/types, variables, or functions)
//    (done using PrintErrors with an ignore argument from the caller).
// 6. Notice any actual type-checking errors via invalid types
|
||||
|
||||
// importsMap saves import aliases, mapping them to underlying packages.
|
||||
type importsMap struct {
|
||||
// dotImports maps package IDs to packages for any packages that have been imported as `.`
|
||||
dotImports map[string]*Package
|
||||
// byName maps package aliases or names to the underlying package.
|
||||
byName map[string]*Package
|
||||
}
|
||||
|
||||
// mapImports maps imports from the names they use in the given file to the underlying package,
|
||||
// using a map of package import paths to packages (generally from Package.Imports()).
|
||||
func mapImports(file *ast.File, importedPkgs map[string]*Package) (*importsMap, error) {
|
||||
m := &importsMap{
|
||||
dotImports: make(map[string]*Package),
|
||||
byName: make(map[string]*Package),
|
||||
}
|
||||
for _, importSpec := range file.Imports {
|
||||
path, err := strconv.Unquote(importSpec.Path.Value)
|
||||
if err != nil {
|
||||
return nil, ErrFromNode(err, importSpec.Path)
|
||||
}
|
||||
importedPkg := importedPkgs[path]
|
||||
if importedPkg == nil {
|
||||
return nil, ErrFromNode(fmt.Errorf("no such package located"), importSpec.Path)
|
||||
}
|
||||
if importSpec.Name == nil {
|
||||
m.byName[importedPkg.Name] = importedPkg
|
||||
continue
|
||||
}
|
||||
if importSpec.Name.Name == "." {
|
||||
m.dotImports[importedPkg.ID] = importedPkg
|
||||
continue
|
||||
}
|
||||
m.byName[importSpec.Name.Name] = importedPkg
|
||||
}
|
||||
|
||||
return m, nil
|
||||
}
|
||||
|
||||
// referenceSet finds references to external packages' types in the given file,
|
||||
// without otherwise calling into the type-checker. When checking structs,
|
||||
// it only checks fields with JSON tags.
|
||||
type referenceSet struct {
|
||||
file *ast.File
|
||||
imports *importsMap
|
||||
pkg *Package
|
||||
|
||||
externalRefs map[*Package]struct{}
|
||||
}
|
||||
|
||||
func (r *referenceSet) init() {
|
||||
if r.externalRefs == nil {
|
||||
r.externalRefs = make(map[*Package]struct{})
|
||||
}
|
||||
}
|
||||
|
||||
// NodeFilter filters nodes, accepting them for reference collection
|
||||
// when true is returned and rejecting them when false is returned.
|
||||
type NodeFilter func(ast.Node) bool
|
||||
|
||||
// collectReferences saves all references to external types in the given info.
|
||||
func (r *referenceSet) collectReferences(rawType ast.Expr, filterNode NodeFilter) {
|
||||
r.init()
|
||||
col := &referenceCollector{
|
||||
refs: r,
|
||||
filterNode: filterNode,
|
||||
}
|
||||
ast.Walk(col, rawType)
|
||||
}
|
||||
|
||||
// external saves an external reference to the given named package.
|
||||
func (r *referenceSet) external(pkgName string) {
|
||||
pkg := r.imports.byName[pkgName]
|
||||
if pkg == nil {
|
||||
r.pkg.AddError(fmt.Errorf("use of unimported package %q", pkgName))
|
||||
return
|
||||
}
|
||||
r.externalRefs[pkg] = struct{}{}
|
||||
}
|
||||
|
||||
// referenceCollector visits nodes in an AST, adding external references to a
|
||||
// referenceSet.
|
||||
type referenceCollector struct {
|
||||
refs *referenceSet
|
||||
filterNode NodeFilter
|
||||
}
|
||||
|
||||
func (c *referenceCollector) Visit(node ast.Node) ast.Visitor {
|
||||
if !c.filterNode(node) {
|
||||
return nil
|
||||
}
|
||||
switch typedNode := node.(type) {
|
||||
case *ast.Ident:
|
||||
// local reference or dot-import, ignore
|
||||
return nil
|
||||
case *ast.SelectorExpr:
|
||||
pkgName := typedNode.X.(*ast.Ident).Name
|
||||
c.refs.external(pkgName)
|
||||
return nil
|
||||
default:
|
||||
return c
|
||||
}
|
||||
}
|
||||
|
||||
// allReferencedPackages finds all directly referenced packages in the given package.
|
||||
func allReferencedPackages(pkg *Package, filterNodes NodeFilter) []*Package {
|
||||
pkg.NeedSyntax()
|
||||
refsByFile := make(map[*ast.File]*referenceSet)
|
||||
for _, file := range pkg.Syntax {
|
||||
imports, err := mapImports(file, pkg.Imports())
|
||||
if err != nil {
|
||||
pkg.AddError(err)
|
||||
return nil
|
||||
}
|
||||
refs := &referenceSet{
|
||||
file: file,
|
||||
imports: imports,
|
||||
pkg: pkg,
|
||||
}
|
||||
refsByFile[file] = refs
|
||||
}
|
||||
|
||||
EachType(pkg, func(file *ast.File, decl *ast.GenDecl, spec *ast.TypeSpec) {
|
||||
refs := refsByFile[file]
|
||||
refs.collectReferences(spec.Type, filterNodes)
|
||||
})
|
||||
|
||||
allPackages := make(map[*Package]struct{})
|
||||
for _, refs := range refsByFile {
|
||||
for _, pkg := range refs.imports.dotImports {
|
||||
allPackages[pkg] = struct{}{}
|
||||
}
|
||||
for ref := range refs.externalRefs {
|
||||
allPackages[ref] = struct{}{}
|
||||
}
|
||||
}
|
||||
|
||||
res := make([]*Package, 0, len(allPackages))
|
||||
for pkg := range allPackages {
|
||||
res = append(res, pkg)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
// TypeChecker performs type-checking on a limited subset of packages by
|
||||
// checking each package's types' externally-referenced types, and only
|
||||
// type-checking those packages.
|
||||
type TypeChecker struct {
|
||||
// NodeFilters are used to filter the set of references that are followed
|
||||
// when typechecking. If any of the filters returns true for a given node,
|
||||
// its package will be added to the set of packages to check.
|
||||
//
|
||||
// If no filters are specified, all references are followed (this may be slow).
|
||||
//
|
||||
// Modifying this after the first call to check may yield strange/invalid
|
||||
// results.
|
||||
NodeFilters []NodeFilter
|
||||
|
||||
checkedPackages map[*Package]struct{}
|
||||
sync.Mutex
|
||||
}
|
||||
|
||||
// Check type-checks the given package and all packages referenced by types
|
||||
// that pass through (have true returned by) any of the NodeFilters.
|
||||
func (c *TypeChecker) Check(root *Package) {
|
||||
c.init()
|
||||
|
||||
// use a sub-checker with the appropriate settings
|
||||
(&TypeChecker{
|
||||
NodeFilters: c.NodeFilters,
|
||||
checkedPackages: c.checkedPackages,
|
||||
}).check(root)
|
||||
}
|
||||
|
||||
func (c *TypeChecker) isNodeInteresting(node ast.Node) bool {
|
||||
// no filters --> everything is important
|
||||
if len(c.NodeFilters) == 0 {
|
||||
return true
|
||||
}
|
||||
|
||||
// otherwise, passing through any one filter means this node is important
|
||||
for _, filter := range c.NodeFilters {
|
||||
if filter(node) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (c *TypeChecker) init() {
|
||||
if c.checkedPackages == nil {
|
||||
c.checkedPackages = make(map[*Package]struct{})
|
||||
}
|
||||
}
|
||||
|
||||
// check recursively type-checks the given package, only loading packages that
|
||||
// are actually referenced by our types (it's the actual implementation of Check,
|
||||
// without initialization).
|
||||
func (c *TypeChecker) check(root *Package) {
|
||||
root.Lock()
|
||||
defer root.Unlock()
|
||||
|
||||
c.Lock()
|
||||
_, ok := c.checkedPackages[root]
|
||||
c.Unlock()
|
||||
if ok {
|
||||
return
|
||||
}
|
||||
|
||||
refedPackages := allReferencedPackages(root, c.isNodeInteresting)
|
||||
|
||||
// first, resolve imports for all leaf packages...
|
||||
var wg sync.WaitGroup
|
||||
for _, pkg := range refedPackages {
|
||||
wg.Add(1)
|
||||
go func(pkg *Package) {
|
||||
defer wg.Done()
|
||||
c.check(pkg)
|
||||
}(pkg)
|
||||
}
|
||||
wg.Wait()
|
||||
|
||||
// ...then, we can safely type-check ourself
|
||||
root.NeedTypesInfo()
|
||||
|
||||
c.Lock()
|
||||
defer c.Unlock()
|
||||
c.checkedPackages[root] = struct{}{}
|
||||
}
|
81
vendor/sigs.k8s.io/controller-tools/pkg/loader/visit.go
generated
vendored
Normal file
@@ -0,0 +1,81 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package loader

import (
	"go/ast"
	"reflect"
	"strconv"
)

// TypeCallback is a callback called for each raw AST (gendecl, typespec) combo.
type TypeCallback func(file *ast.File, decl *ast.GenDecl, spec *ast.TypeSpec)

// EachType calls the given callback for each (gendecl, typespec) combo in the
// given package. Generally, using markers.EachType is better when working
// with marker data, and has a more convenient representation.
func EachType(pkg *Package, cb TypeCallback) {
	visitor := &typeVisitor{
		callback: cb,
	}
	pkg.NeedSyntax()
	for _, file := range pkg.Syntax {
		visitor.file = file
		ast.Walk(visitor, file)
	}
}

// typeVisitor visits all TypeSpecs, calling the given callback for each.
type typeVisitor struct {
	callback TypeCallback
	decl     *ast.GenDecl
	file     *ast.File
}

// Visit visits all TypeSpecs.
func (v *typeVisitor) Visit(node ast.Node) ast.Visitor {
	if node == nil {
		v.decl = nil
		return v
	}

	switch typedNode := node.(type) {
	case *ast.File:
		v.file = typedNode
		return v
	case *ast.GenDecl:
		v.decl = typedNode
		return v
	case *ast.TypeSpec:
		v.callback(v.file, v.decl, typedNode)
		return nil // don't recurse
	default:
		return nil
	}
}

// ParseAstTag parses the given raw tag literal into a reflect.StructTag.
func ParseAstTag(tag *ast.BasicLit) reflect.StructTag {
	if tag == nil {
		return reflect.StructTag("")
	}
	tagStr, err := strconv.Unquote(tag.Value)
	if err != nil {
		return reflect.StructTag("")
	}
	return reflect.StructTag(tagStr)
}
|
422
vendor/sigs.k8s.io/controller-tools/pkg/markers/collect.go
generated
vendored
Normal file
@@ -0,0 +1,422 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"go/ast"
	"go/token"
	"strings"
	"sync"

	"sigs.k8s.io/controller-tools/pkg/loader"
)

// Collector collects and parses marker comments defined in the registry
// from package source code. If no registry is provided, an empty one will
// be initialized on the first call to MarkersInPackage.
type Collector struct {
	*Registry

	byPackage map[string]map[ast.Node]MarkerValues
	mu        sync.Mutex
}

// MarkerValues are all the values for some set of markers.
type MarkerValues map[string][]interface{}

// Get fetches the first value for the given marker, returning
// nil if no values are available.
func (v MarkerValues) Get(name string) interface{} {
	vals := v[name]
	if len(vals) == 0 {
		return nil
	}
	return vals[0]
}

func (c *Collector) init() {
	if c.Registry == nil {
		c.Registry = &Registry{}
	}
	if c.byPackage == nil {
		c.byPackage = make(map[string]map[ast.Node]MarkerValues)
	}
}

// MarkersInPackage computes the marker values by node for the given package. Results
// are cached by package ID, so this is safe to call repeatedly from different functions.
// Each file in the package is treated as a distinct node.
//
// We consider a marker to be associated with a given AST node if any of the following are true:
//
// - it's in the Godoc for that AST node
//
// - it's in the closest non-godoc comment group above that node,
//   *and* that node is a type or field node, *and* [it's either
//   registered as type-level *or* it's not registered as being
//   package-level]
//
// - it's not in the Godoc of a node, doesn't meet the above criteria, and
//   isn't in a struct definition (in which case it's package-level)
func (c *Collector) MarkersInPackage(pkg *loader.Package) (map[ast.Node]MarkerValues, error) {
	c.mu.Lock()
	c.init()
	if markers, exist := c.byPackage[pkg.ID]; exist {
		c.mu.Unlock()
		return markers, nil
	}
	// unlock early; it's OK if we do a bit of extra work rather than holding the lock while we work
	c.mu.Unlock()

	pkg.NeedSyntax()
	nodeMarkersRaw := c.associatePkgMarkers(pkg)
	markers, err := c.parseMarkersInPackage(nodeMarkersRaw)
	if err != nil {
		return nil, err
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	c.byPackage[pkg.ID] = markers

	return markers, nil
}

// parseMarkersInPackage parses the given raw marker comments into output values using the registry.
func (c *Collector) parseMarkersInPackage(nodeMarkersRaw map[ast.Node][]markerComment) (map[ast.Node]MarkerValues, error) {
	var errors []error
	nodeMarkerValues := make(map[ast.Node]MarkerValues)
	for node, markersRaw := range nodeMarkersRaw {
		var target TargetType
		switch node.(type) {
		case *ast.File:
			target = DescribesPackage
		case *ast.Field:
			target = DescribesField
		default:
			target = DescribesType
		}
		markerVals := make(map[string][]interface{})
		for _, markerRaw := range markersRaw {
			markerText := markerRaw.Text()
			def := c.Registry.Lookup(markerText, target)
			if def == nil {
				continue
			}
			val, err := def.Parse(markerText)
			if err != nil {
				errors = append(errors, loader.ErrFromNode(err, markerRaw))
				continue
			}
			markerVals[def.Name] = append(markerVals[def.Name], val)
		}
		nodeMarkerValues[node] = markerVals
	}

	return nodeMarkerValues, loader.MaybeErrList(errors)
}

// associatePkgMarkers associates markers with AST nodes in the given package.
func (c *Collector) associatePkgMarkers(pkg *loader.Package) map[ast.Node][]markerComment {
	nodeMarkers := make(map[ast.Node][]markerComment)
	for _, file := range pkg.Syntax {
		fileNodeMarkers := c.associateFileMarkers(file)
		for node, markers := range fileNodeMarkers {
			nodeMarkers[node] = append(nodeMarkers[node], markers...)
		}
	}

	return nodeMarkers
}

// associateFileMarkers associates markers with AST nodes in the given file.
func (c *Collector) associateFileMarkers(file *ast.File) map[ast.Node][]markerComment {
	// grab all the raw marker comments by node
	visitor := markerSubVisitor{
		collectPackageLevel: true,
		markerVisitor: &markerVisitor{
			nodeMarkers: make(map[ast.Node][]markerComment),
			allComments: file.Comments,
		},
	}
	ast.Walk(visitor, file)

	// grab the last package-level comments at the end of the file (if any)
	lastFileMarkers := visitor.markersBetween(false, visitor.commentInd, len(visitor.allComments))
	visitor.pkgMarkers = append(visitor.pkgMarkers, lastFileMarkers...)

	// figure out if any type-level markers are actually package-level markers
	for node, markers := range visitor.nodeMarkers {
		_, isType := node.(*ast.TypeSpec)
		if !isType {
			continue
		}
		endOfMarkers := 0
		for _, marker := range markers {
			if marker.fromGodoc {
				// markers from godoc are never package level
				markers[endOfMarkers] = marker
				endOfMarkers++
				continue
			}
			markerText := marker.Text()
			typeDef := c.Registry.Lookup(markerText, DescribesType)
			if typeDef != nil {
				// prefer assuming type-level markers
				markers[endOfMarkers] = marker
				endOfMarkers++
				continue
			}
			def := c.Registry.Lookup(markerText, DescribesPackage)
			if def == nil {
				// assume type-level unless proven otherwise
				markers[endOfMarkers] = marker
				endOfMarkers++
				continue
			}
			// it's package-level, since a package-level definition exists
			visitor.pkgMarkers = append(visitor.pkgMarkers, marker)
		}
		visitor.nodeMarkers[node] = markers[:endOfMarkers] // re-set after trimming the package markers
	}
	visitor.nodeMarkers[file] = visitor.pkgMarkers

	return visitor.nodeMarkers
}

// markerComment is an AST comment that contains a marker.
// It may or may not be from a Godoc comment, which affects
// marker re-association (from type-level to package-level).
type markerComment struct {
	*ast.Comment
	fromGodoc bool
}

// Text returns the text of the marker, stripped of the comment
// marker and leading spaces, as should be passed to Registry.Lookup
// and Registry.Parse.
func (c markerComment) Text() string {
	return strings.TrimSpace(c.Comment.Text[2:])
}

// markerVisitor visits AST nodes, recording markers associated with each node.
type markerVisitor struct {
	allComments []*ast.CommentGroup
	commentInd  int

	declComments         []markerComment
	lastLineCommentGroup *ast.CommentGroup

	pkgMarkers  []markerComment
	nodeMarkers map[ast.Node][]markerComment
}

// isMarkerComment checks that the given comment is a single-line (`//`)
// comment and that its first non-space content is `+`.
func isMarkerComment(comment string) bool {
	if comment[0:2] != "//" {
		return false
	}
	stripped := strings.TrimSpace(comment[2:])
	if len(stripped) < 1 || stripped[0] != '+' {
		return false
	}
	return true
}

// markersBetween grabs the markers between the given indices in the list of all comments.
func (v *markerVisitor) markersBetween(fromGodoc bool, start, end int) []markerComment {
	if start < 0 || end < 0 {
		return nil
	}
	var res []markerComment
	for i := start; i < end; i++ {
		commentGroup := v.allComments[i]
		for _, comment := range commentGroup.List {
			if !isMarkerComment(comment.Text) {
				continue
			}
			res = append(res, markerComment{Comment: comment, fromGodoc: fromGodoc})
		}
	}
	return res
}

type markerSubVisitor struct {
	*markerVisitor
	node                ast.Node
	collectPackageLevel bool
}

// Visit collects markers for each node in the AST, optionally
// collecting unassociated markers as package-level.
func (v markerSubVisitor) Visit(node ast.Node) ast.Visitor {
	if node == nil {
		// end of the node, so we might need to advance comments beyond the end
		// of the block if we don't want to collect package-level markers in
		// this block.

		if !v.collectPackageLevel {
			if v.commentInd < len(v.allComments) {
				lastCommentInd := v.commentInd
				nextGroup := v.allComments[lastCommentInd]
				for nextGroup.Pos() < v.node.End() {
					lastCommentInd++
					if lastCommentInd >= len(v.allComments) {
						// after the increment so our decrement below still makes sense
						break
					}
					nextGroup = v.allComments[lastCommentInd]
				}
				v.commentInd = lastCommentInd
			}
		}

		return nil
	}

	// skip comments on the same line as the previous node,
	// making sure to double-check for the case where we've gone past the end of the comments
	// but still have to finish up typespec-gendecl association (see below).
	if v.lastLineCommentGroup != nil && v.commentInd < len(v.allComments) && v.lastLineCommentGroup.Pos() == v.allComments[v.commentInd].Pos() {
		v.commentInd++
	}

	// stop visiting if there are no more comments in the file.
	// NB(directxman12): we can't just stop immediately, because we
	// still need to check if there are typespecs associated with gendecls.
	var markerCommentBlock []markerComment
	var docCommentBlock []markerComment
	lastCommentInd := v.commentInd
	if v.commentInd < len(v.allComments) {
		// figure out the first comment after the node in question...
		nextGroup := v.allComments[lastCommentInd]
		for nextGroup.Pos() < node.Pos() {
			lastCommentInd++
			if lastCommentInd >= len(v.allComments) {
				// after the increment so our decrement below still makes sense
				break
			}
			nextGroup = v.allComments[lastCommentInd]
		}
		lastCommentInd-- // ...then decrement to get the last comment before the node in question

		// figure out the godoc comment so we can deal with it separately
		var docGroup *ast.CommentGroup
		docGroup, v.lastLineCommentGroup = associatedCommentsFor(node)

		// find the last comment group that's not godoc
		markerCommentInd := lastCommentInd
		if docGroup != nil && v.allComments[markerCommentInd].Pos() == docGroup.Pos() {
			markerCommentInd--
		}

		// check if we have freestanding package markers,
		// and find the markers in our "closest non-godoc" comment block,
		// plus our godoc comment block
		if markerCommentInd >= v.commentInd {
			if v.collectPackageLevel {
				// assume anything between the comment ind and the marker ind (not including it)
				// are package-level
				v.pkgMarkers = append(v.pkgMarkers, v.markersBetween(false, v.commentInd, markerCommentInd)...)
			}
			markerCommentBlock = v.markersBetween(false, markerCommentInd, markerCommentInd+1)
			docCommentBlock = v.markersBetween(true, markerCommentInd+1, lastCommentInd+1)
		} else {
			docCommentBlock = v.markersBetween(true, markerCommentInd+1, lastCommentInd+1)
		}
	}

	resVisitor := markerSubVisitor{
		collectPackageLevel: false, // don't collect package level by default
		markerVisitor:       v.markerVisitor,
		node:                node,
	}

	// associate those markers with a node
	switch typedNode := node.(type) {
	case *ast.GenDecl:
		// save the comments associated with the gen-decl if it's a single-line type decl
		if typedNode.Lparen != token.NoPos || typedNode.Tok != token.TYPE {
			// not a single-line type spec, treat them as free comments
			v.pkgMarkers = append(v.pkgMarkers, markerCommentBlock...)
			break
		}
		// save these, we'll need them when we encounter the actual type spec
		v.declComments = append(v.declComments, markerCommentBlock...)
		v.declComments = append(v.declComments, docCommentBlock...)
	case *ast.TypeSpec:
		// add in comments attributed to the gen-decl, if any,
		// as well as comments associated with the actual type
		v.nodeMarkers[node] = append(v.nodeMarkers[node], v.declComments...)
		v.nodeMarkers[node] = append(v.nodeMarkers[node], markerCommentBlock...)
		v.nodeMarkers[node] = append(v.nodeMarkers[node], docCommentBlock...)

		v.declComments = nil
		v.collectPackageLevel = false // don't collect package-level inside type structs
	case *ast.Field:
		v.nodeMarkers[node] = append(v.nodeMarkers[node], markerCommentBlock...)
		v.nodeMarkers[node] = append(v.nodeMarkers[node], docCommentBlock...)
	case *ast.File:
		v.pkgMarkers = append(v.pkgMarkers, markerCommentBlock...)
		v.pkgMarkers = append(v.pkgMarkers, docCommentBlock...)

		// collect markers in root file scope
		resVisitor.collectPackageLevel = true
	default:
		// assume markers before anything else are package-level markers,
		// *but* don't include any markers in godoc
		if v.collectPackageLevel {
			v.pkgMarkers = append(v.pkgMarkers, markerCommentBlock...)
		}
	}

	// increment the comment ind so that we start at the right place for the next node
	v.commentInd = lastCommentInd + 1

	return resVisitor
}

// associatedCommentsFor returns the doc comment group (if relevant and present) and end-of-line comment
// (again if relevant and present) for the given AST node.
func associatedCommentsFor(node ast.Node) (docGroup *ast.CommentGroup, lastLineCommentGroup *ast.CommentGroup) {
	switch typedNode := node.(type) {
	case *ast.Field:
		docGroup = typedNode.Doc
		lastLineCommentGroup = typedNode.Comment
	case *ast.File:
		docGroup = typedNode.Doc
	case *ast.FuncDecl:
		docGroup = typedNode.Doc
	case *ast.GenDecl:
		docGroup = typedNode.Doc
	case *ast.ImportSpec:
		docGroup = typedNode.Doc
		lastLineCommentGroup = typedNode.Comment
	case *ast.TypeSpec:
		docGroup = typedNode.Doc
		lastLineCommentGroup = typedNode.Comment
	case *ast.ValueSpec:
		docGroup = typedNode.Doc
		lastLineCommentGroup = typedNode.Comment
	default:
		lastLineCommentGroup = nil
	}

	return docGroup, lastLineCommentGroup
}
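The marker-detection rule above (a `//` comment whose first non-space content after the slashes is `+`) can be sketched standalone; `isMarker` is a hypothetical stand-in using only the standard library, with a `strings.HasPrefix` guard instead of the vendored slice expression:

```go
package main

import (
	"fmt"
	"strings"
)

// isMarker mirrors isMarkerComment: only single-line comments whose first
// non-space content after "//" is '+' are treated as markers.
func isMarker(comment string) bool {
	if !strings.HasPrefix(comment, "//") {
		return false
	}
	stripped := strings.TrimSpace(comment[2:])
	return len(stripped) > 0 && stripped[0] == '+'
}

func main() {
	for _, c := range []string{
		"// +kubebuilder:object:root=true",
		"// plain comment",
		"/* +not-a-marker */",
	} {
		fmt.Printf("%q -> %v\n", c, isMarker(c))
	}
}
```

Block comments never match, which is why markers must live in `//` comments even inside a `/* */` Godoc block.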
113
vendor/sigs.k8s.io/controller-tools/pkg/markers/doc.go
generated
vendored
Normal file
@@ -0,0 +1,113 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package markers contains utilities for defining and parsing "marker
// comments", also occasionally called tag comments (we use the term marker to
// avoid confusion with struct tags). Parsed result (output) values take the
// form of Go values, much like the "encoding/json" package.
//
// Definitions and Parsing
//
// Markers are defined as structured Definitions which can be used to
// consistently parse marker comments. A Definition contains a concrete
// output type for the marker, which can be a simple type (like string), a
// struct, or a wrapper type (useful for defining additional methods on marker
// types).
//
// Markers take the general form
//
//	+path:to:marker=val
//
//	+path:to:marker:arg1=val,arg2=val2
//
//	+path:to:marker
//
// Arguments may be ints, bools, strings, and slices. Ints and bools take
// their standard Go forms. Strings may take any of their standard forms, or
// any sequence of unquoted characters up until a `,` or `;` is encountered.
// Lists take either of the following forms:
//
//	val;val;val
//
//	{val, val, val}
//
// Note that the first form will not properly parse nested slices, but is
// generally convenient and is the form used in many existing markers.
//
// Each of those argument types maps to the corresponding Go type. Pointers
// mark optional fields (a struct tag, below, may also be used). The empty
// interface will match any type.
//
// Struct fields may optionally be annotated with the `marker` struct tag. The
// first argument is a name override. If it's left blank (or the tag isn't
// present), the camelCase version of the name will be used. The only
// additional argument defined is `optional`, which marks a field as optional
// without using a pointer.
//
// All parsed values are unmarshalled into the output type. If any
// non-optional fields aren't mentioned, an error will be raised unless
// `Strict` is set to false.
//
// Registries and Lookup
//
// Definitions can be added to registries to facilitate lookups. Each
// definition is marked as either describing a type, struct field, or package
// (unassociated). The same marker name may be registered multiple times, as
// long as each describes a different construct (type, field, or package).
// Definitions can then be looked up by passing unparsed markers.
//
// Collection and Extraction
//
// Markers can be collected from a loader.Package using a Collector. The
// Collector will read from a given Registry, collecting comments that look
// like markers and parsing them if they match some definition on the registry.
//
// Markers are considered associated with a particular field or type if they
// exist in the Godoc, or the closest non-godoc comment. Any other markers not
// inside some other block (e.g. a struct definition, interface definition,
// etc.) are considered package level. Markers in a "closest non-Go comment
// block" may also be considered package level if registered as such and no
// identical type-level definition exists.
//
// Like loader.Package, Collector's methods are idempotent and will not
// re-perform work.
//
// Traversal
//
// The EachType function iterates over each type in a Package, providing
// conveniently structured type and field information with associated marker
// values.
//
// PackageMarkers can be used to fetch just package-level markers.
//
// Help
//
// Help can be defined for each marker using the DefinitionHelp struct. It's
// mostly intended to be generated off of godocs using cmd/helpgen, which
// takes the first line as a summary (removing the type/field name), and
// considers the rest as details. It looks for the
//
//	+controllertools:generateHelp[:category=<string>]
//
// marker to start generation.
//
// If you can't use godoc-based generation for whatever reason (e.g.
// primitive-typed markers), you can use the SimpleHelp and DeprecatedHelp
// helper functions to generate help structs.
//
// Help is then registered into a registry as associated with the actual
// definition, and can then be later retrieved from the registry.
package markers
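The general marker shape described in the package docs (`+path:to:marker:arg1=val,arg2=val2`) can be illustrated with a deliberately simplified splitter. `splitMarker` is a hypothetical helper for this sketch only; the real package's parser is far more complete (quoting, slices, maps, anonymous values):

```go
package main

import (
	"fmt"
	"strings"
)

// splitMarker separates a simple marker into its name and "arg=val" pairs:
// the name is everything up to the last ':' preceding the first '=',
// and the tail is split on ',' into key=value arguments.
func splitMarker(raw string) (string, map[string]string) {
	raw = strings.TrimPrefix(raw, "+")
	args := map[string]string{}
	nameEnd := len(raw)
	if eq := strings.Index(raw, "="); eq >= 0 {
		if colon := strings.LastIndex(raw[:eq], ":"); colon >= 0 {
			nameEnd = colon
		} else {
			nameEnd = eq // "+marker=val" form: a single anonymous value
		}
	}
	name := raw[:nameEnd]
	if nameEnd < len(raw) {
		for _, pair := range strings.Split(strings.TrimPrefix(raw[nameEnd:], ":"), ",") {
			if kv := strings.SplitN(pair, "=", 2); len(kv) == 2 {
				args[kv[0]] = kv[1]
			}
		}
	}
	return name, args
}

func main() {
	name, args := splitMarker("+path:to:marker:arg1=val,arg2=val2")
	fmt.Println(name, args["arg1"], args["arg2"]) // path:to:marker val val2
}
```

As the docs note, this comma-split style cannot handle nested slices or quoted commas; that is exactly what the package's scanner-based parser exists for.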
81
vendor/sigs.k8s.io/controller-tools/pkg/markers/help.go
generated
vendored
Normal file
@@ -0,0 +1,81 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

// You *probably* don't want to write these structs by hand
// -- use cmd/helpgen if you can write Godoc, and {Simple,Deprecated}Help
// otherwise.

// DetailedHelp contains brief help, as well as more details.
// For the "full" help, join the two together.
type DetailedHelp struct {
	Summary string
	Details string
}

// DefinitionHelp contains overall help for a marker Definition,
// as well as per-field help.
type DefinitionHelp struct {
	// DetailedHelp contains the overall help for the marker.
	DetailedHelp
	// Category describes what kind of marker this is.
	Category string
	// DeprecatedInFavorOf marks the marker as deprecated.
	// If non-nil & empty, it's assumed to just mean deprecated permanently.
	// If non-empty, it's assumed to be a marker name.
	DeprecatedInFavorOf *string

	// NB(directxman12): we make FieldHelp be in terms of the Go struct field
	// names so that we don't have to know the conversion or processing rules
	// for struct fields at compile-time for help generation.

	// FieldHelp defines the per-field help for this marker, *in terms of the
	// Go struct field names*. Use the FieldsHelp method to map this to
	// marker argument names.
	FieldHelp map[string]DetailedHelp
}

// FieldsHelp maps per-field help to the actual marker argument names from the
// given definition.
func (d *DefinitionHelp) FieldsHelp(def *Definition) map[string]DetailedHelp {
	fieldsHelp := make(map[string]DetailedHelp, len(def.FieldNames))
	for fieldName, argName := range def.FieldNames {
		fieldsHelp[fieldName] = d.FieldHelp[argName]
	}
	return fieldsHelp
}

// SimpleHelp returns help that just has marker-level summary information
// (e.g. for use with empty or primitive-typed markers, where Godoc-based
// generation isn't possible).
func SimpleHelp(category, summary string) *DefinitionHelp {
	return &DefinitionHelp{
		Category:     category,
		DetailedHelp: DetailedHelp{Summary: summary},
	}
}

// DeprecatedHelp returns simple help (a la SimpleHelp), except marked as
// deprecated in favor of the given marker (or an empty string for just
// deprecated).
func DeprecatedHelp(inFavorOf, category, summary string) *DefinitionHelp {
	return &DefinitionHelp{
		Category:            category,
		DetailedHelp:        DetailedHelp{Summary: summary},
		DeprecatedInFavorOf: &inFavorOf,
	}
}
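The three-state pointer semantics of DeprecatedInFavorOf (nil = not deprecated, pointer to "" = deprecated outright, pointer to a name = deprecated in favor of that marker) can be illustrated with a minimal standalone copy of the two structs; this sketch does not import controller-tools, and `deprecatedHelp` is a local stand-in for the real constructor:

```go
package main

import "fmt"

type DetailedHelp struct{ Summary, Details string }

type DefinitionHelp struct {
	DetailedHelp
	Category            string
	DeprecatedInFavorOf *string
}

// deprecatedHelp mirrors markers.DeprecatedHelp: it always takes the address
// of inFavorOf, so the result is never "not deprecated".
func deprecatedHelp(inFavorOf, category, summary string) *DefinitionHelp {
	return &DefinitionHelp{
		Category:            category,
		DetailedHelp:        DetailedHelp{Summary: summary},
		DeprecatedInFavorOf: &inFavorOf,
	}
}

func main() {
	h := deprecatedHelp("object:root", "object", "marks the root object")
	switch {
	case h.DeprecatedInFavorOf == nil:
		fmt.Println("not deprecated")
	case *h.DeprecatedInFavorOf == "":
		fmt.Println("deprecated")
	default:
		fmt.Println("deprecated in favor of", *h.DeprecatedInFavorOf) // this branch
	}
}
```

Using a `*string` instead of a bool plus a string keeps the "not deprecated" case distinguishable from "deprecated with no replacement" without an extra field.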
923
vendor/sigs.k8s.io/controller-tools/pkg/markers/parse.go
generated
vendored
Normal file
@@ -0,0 +1,923 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"bytes"
	"fmt"
	"reflect"
	"strconv"
	"strings"
	sc "text/scanner"
	"unicode"

	"sigs.k8s.io/controller-tools/pkg/loader"
)

// expect checks that the next token of the scanner is the given token, adding an error
// to the scanner if not. It returns whether the token was as expected.
func expect(scanner *sc.Scanner, expected rune, errDesc string) bool {
	tok := scanner.Scan()
	if tok != expected {
		scanner.Error(scanner, fmt.Sprintf("expected %s, got %q", errDesc, scanner.TokenText()))
		return false
	}
	return true
}

// peekNoSpace is equivalent to scanner.Peek, except that it will consume intervening whitespace.
func peekNoSpace(scanner *sc.Scanner) rune {
	hint := scanner.Peek()
	for ; hint <= rune(' ') && ((1<<uint64(hint))&scanner.Whitespace) != 0; hint = scanner.Peek() {
		scanner.Next() // skip the whitespace
	}
	return hint
}

var (
	// interfaceType is a pre-computed reflect.Type representing the empty interface.
	interfaceType = reflect.TypeOf((*interface{})(nil)).Elem()
	rawArgsType   = reflect.TypeOf((*RawArguments)(nil)).Elem()
)

// lowerCamelCase converts a PascalCase string to
// a camelCase string (by lowering the first rune).
func lowerCamelCase(in string) string {
	isFirst := true
	return strings.Map(func(inRune rune) rune {
		if isFirst {
			isFirst = false
			return unicode.ToLower(inRune)
		}
		return inRune
	}, in)
}
// RawArguments is a special type that can be used for a marker
// to receive *all* raw, underparsed argument data for a marker.
// You probably want to use `interface{}` to match any type instead.
// Use *only* for legacy markers that don't follow Definition's normal
// parsing logic. It should *not* be used as a field in a marker struct.
type RawArguments []byte

// ArgumentType is the kind of a marker argument type.
// It's roughly analogous to a subset of reflect.Kind, with
// an extra "AnyType" to represent the empty interface.
type ArgumentType int

const (
	// InvalidType represents a type that can't be parsed, and should never be used.
	InvalidType ArgumentType = iota
	// IntType is an int.
	IntType
	// StringType is a string.
	StringType
	// BoolType is a bool.
	BoolType
	// AnyType is the empty interface, and matches the rest of the content.
	AnyType
	// SliceType is any slice constructed of the ArgumentTypes.
	SliceType
	// MapType is any map constructed of string keys, and ArgumentType values.
	// Keys are strings, and it's common to see AnyType (non-uniform) values.
	MapType
	// RawType represents content that gets passed directly to the marker
	// without any parsing. It should *only* be used with anonymous markers.
	RawType
)

// Argument is the type of a marker argument.
type Argument struct {
	// Type is the type of this argument. For non-scalar types (map and slice),
	// further information is specified in ItemType.
	Type ArgumentType
	// Optional indicates if this argument is optional.
	Optional bool
	// Pointer indicates if this argument was a pointer (this is really only
	// needed for deserialization, and should always imply Optional).
	Pointer bool

	// ItemType is the type of the slice item for slices, and the value type
	// for maps.
	ItemType *Argument
}

// typeString contains the internals of TypeString.
func (a Argument) typeString(out *strings.Builder) {
	if a.Pointer {
		out.WriteRune('*')
	}

	switch a.Type {
	case InvalidType:
		out.WriteString("<invalid>")
	case IntType:
		out.WriteString("int")
	case StringType:
		out.WriteString("string")
	case BoolType:
		out.WriteString("bool")
	case AnyType:
		out.WriteString("<any>")
	case SliceType:
		out.WriteString("[]")
		// arguments can't be non-pointer optional, so just call into typeString again.
		a.ItemType.typeString(out)
	case MapType:
		out.WriteString("map[string]")
		a.ItemType.typeString(out)
	case RawType:
		out.WriteString("<raw>")
	}
}

// TypeString returns a string roughly equivalent
// (but not identical) to the underlying Go type that
// this argument would parse to. It's mainly useful
// for user-friendly formatting of this argument (e.g.
// help strings).
func (a Argument) TypeString() string {
	out := &strings.Builder{}
	a.typeString(out)
	return out.String()
}

func (a Argument) String() string {
	if a.Optional {
		return fmt.Sprintf("<optional arg %s>", a.TypeString())
	}
	return fmt.Sprintf("<arg %s>", a.TypeString())
}

// castAndSet casts val to out's type if needed,
// then sets out to val.
func castAndSet(out, val reflect.Value) {
	outType := out.Type()
	if outType != val.Type() {
		val = val.Convert(outType)
	}
	out.Set(val)
}

// makeSliceType makes a reflect.Type for a slice of the given type.
// Useful for constructing the out value for when AnyType's guess returns a slice.
func makeSliceType(itemType Argument) (reflect.Type, error) {
	var itemReflectedType reflect.Type
	switch itemType.Type {
	case IntType:
		itemReflectedType = reflect.TypeOf(int(0))
	case StringType:
		itemReflectedType = reflect.TypeOf("")
	case BoolType:
		itemReflectedType = reflect.TypeOf(false)
	case SliceType:
		subItemType, err := makeSliceType(*itemType.ItemType)
		if err != nil {
			return nil, err
		}
		itemReflectedType = subItemType
	case MapType:
		subItemType, err := makeMapType(*itemType.ItemType)
		if err != nil {
			return nil, err
		}
		itemReflectedType = subItemType
	// TODO(directxman12): support non-uniform slices? (probably not)
default:
|
||||
return nil, fmt.Errorf("invalid type when constructing guessed slice out: %v", itemType.Type)
|
||||
}
|
||||
|
||||
if itemType.Pointer {
|
||||
itemReflectedType = reflect.PtrTo(itemReflectedType)
|
||||
}
|
||||
|
||||
return reflect.SliceOf(itemReflectedType), nil
|
||||
}
|
||||
|
||||
// makeMapType makes a reflect.Type for a map of the given item type.
|
||||
// Useful for constructing the out value for when AnyType's guess returns a map.
|
||||
func makeMapType(itemType Argument) (reflect.Type, error) {
|
||||
var itemReflectedType reflect.Type
|
||||
switch itemType.Type {
|
||||
case IntType:
|
||||
itemReflectedType = reflect.TypeOf(int(0))
|
||||
case StringType:
|
||||
itemReflectedType = reflect.TypeOf("")
|
||||
case BoolType:
|
||||
itemReflectedType = reflect.TypeOf(false)
|
||||
case SliceType:
|
||||
subItemType, err := makeSliceType(*itemType.ItemType)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
itemReflectedType = subItemType
|
||||
// TODO(directxman12): support non-uniform slices? (probably not)
|
||||
case MapType:
|
||||
subItemType, err := makeMapType(*itemType.ItemType)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
itemReflectedType = subItemType
|
||||
case AnyType:
|
||||
// NB(directxman12): maps explicitly allow non-uniform item types, unlike slices at the moment
|
||||
itemReflectedType = interfaceType
|
||||
default:
|
||||
return nil, fmt.Errorf("invalid type when constructing guessed slice out: %v", itemType.Type)
|
||||
}
|
||||
|
||||
if itemType.Pointer {
|
||||
itemReflectedType = reflect.PtrTo(itemReflectedType)
|
||||
}
|
||||
|
||||
return reflect.MapOf(reflect.TypeOf(""), itemReflectedType), nil
|
||||
}
|

// guessType takes an educated guess about the type of the next field. If allowSlice
// is false, it will not guess slices. It's less efficient than parsing with actual
// type information, since we need to allocate to peek ahead full tokens, and the scanner
// only allows peeking ahead one character.
// Maps are *always* non-uniform (i.e. have the AnyType item type), since they're frequently
// used to represent things like defaults for an object in JSON.
func guessType(scanner *sc.Scanner, raw string, allowSlice bool) *Argument {
	if allowSlice {
		maybeItem := guessType(scanner, raw, false)

		subRaw := raw[scanner.Pos().Offset:]
		subScanner := parserScanner(subRaw, scanner.Error)

		var tok rune
		for tok = subScanner.Scan(); tok != ',' && tok != sc.EOF && tok != ';'; tok = subScanner.Scan() {
			// wait till we get something interesting
		}

		// semicolon means it's a legacy slice
		if tok == ';' {
			return &Argument{
				Type:     SliceType,
				ItemType: maybeItem,
			}
		}

		return maybeItem
	}

	// everything else needs a duplicate scanner to scan properly
	// (so we don't consume our scanner tokens until we actually
	// go to use this -- Go doesn't like scanners that can be rewound).
	subRaw := raw[scanner.Pos().Offset:]
	subScanner := parserScanner(subRaw, scanner.Error)

	// skip whitespace
	hint := peekNoSpace(subScanner)

	// first, try the easy case -- quoted strings
	switch hint {
	case '"', '\'', '`':
		return &Argument{Type: StringType}
	}

	// next, check for slices or maps
	if hint == '{' {
		subScanner.Scan()

		// TODO(directxman12): this can't guess at empty objects, but that's generally ok.
		// We'll cross that bridge when we get there.

		// look ahead till we can figure out if this is a map or a slice
		firstElemType := guessType(subScanner, subRaw, false)
		if firstElemType.Type == StringType {
			// might be a map or slice, parse the string and check for colon
			// (blech, basically arbitrary look-ahead due to raw strings).
			var keyVal string // just ignore this
			(&Argument{Type: StringType}).parseString(subScanner, raw, reflect.Indirect(reflect.ValueOf(&keyVal)))

			if subScanner.Scan() == ':' {
				// it's got a string followed by a colon -- it's a map
				return &Argument{
					Type:     MapType,
					ItemType: &Argument{Type: AnyType},
				}
			}
		}

		// definitely a slice -- maps have to have string keys and have a value followed by a colon
		return &Argument{
			Type:     SliceType,
			ItemType: firstElemType,
		}
	}

	// then, bools...
	probablyString := false
	if hint == 't' || hint == 'f' {
		// maybe a bool
		if nextTok := subScanner.Scan(); nextTok == sc.Ident {
			switch subScanner.TokenText() {
			case "true", "false":
				// definitely a bool
				return &Argument{Type: BoolType}
			}
			// probably a string
			probablyString = true
		} else {
			// we shouldn't ever get here
			scanner.Error(scanner, fmt.Sprintf("got a token (%q) that looked like an ident, but was not", scanner.TokenText()))
			return &Argument{Type: InvalidType}
		}
	}

	// then, integers...
	if !probablyString {
		nextTok := subScanner.Scan()
		if nextTok == '-' {
			nextTok = subScanner.Scan()
		}
		if nextTok == sc.Int {
			return &Argument{Type: IntType}
		}
	}

	// otherwise assume bare strings
	return &Argument{Type: StringType}
}
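The peek-based heuristic in `guessType` can be illustrated with a small standalone sketch (an illustration only, not the vendored API: `guessKind` is a hypothetical helper that collapses slices, maps, and legacy `;`-separated forms into a single label, and returns type names as plain strings):

```go
package main

import (
	"fmt"
	"strings"
	sc "text/scanner"
)

// guessKind peeks at the first rune of a marker argument literal to
// classify it, mirroring the heuristic in guessType: quotes mean string,
// '{' means a composite, 't'/'f' might be a bool, digits mean int, and
// anything else falls back to a bare string.
func guessKind(raw string) string {
	s := &sc.Scanner{}
	s.Init(strings.NewReader(raw))
	s.Mode = sc.ScanIdents | sc.ScanInts | sc.ScanStrings | sc.ScanRawStrings
	switch hint := s.Peek(); {
	case hint == '"' || hint == '\'' || hint == '`':
		return "string" // quoted strings are the easy case
	case hint == '{':
		return "slice or map" // delimited composite literal
	case hint == 't' || hint == 'f':
		if tok := s.Scan(); tok == sc.Ident {
			if txt := s.TokenText(); txt == "true" || txt == "false" {
				return "bool"
			}
		}
		return "string" // a bare token that merely started with t/f
	default:
		tok := s.Scan()
		if tok == '-' {
			tok = s.Scan() // allow a leading minus sign
		}
		if tok == sc.Int {
			return "int"
		}
		return "string" // otherwise assume a bare string
	}
}

func main() {
	for _, in := range []string{`"hi"`, "true", "-42", "{1, 2}", "forty"} {
		fmt.Printf("%-8s => %s\n", in, guessKind(in))
	}
}
```

Note the same ambiguity the real parser faces: `forty` starts with `f` but is not a bool, so the heuristic must scan the whole ident before falling back to string.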

// parseString parses either of the two accepted string forms (quoted, or bare tokens).
func (a *Argument) parseString(scanner *sc.Scanner, raw string, out reflect.Value) {
	// strings are a bit weird -- the "easy" case is quoted strings (tokenized as strings),
	// the "hard" case (present for backwards compat) is a bare sequence of tokens that aren't
	// a comma.
	tok := scanner.Scan()
	if tok == sc.String || tok == sc.RawString {
		// the easy case
		val, err := strconv.Unquote(scanner.TokenText())
		if err != nil {
			scanner.Error(scanner, fmt.Sprintf("unable to parse string: %v", err))
			return
		}
		castAndSet(out, reflect.ValueOf(val))
		return
	}

	// the "hard" case -- bare tokens not including ',' (the argument
	// separator), ';' (the slice separator), ':' (the map separator), or '}'
	// (delimited slice ender)
	startPos := scanner.Position.Offset
	for hint := peekNoSpace(scanner); hint != ',' && hint != ';' && hint != ':' && hint != '}' && hint != sc.EOF; hint = peekNoSpace(scanner) {
		// skip this token
		scanner.Scan()
	}
	endPos := scanner.Position.Offset + len(scanner.TokenText())
	castAndSet(out, reflect.ValueOf(raw[startPos:endPos]))
}

// parseSlice parses either of the two slice forms (curly-brace-delimited and semicolon-separated).
func (a *Argument) parseSlice(scanner *sc.Scanner, raw string, out reflect.Value) {
	// slices have two supported formats, like string:
	// - `{val, val, val}` (preferred)
	// - `val;val;val` (legacy)
	resSlice := reflect.Zero(out.Type())
	elem := reflect.Indirect(reflect.New(out.Type().Elem()))

	// preferred case
	if peekNoSpace(scanner) == '{' {
		// NB(directxman12): supporting delimited slices in bare slices
		// would require an extra look-ahead here :-/

		scanner.Scan() // skip '{'
		for hint := peekNoSpace(scanner); hint != '}' && hint != sc.EOF; hint = peekNoSpace(scanner) {
			a.ItemType.parse(scanner, raw, elem, true /* parsing a slice */)
			resSlice = reflect.Append(resSlice, elem)
			tok := peekNoSpace(scanner)
			if tok == '}' {
				break
			}
			if !expect(scanner, ',', "comma") {
				return
			}
		}
		if !expect(scanner, '}', "close curly brace") {
			return
		}
		castAndSet(out, resSlice)
		return
	}

	// legacy case
	for hint := peekNoSpace(scanner); hint != ',' && hint != '}' && hint != sc.EOF; hint = peekNoSpace(scanner) {
		a.ItemType.parse(scanner, raw, elem, true /* parsing a slice */)
		resSlice = reflect.Append(resSlice, elem)
		tok := peekNoSpace(scanner)
		if tok == ',' || tok == '}' || tok == sc.EOF {
			break
		}
		scanner.Scan()
		if tok != ';' {
			scanner.Error(scanner, fmt.Sprintf("expected semicolon, got %q", scanner.TokenText()))
			return
		}
	}
	castAndSet(out, resSlice)
}

// parseMap parses a map of the form {string: val, string: val, string: val}
func (a *Argument) parseMap(scanner *sc.Scanner, raw string, out reflect.Value) {
	resMap := reflect.MakeMap(out.Type())
	elem := reflect.Indirect(reflect.New(out.Type().Elem()))
	key := reflect.Indirect(reflect.New(out.Type().Key()))

	if !expect(scanner, '{', "open curly brace") {
		return
	}

	for hint := peekNoSpace(scanner); hint != '}' && hint != sc.EOF; hint = peekNoSpace(scanner) {
		a.parseString(scanner, raw, key)
		if !expect(scanner, ':', "colon") {
			return
		}
		a.ItemType.parse(scanner, raw, elem, false /* not in a slice */)
		resMap.SetMapIndex(key, elem)

		if peekNoSpace(scanner) == '}' {
			break
		}
		if !expect(scanner, ',', "comma") {
			return
		}
	}

	if !expect(scanner, '}', "close curly brace") {
		return
	}

	castAndSet(out, resMap)
}

// parse functions like Parse, except that it allows passing down whether or not we're
// already in a slice, to avoid duplicate legacy slice detection for AnyType
func (a *Argument) parse(scanner *sc.Scanner, raw string, out reflect.Value, inSlice bool) {
	// nolint:gocyclo
	if a.Type == InvalidType {
		scanner.Error(scanner, "cannot parse invalid type")
		return
	}
	if a.Pointer {
		out.Set(reflect.New(out.Type().Elem()))
		out = reflect.Indirect(out)
	}
	switch a.Type {
	case RawType:
		// raw consumes everything else
		castAndSet(out, reflect.ValueOf(raw[scanner.Pos().Offset:]))
		// consume everything else
		for tok := scanner.Scan(); tok != sc.EOF; tok = scanner.Scan() {
		}
	case IntType:
		nextChar := scanner.Peek()
		isNegative := false
		if nextChar == '-' {
			isNegative = true
			scanner.Scan() // eat the '-'
		}
		if !expect(scanner, sc.Int, "integer") {
			return
		}
		// TODO(directxman12): respect the size when parsing
		text := scanner.TokenText()
		if isNegative {
			text = "-" + text
		}
		val, err := strconv.Atoi(text)
		if err != nil {
			scanner.Error(scanner, fmt.Sprintf("unable to parse integer: %v", err))
			return
		}
		castAndSet(out, reflect.ValueOf(val))
	case StringType:
		// strings are a bit weird -- the "easy" case is quoted strings (tokenized as strings),
		// the "hard" case (present for backwards compat) is a bare sequence of tokens that aren't
		// a comma.
		a.parseString(scanner, raw, out)
	case BoolType:
		if !expect(scanner, sc.Ident, "true or false") {
			return
		}
		switch scanner.TokenText() {
		case "true":
			castAndSet(out, reflect.ValueOf(true))
		case "false":
			castAndSet(out, reflect.ValueOf(false))
		default:
			scanner.Error(scanner, fmt.Sprintf("expected true or false, got %q", scanner.TokenText()))
			return
		}
	case AnyType:
		guessedType := guessType(scanner, raw, !inSlice)
		newOut := out

		// we need to be able to construct the right element types, below
		// in parse, so construct a concretely-typed value to use as "out"
		switch guessedType.Type {
		case SliceType:
			newType, err := makeSliceType(*guessedType.ItemType)
			if err != nil {
				scanner.Error(scanner, err.Error())
				return
			}
			newOut = reflect.Indirect(reflect.New(newType))
		case MapType:
			newType, err := makeMapType(*guessedType.ItemType)
			if err != nil {
				scanner.Error(scanner, err.Error())
				return
			}
			newOut = reflect.Indirect(reflect.New(newType))
		}
		if !newOut.CanSet() {
			panic("at the disco") // TODO(directxman12): this is left over from debugging -- it might need to be an error
		}
		guessedType.Parse(scanner, raw, newOut)
		castAndSet(out, newOut)
	case SliceType:
		// slices have two supported formats, like string:
		// - `{val, val, val}` (preferred)
		// - `val;val;val` (legacy)
		a.parseSlice(scanner, raw, out)
	case MapType:
		// maps are {string: val, string: val, string: val}
		a.parseMap(scanner, raw, out)
	}
}

// Parse attempts to consume the argument from the given scanner (based on the given
// raw input as well for collecting ranges of content), and places the output value
// in the given reflect.Value. Errors are reported via the given scanner.
func (a *Argument) Parse(scanner *sc.Scanner, raw string, out reflect.Value) {
	a.parse(scanner, raw, out, false)
}

// ArgumentFromType constructs an Argument by examining the given
// raw reflect.Type. It can construct arguments from the Go types
// corresponding to any of the types listed in ArgumentType.
func ArgumentFromType(rawType reflect.Type) (Argument, error) {
	if rawType == rawArgsType {
		return Argument{
			Type: RawType,
		}, nil
	}

	if rawType == interfaceType {
		return Argument{
			Type: AnyType,
		}, nil
	}

	arg := Argument{}
	if rawType.Kind() == reflect.Ptr {
		rawType = rawType.Elem()
		arg.Pointer = true
		arg.Optional = true
	}

	switch rawType.Kind() {
	case reflect.String:
		arg.Type = StringType
	case reflect.Int, reflect.Int32: // NB(directxman12): all ints in kubernetes are int32, so explicitly support that
		arg.Type = IntType
	case reflect.Bool:
		arg.Type = BoolType
	case reflect.Slice:
		arg.Type = SliceType
		itemType, err := ArgumentFromType(rawType.Elem())
		if err != nil {
			return Argument{}, fmt.Errorf("bad slice item type: %w", err)
		}
		arg.ItemType = &itemType
	case reflect.Map:
		arg.Type = MapType
		if rawType.Key().Kind() != reflect.String {
			return Argument{}, fmt.Errorf("bad map key type: map keys must be strings")
		}
		itemType, err := ArgumentFromType(rawType.Elem())
		if err != nil {
			return Argument{}, fmt.Errorf("bad map item type: %w", err)
		}
		arg.ItemType = &itemType
	default:
		return Argument{}, fmt.Errorf("type has unsupported kind %s", rawType.Kind())
	}

	return arg, nil
}

// TargetType describes which kind of node a given marker is associated with.
type TargetType int

const (
	// DescribesPackage indicates that a marker is associated with a package.
	DescribesPackage TargetType = iota
	// DescribesType indicates that a marker is associated with a type declaration.
	DescribesType
	// DescribesField indicates that a marker is associated with a struct field.
	DescribesField
)

func (t TargetType) String() string {
	switch t {
	case DescribesPackage:
		return "package"
	case DescribesType:
		return "type"
	case DescribesField:
		return "field"
	default:
		return "(unknown)"
	}
}

// Definition is a parsed definition of a marker.
type Definition struct {
	// Output is the deserialized Go type of the marker.
	Output reflect.Type
	// Name is the marker's name.
	Name string
	// Target indicates which kind of node this marker can be associated with.
	Target TargetType
	// Fields lists out the types of each field that this marker has, by
	// argument name as used in the marker (if the output type isn't a struct,
	// it'll have a single, blank field name). This only lists exported fields,
	// (as per reflection rules).
	Fields map[string]Argument
	// FieldNames maps argument names (as used in the marker) to struct field names
	// in the output type.
	FieldNames map[string]string
	// Strict indicates that this definition should error out when parsing if
	// not all non-optional fields were seen.
	Strict bool
}

// AnonymousField indicates that the definition has one field,
// (actually the original object), and thus the field
// doesn't get named as part of the name.
func (d *Definition) AnonymousField() bool {
	if len(d.Fields) != 1 {
		return false
	}
	_, hasAnonField := d.Fields[""]
	return hasAnonField
}

// Empty indicates that this definition has no fields.
func (d *Definition) Empty() bool {
	return len(d.Fields) == 0
}

// argumentInfo returns information about an argument field as the marker parser's field loader
// would see it. This can be useful if you have to interact with marker definition structs
// externally (e.g. at compile time).
func argumentInfo(fieldName string, tag reflect.StructTag) (argName string, optionalOpt bool) {
	argName = lowerCamelCase(fieldName)
	markerTag, tagSpecified := tag.Lookup("marker")
	markerTagParts := strings.Split(markerTag, ",")
	if tagSpecified && markerTagParts[0] != "" {
		// allow overriding to support legacy cases where we don't follow camelCase conventions
		argName = markerTagParts[0]
	}
	optionalOpt = false
	for _, tagOption := range markerTagParts[1:] {
		switch tagOption {
		case "optional":
			optionalOpt = true
		}
	}

	return argName, optionalOpt
}

// loadFields uses reflection to populate argument information from the Output type.
func (d *Definition) loadFields() error {
	if d.Fields == nil {
		d.Fields = make(map[string]Argument)
		d.FieldNames = make(map[string]string)
	}
	if d.Output.Kind() != reflect.Struct {
		// anonymous field type
		argType, err := ArgumentFromType(d.Output)
		if err != nil {
			return err
		}
		d.Fields[""] = argType
		d.FieldNames[""] = ""
		return nil
	}

	for i := 0; i < d.Output.NumField(); i++ {
		field := d.Output.Field(i)
		if field.PkgPath != "" {
			// as per the reflect package docs, PkgPath is empty for exported fields,
			// so a non-empty package path means a private field, which we should skip
			continue
		}
		argName, optionalOpt := argumentInfo(field.Name, field.Tag)

		argType, err := ArgumentFromType(field.Type)
		if err != nil {
			return fmt.Errorf("unable to extract type information for field %q: %w", field.Name, err)
		}

		if argType.Type == RawType {
			return fmt.Errorf("RawArguments must be the direct type of a marker, and not a field")
		}

		argType.Optional = optionalOpt || argType.Optional

		d.Fields[argName] = argType
		d.FieldNames[argName] = field.Name
	}

	return nil
}

// parserScanner makes a new scanner appropriate for use in parsing definitions and arguments.
func parserScanner(raw string, err func(*sc.Scanner, string)) *sc.Scanner {
	scanner := &sc.Scanner{}
	scanner.Init(bytes.NewBufferString(raw))
	scanner.Mode = sc.ScanIdents | sc.ScanInts | sc.ScanStrings | sc.ScanRawStrings | sc.SkipComments
	scanner.Error = err

	return scanner
}

// Parse uses the type information in this Definition to parse the given
// raw marker in the form `+a:b:c=arg,d=arg` into an output object of the
// type specified in the definition.
func (d *Definition) Parse(rawMarker string) (interface{}, error) {
	name, anonName, fields := splitMarker(rawMarker)

	out := reflect.Indirect(reflect.New(d.Output))

	// if we're not a struct or have no arguments, treat the full `a:b:c` as the name,
	// otherwise, treat `c` as a field name, and `a:b` as the marker name.
	if !d.AnonymousField() && !d.Empty() && len(anonName) >= len(name)+1 {
		fields = anonName[len(name)+1:] + "=" + fields
	}

	var errs []error
	scanner := parserScanner(fields, func(scanner *sc.Scanner, msg string) {
		errs = append(errs, &ScannerError{Msg: msg, Pos: scanner.Position})
	})

	// TODO(directxman12): strict parsing where we error out if certain fields aren't optional
	seen := make(map[string]struct{}, len(d.Fields))
	if d.AnonymousField() && scanner.Peek() != sc.EOF {
		// might still be a struct that something fiddled with, so double check
		structFieldName := d.FieldNames[""]
		outTarget := out
		if structFieldName != "" {
			// it's a struct field mapped to an anonymous marker
			outTarget = out.FieldByName(structFieldName)
			if !outTarget.CanSet() {
				scanner.Error(scanner, fmt.Sprintf("cannot set field %q (might not exist)", structFieldName))
				return out.Interface(), loader.MaybeErrList(errs)
			}
		}

		// no need for trying to parse field names if we're not a struct
		field := d.Fields[""]
		field.Parse(scanner, fields, outTarget)
		seen[""] = struct{}{} // mark as seen for strict definitions
	} else if !d.Empty() && scanner.Peek() != sc.EOF {
		// if we expect *and* actually have arguments passed
		for {
			// parse the argument name
			if !expect(scanner, sc.Ident, "argument name") {
				break
			}
			argName := scanner.TokenText()
			if !expect(scanner, '=', "equals") {
				break
			}

			// make sure we know the field
			fieldName, known := d.FieldNames[argName]
			if !known {
				scanner.Error(scanner, fmt.Sprintf("unknown argument %q", argName))
				break
			}
			fieldType, known := d.Fields[argName]
			if !known {
				scanner.Error(scanner, fmt.Sprintf("unknown argument %q", argName))
				break
			}
			seen[argName] = struct{}{} // mark as seen for strict definitions

			// parse the field value
			fieldVal := out.FieldByName(fieldName)
			if !fieldVal.CanSet() {
				scanner.Error(scanner, fmt.Sprintf("cannot set field %q (might not exist)", fieldName))
				break
			}
			fieldType.Parse(scanner, fields, fieldVal)

			if len(errs) > 0 {
				break
			}

			if scanner.Peek() == sc.EOF {
				break
			}
			if !expect(scanner, ',', "comma") {
				break
			}
		}
	}

	if tok := scanner.Scan(); tok != sc.EOF {
		scanner.Error(scanner, fmt.Sprintf("extra arguments provided: %q", fields[scanner.Position.Offset:]))
	}

	if d.Strict {
		for argName, arg := range d.Fields {
			if _, wasSeen := seen[argName]; !wasSeen && !arg.Optional {
				scanner.Error(scanner, fmt.Sprintf("missing argument %q", argName))
			}
		}
	}

	return out.Interface(), loader.MaybeErrList(errs)
}

// MakeDefinition constructs a definition from a name, target, and the output type.
// All such definitions are strict by default. If a struct is passed as the output
// type, its public fields will automatically be populated into Fields (and similar
// fields in Definition). Other values will have a single, empty-string-named Fields
// entry.
func MakeDefinition(name string, target TargetType, output interface{}) (*Definition, error) {
	def := &Definition{
		Name:   name,
		Target: target,
		Output: reflect.TypeOf(output),
		Strict: true,
	}

	if err := def.loadFields(); err != nil {
		return nil, err
	}

	return def, nil
}

// MakeAnyTypeDefinition constructs a definition for an output struct with a
// field named `Value` of type `interface{}`. The argument to the marker will
// be parsed as AnyType and assigned to the field named `Value`.
func MakeAnyTypeDefinition(name string, target TargetType, output interface{}) (*Definition, error) {
	defn, err := MakeDefinition(name, target, output)
	if err != nil {
		return nil, err
	}
	defn.FieldNames = map[string]string{"": "Value"}
	defn.Fields = map[string]Argument{"": defn.Fields["value"]}
	return defn, nil
}

// splitMarker takes a marker in the form of `+a:b:c=arg,d=arg` and splits it
// into the name (`a:b`), the name if it's not a struct (`a:b:c`), and the parts
// that are definitely fields (`arg,d=arg`).
func splitMarker(raw string) (name string, anonymousName string, restFields string) {
	raw = raw[1:] // get rid of the leading '+'
	nameFieldParts := strings.SplitN(raw, "=", 2)
	if len(nameFieldParts) == 1 {
		return nameFieldParts[0], nameFieldParts[0], ""
	}
	anonymousName = nameFieldParts[0]
	name = anonymousName
	restFields = nameFieldParts[1]

	nameParts := strings.Split(name, ":")
	if len(nameParts) > 1 {
		name = strings.Join(nameParts[:len(nameParts)-1], ":")
	}
	return name, anonymousName, restFields
}

// ScannerError is an error with an associated position in the scanned input.
type ScannerError struct {
	Msg string
	Pos sc.Position
}

func (e *ScannerError) Error() string {
	return fmt.Sprintf("%s (at %s)", e.Msg, e.Pos)
}
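The `splitMarker` behavior is easiest to see on a concrete marker. Below is a standalone sketch that reproduces the splitting logic from the function above so it can run without the package (the `+k8s:deepcopy-gen:interfaces` marker is the example from the code's own comments):

```go
package main

import (
	"fmt"
	"strings"
)

// splitMarker mirrors the helper above: given `+a:b:c=arg,d=arg`, it
// returns the struct-marker name (`a:b`), the anonymous-marker name
// (`a:b:c`), and the raw field section (`arg,d=arg`).
func splitMarker(raw string) (name, anonymousName, restFields string) {
	raw = raw[1:] // drop the leading '+'
	parts := strings.SplitN(raw, "=", 2)
	if len(parts) == 1 {
		// no '=': the whole thing is the name in both interpretations
		return parts[0], parts[0], ""
	}
	anonymousName = parts[0]
	name = anonymousName
	restFields = parts[1]
	// for the struct interpretation, the last ':'-segment is a field name
	if nameParts := strings.Split(name, ":"); len(nameParts) > 1 {
		name = strings.Join(nameParts[:len(nameParts)-1], ":")
	}
	return name, anonymousName, restFields
}

func main() {
	name, anon, rest := splitMarker("+k8s:deepcopy-gen:interfaces=foo")
	fmt.Println(name, "|", anon, "|", rest)
	// prints: k8s:deepcopy-gen | k8s:deepcopy-gen:interfaces | foo
}
```

This two-way split is what lets `Definition.Parse` treat `interfaces` either as part of an anonymous marker's name or as the first field of a struct marker, depending on the registered definition.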
153
vendor/sigs.k8s.io/controller-tools/pkg/markers/reg.go
generated
vendored
Normal file
@@ -0,0 +1,153 @@
||||
/*
|
||||
Copyright 2019 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package markers
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// Registry keeps track of registered definitions, and allows for easy lookup.
|
||||
// It's thread-safe, and the zero-value can be safely used.
|
||||
type Registry struct {
|
||||
forPkg map[string]*Definition
|
||||
forType map[string]*Definition
|
||||
forField map[string]*Definition
|
||||
helpFor map[*Definition]*DefinitionHelp
|
||||
|
||||
mu sync.RWMutex
|
||||
initOnce sync.Once
|
||||
}
|
||||
|
||||
func (r *Registry) init() {
|
||||
r.initOnce.Do(func() {
|
||||
if r.forPkg == nil {
|
||||
r.forPkg = make(map[string]*Definition)
|
||||
}
|
||||
if r.forType == nil {
|
||||
r.forType = make(map[string]*Definition)
|
||||
}
|
||||
if r.forField == nil {
|
||||
r.forField = make(map[string]*Definition)
|
||||
}
|
||||
if r.helpFor == nil {
|
||||
r.helpFor = make(map[*Definition]*DefinitionHelp)
|
||||
}
|
||||
})
|
||||
}

// Define defines a new marker with the given name, target, and output type.
// It's a shortcut around
// r.Register(MakeDefinition(name, target, obj))
func (r *Registry) Define(name string, target TargetType, obj interface{}) error {
	def, err := MakeDefinition(name, target, obj)
	if err != nil {
		return err
	}
	return r.Register(def)
}

// Register registers the given marker definition with this registry for later lookup.
func (r *Registry) Register(def *Definition) error {
	r.init()

	r.mu.Lock()
	defer r.mu.Unlock()

	switch def.Target {
	case DescribesPackage:
		r.forPkg[def.Name] = def
	case DescribesType:
		r.forType[def.Name] = def
	case DescribesField:
		r.forField[def.Name] = def
	default:
		return fmt.Errorf("unknown target type %v", def.Target)
	}
	return nil
}

// AddHelp stores the given help in the registry, marking it as associated with
// the given definition.
func (r *Registry) AddHelp(def *Definition, help *DefinitionHelp) {
	r.init()

	r.mu.Lock()
	defer r.mu.Unlock()

	r.helpFor[def] = help
}

// Lookup fetches the definition corresponding to the given name and target type.
func (r *Registry) Lookup(name string, target TargetType) *Definition {
	r.init()

	r.mu.RLock()
	defer r.mu.RUnlock()

	switch target {
	case DescribesPackage:
		return tryAnonLookup(name, r.forPkg)
	case DescribesType:
		return tryAnonLookup(name, r.forType)
	case DescribesField:
		return tryAnonLookup(name, r.forField)
	default:
		return nil
	}
}

// HelpFor fetches the help for a given definition, if present.
func (r *Registry) HelpFor(def *Definition) *DefinitionHelp {
	r.init()

	r.mu.RLock()
	defer r.mu.RUnlock()

	return r.helpFor[def]
}

// AllDefinitions returns all marker definitions known to this registry.
func (r *Registry) AllDefinitions() []*Definition {
	res := make([]*Definition, 0, len(r.forPkg)+len(r.forType)+len(r.forField))
	for _, def := range r.forPkg {
		res = append(res, def)
	}
	for _, def := range r.forType {
		res = append(res, def)
	}
	for _, def := range r.forField {
		res = append(res, def)
	}
	return res
}

// tryAnonLookup tries looking up the given marker as both a struct-based
// marker and an anonymous marker, returning whichever format matches first,
// preferring the longer (anonymous) name in case of conflicts.
func tryAnonLookup(name string, defs map[string]*Definition) *Definition {
	// NB(directxman12): we look up anonymous names first to work with
	// legacy style marker definitions that have a namespaced approach
	// (e.g. deepcopy-gen, which uses `+k8s:deepcopy-gen=foo,bar` *and*
	// `+k8s.io:deepcopy-gen:interfaces=foo`).
	name, anonName, _ := splitMarker(name)
	if def, exists := defs[anonName]; exists {
		return def
	}

	return defs[name]
}
36
vendor/sigs.k8s.io/controller-tools/pkg/markers/regutil.go
generated
vendored
Normal file
@@ -0,0 +1,36 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

// Must panics on errors creating definitions.
func Must(def *Definition, err error) *Definition {
	if err != nil {
		panic(err)
	}
	return def
}

// RegisterAll attempts to register all definitions against the given registry,
// stopping and returning if an error occurs.
func RegisterAll(reg *Registry, defs ...*Definition) error {
	for _, def := range defs {
		if err := reg.Register(def); err != nil {
			return err
		}
	}
	return nil
}
191
vendor/sigs.k8s.io/controller-tools/pkg/markers/zip.go
generated
vendored
Normal file
@@ -0,0 +1,191 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package markers

import (
	"go/ast"
	"go/token"
	"reflect"
	"strings"

	"sigs.k8s.io/controller-tools/pkg/loader"
)

// extractDoc extracts documentation from the given node, skipping markers
// in the godoc and falling back to the decl if necessary (for single-line decls).
func extractDoc(node ast.Node, decl *ast.GenDecl) string {
	var docs *ast.CommentGroup
	switch docced := node.(type) {
	case *ast.Field:
		docs = docced.Doc
	case *ast.File:
		docs = docced.Doc
	case *ast.GenDecl:
		docs = docced.Doc
	case *ast.TypeSpec:
		docs = docced.Doc
		// type Ident expr expressions get docs attached to the decl,
		// so check for that case (missing Lparen == single line type decl)
		if docs == nil && decl.Lparen == token.NoPos {
			docs = decl.Doc
		}
	}

	if docs == nil {
		return ""
	}

	// filter out markers
	var outGroup ast.CommentGroup
	outGroup.List = make([]*ast.Comment, 0, len(docs.List))
	for _, comment := range docs.List {
		if isMarkerComment(comment.Text) {
			continue
		}
		outGroup.List = append(outGroup.List, comment)
	}

	// split lines, and re-join together as a single
	// paragraph, respecting double-newlines as
	// paragraph markers.
	outLines := strings.Split(outGroup.Text(), "\n")
	if outLines[len(outLines)-1] == "" {
		// chop off the extraneous last part
		outLines = outLines[:len(outLines)-1]
	}
	// respect double-newline meaning actual newline
	for i, line := range outLines {
		if line == "" {
			outLines[i] = "\n"
		}
	}
	return strings.Join(outLines, " ")
}

// PackageMarkers collects all the package-level marker values for the given package.
func PackageMarkers(col *Collector, pkg *loader.Package) (MarkerValues, error) {
	markers, err := col.MarkersInPackage(pkg)
	if err != nil {
		return nil, err
	}
	res := make(MarkerValues)
	for _, file := range pkg.Syntax {
		fileMarkers := markers[file]
		for name, vals := range fileMarkers {
			res[name] = append(res[name], vals...)
		}
	}

	return res, nil
}

// FieldInfo contains marker values and commonly used information for a struct field.
type FieldInfo struct {
	// Name is the name of the field (or "" for embedded fields)
	Name string
	// Doc is the Godoc of the field, pre-processed to remove markers and join
	// single newlines together.
	Doc string
	// Tag struct tag associated with this field (or "" if none existed).
	Tag reflect.StructTag

	// Markers are all registered markers associated with this field.
	Markers MarkerValues

	// RawField is the raw, underlying field AST object that this field represents.
	RawField *ast.Field
}

// TypeInfo contains marker values and commonly used information for a type declaration.
type TypeInfo struct {
	// Name is the name of the type.
	Name string
	// Doc is the Godoc of the type, pre-processed to remove markers and join
	// single newlines together.
	Doc string

	// Markers are all registered markers associated with the type.
	Markers MarkerValues

	// Fields are all the fields associated with the type, if it's a struct.
	// (if not, Fields will be nil).
	Fields []FieldInfo

	// RawDecl contains the raw GenDecl that the type was declared as part of.
	RawDecl *ast.GenDecl
	// RawSpec contains the raw Spec that declared this type.
	RawSpec *ast.TypeSpec
	// RawFile contains the file in which this type was declared.
	RawFile *ast.File
}

// TypeCallback is a callback called for each type declaration in a package.
type TypeCallback func(info *TypeInfo)

// EachType collects all markers, then calls the given callback for each type declaration in a package.
// Each individual spec is considered separate, so
//
//  type (
//      Foo string
//      Bar int
//      Baz struct{}
//  )
//
// yields three calls to the callback.
func EachType(col *Collector, pkg *loader.Package, cb TypeCallback) error {
	markers, err := col.MarkersInPackage(pkg)
	if err != nil {
		return err
	}

	loader.EachType(pkg, func(file *ast.File, decl *ast.GenDecl, spec *ast.TypeSpec) {
		var fields []FieldInfo
		if structSpec, isStruct := spec.Type.(*ast.StructType); isStruct {
			for _, field := range structSpec.Fields.List {
				for _, name := range field.Names {
					fields = append(fields, FieldInfo{
						Name:     name.Name,
						Doc:      extractDoc(field, nil),
						Tag:      loader.ParseAstTag(field.Tag),
						Markers:  markers[field],
						RawField: field,
					})
				}
				if field.Names == nil {
					fields = append(fields, FieldInfo{
						Doc:      extractDoc(field, nil),
						Tag:      loader.ParseAstTag(field.Tag),
						Markers:  markers[field],
						RawField: field,
					})
				}
			}
		}

		cb(&TypeInfo{
			Name:    spec.Name.Name,
			Markers: markers[spec],
			Doc:     extractDoc(spec, decl),
			Fields:  fields,
			RawDecl: decl,
			RawSpec: spec,
			RawFile: file,
		})
	})

	return nil
}
267
vendor/sigs.k8s.io/controller-tools/pkg/rbac/parser.go
generated
vendored
Normal file
@@ -0,0 +1,267 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package rbac contains libraries for generating RBAC manifests from RBAC
// markers in Go source files.
//
// The markers take the form:
//
//  +kubebuilder:rbac:groups=<groups>,resources=<resources>,resourceNames=<resource names>,verbs=<verbs>,urls=<non resource urls>
package rbac

import (
	"fmt"
	"sort"
	"strings"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	"sigs.k8s.io/controller-tools/pkg/genall"
	"sigs.k8s.io/controller-tools/pkg/markers"
)

var (
	// RuleDefinition is a marker for defining RBAC rules.
	// Call ToRule on the value to get a Kubernetes RBAC policy rule.
	RuleDefinition = markers.Must(markers.MakeDefinition("kubebuilder:rbac", markers.DescribesPackage, Rule{}))
)

// +controllertools:marker:generateHelp:category=RBAC

// Rule specifies an RBAC rule to allow access to some resources or non-resource URLs.
type Rule struct {
	// Groups specifies the API groups that this rule encompasses.
	Groups []string `marker:",optional"`
	// Resources specifies the API resources that this rule encompasses.
	Resources []string `marker:",optional"`
	// ResourceNames specifies the names of the API resources that this rule encompasses.
	//
	// Create requests cannot be restricted by resource name, as the object's name
	// is not known at authorization time.
	ResourceNames []string `marker:",optional"`
	// Verbs specifies the (lowercase) kubernetes API verbs that this rule encompasses.
	Verbs []string
	// URLs specifies the non-resource URLs that this rule encompasses.
	URLs []string `marker:"urls,optional"`
	// Namespace specifies the scope of the Rule.
	// If not set, the Rule belongs to the generated ClusterRole.
	// If set, the Rule belongs to a Role, whose namespace is specified by this field.
	Namespace string `marker:",optional"`
}

// ruleKey represents the resources and non-resources a Rule applies to.
type ruleKey struct {
	Groups        string
	Resources     string
	ResourceNames string
	URLs          string
}

func (key ruleKey) String() string {
	return fmt.Sprintf("%s + %s + %s + %s", key.Groups, key.Resources, key.ResourceNames, key.URLs)
}

// ruleKeys implements sort.Interface
type ruleKeys []ruleKey

func (keys ruleKeys) Len() int           { return len(keys) }
func (keys ruleKeys) Swap(i, j int)      { keys[i], keys[j] = keys[j], keys[i] }
func (keys ruleKeys) Less(i, j int) bool { return keys[i].String() < keys[j].String() }

// key normalizes the Rule and returns a ruleKey object.
func (r *Rule) key() ruleKey {
	r.normalize()
	return ruleKey{
		Groups:        strings.Join(r.Groups, "&"),
		Resources:     strings.Join(r.Resources, "&"),
		ResourceNames: strings.Join(r.ResourceNames, "&"),
		URLs:          strings.Join(r.URLs, "&"),
	}
}

// addVerbs adds new verbs into a Rule.
// The duplicates in `r.Verbs` will be removed, and then `r.Verbs` will be sorted.
func (r *Rule) addVerbs(verbs []string) {
	r.Verbs = removeDupAndSort(append(r.Verbs, verbs...))
}

// normalize removes duplicates from each field of a Rule, and sorts each field.
func (r *Rule) normalize() {
	r.Groups = removeDupAndSort(r.Groups)
	r.Resources = removeDupAndSort(r.Resources)
	r.ResourceNames = removeDupAndSort(r.ResourceNames)
	r.Verbs = removeDupAndSort(r.Verbs)
	r.URLs = removeDupAndSort(r.URLs)
}

// removeDupAndSort removes duplicates in strs, sorts the items, and returns a
// new slice of strings.
func removeDupAndSort(strs []string) []string {
	set := make(map[string]bool)
	for _, str := range strs {
		if _, ok := set[str]; !ok {
			set[str] = true
		}
	}

	var result []string
	for str := range set {
		result = append(result, str)
	}
	sort.Strings(result)
	return result
}

// ToRule converts this rule to its Kubernetes API form.
func (r *Rule) ToRule() rbacv1.PolicyRule {
	// fix the group names first, since letting people type "core" is nice
	for i, group := range r.Groups {
		if group == "core" {
			r.Groups[i] = ""
		}
	}
	return rbacv1.PolicyRule{
		APIGroups:       r.Groups,
		Verbs:           r.Verbs,
		Resources:       r.Resources,
		ResourceNames:   r.ResourceNames,
		NonResourceURLs: r.URLs,
	}
}

// +controllertools:marker:generateHelp

// Generator generates ClusterRole objects.
type Generator struct {
	// RoleName sets the name of the generated ClusterRole.
	RoleName string
}

func (Generator) RegisterMarkers(into *markers.Registry) error {
	if err := into.Register(RuleDefinition); err != nil {
		return err
	}
	into.AddHelp(RuleDefinition, Rule{}.Help())
	return nil
}

// GenerateRoles generates a slice of objs representing either a ClusterRole or a Role object.
// The order of the objs in the returned slice is stable and determined by their namespaces.
func GenerateRoles(ctx *genall.GenerationContext, roleName string) ([]interface{}, error) {
	rulesByNS := make(map[string][]*Rule)
	for _, root := range ctx.Roots {
		markerSet, err := markers.PackageMarkers(ctx.Collector, root)
		if err != nil {
			root.AddError(err)
		}

		// group RBAC markers by namespace
		for _, markerValue := range markerSet[RuleDefinition.Name] {
			rule := markerValue.(Rule)
			namespace := rule.Namespace
			if _, ok := rulesByNS[namespace]; !ok {
				rules := make([]*Rule, 0)
				rulesByNS[namespace] = rules
			}
			rulesByNS[namespace] = append(rulesByNS[namespace], &rule)
		}
	}

	// NormalizeRules merges Rules with the same ruleKey and sorts the result
	NormalizeRules := func(rules []*Rule) []rbacv1.PolicyRule {
		ruleMap := make(map[ruleKey]*Rule)
		// all the Rules having the same ruleKey will be merged into the first Rule
		for _, rule := range rules {
			key := rule.key()
			if _, ok := ruleMap[key]; !ok {
				ruleMap[key] = rule
				continue
			}
			ruleMap[key].addVerbs(rule.Verbs)
		}

		// sort the Rules in rules according to their ruleKeys
		keys := make([]ruleKey, 0, len(ruleMap))
		for key := range ruleMap {
			keys = append(keys, key)
		}
		sort.Sort(ruleKeys(keys))

		var policyRules []rbacv1.PolicyRule
		for _, key := range keys {
			policyRules = append(policyRules, ruleMap[key].ToRule())
		}
		return policyRules
	}

	// collect all the namespaces and sort them
	var namespaces []string
	for ns := range rulesByNS {
		namespaces = append(namespaces, ns)
	}
	sort.Strings(namespaces)

	// process the items in rulesByNS by the order specified in `namespaces` to make sure that the Role order is stable
	var objs []interface{}
	for _, ns := range namespaces {
		rules := rulesByNS[ns]
		policyRules := NormalizeRules(rules)
		if len(policyRules) == 0 {
			continue
		}
		if ns == "" {
			objs = append(objs, rbacv1.ClusterRole{
				TypeMeta: metav1.TypeMeta{
					Kind:       "ClusterRole",
					APIVersion: rbacv1.SchemeGroupVersion.String(),
				},
				ObjectMeta: metav1.ObjectMeta{
					Name: roleName,
				},
				Rules: policyRules,
			})
		} else {
			objs = append(objs, rbacv1.Role{
				TypeMeta: metav1.TypeMeta{
					Kind:       "Role",
					APIVersion: rbacv1.SchemeGroupVersion.String(),
				},
				ObjectMeta: metav1.ObjectMeta{
					Name:      roleName,
					Namespace: ns,
				},
				Rules: policyRules,
			})
		}
	}

	return objs, nil
}

func (g Generator) Generate(ctx *genall.GenerationContext) error {
	objs, err := GenerateRoles(ctx, g.RoleName)
	if err != nil {
		return err
	}

	if len(objs) == 0 {
		return nil
	}

	return ctx.WriteYAML("role.yaml", objs...)
}
77
vendor/sigs.k8s.io/controller-tools/pkg/rbac/zz_generated.markerhelp.go
generated
vendored
Normal file
@@ -0,0 +1,77 @@
// +build !ignore_autogenerated

/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by helpgen. DO NOT EDIT.

package rbac

import (
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func (Generator) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "generates ClusterRole objects.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"RoleName": {
				Summary: "sets the name of the generated ClusterRole.",
				Details: "",
			},
		},
	}
}

func (Rule) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "RBAC",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies an RBAC rule to allow access to some resources or non-resource URLs.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"Groups": {
				Summary: "specifies the API groups that this rule encompasses.",
				Details: "",
			},
			"Resources": {
				Summary: "specifies the API resources that this rule encompasses.",
				Details: "",
			},
			"ResourceNames": {
				Summary: "specifies the names of the API resources that this rule encompasses. ",
				Details: "Create requests cannot be restricted by resource name, as the object's name is not known at authorization time.",
			},
			"Verbs": {
				Summary: "specifies the (lowercase) kubernetes API verbs that this rule encompasses.",
				Details: "",
			},
			"URLs": {
				Summary: "specifies the non-resource URLs that this rule encompasses.",
				Details: "",
			},
			"Namespace": {
				Summary: "specifies the scope of the Rule. If not set, the Rule belongs to the generated ClusterRole. If set, the Rule belongs to a Role, whose namespace is specified by this field.",
				Details: "",
			},
		},
	}
}
524
vendor/sigs.k8s.io/controller-tools/pkg/schemapatcher/gen.go
generated
vendored
Normal file
@@ -0,0 +1,524 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package schemapatcher

import (
	"fmt"
	"io/ioutil"
	"path/filepath"

	"gopkg.in/yaml.v3"
	apiext "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextlegacy "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/api/equality"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	kyaml "sigs.k8s.io/yaml"

	crdgen "sigs.k8s.io/controller-tools/pkg/crd"
	crdmarkers "sigs.k8s.io/controller-tools/pkg/crd/markers"
	"sigs.k8s.io/controller-tools/pkg/genall"
	"sigs.k8s.io/controller-tools/pkg/loader"
	"sigs.k8s.io/controller-tools/pkg/markers"
	yamlop "sigs.k8s.io/controller-tools/pkg/schemapatcher/internal/yaml"
)

// NB(directxman12): this code is quite fragile, but there are a sufficient
// number of corner cases that it's hard to decompose into separate tools.
// When in doubt, ping @sttts.
//
// Namely:
// - It needs to only update existing versions
// - It needs to make "stable" changes that don't mess with map key ordering
//   (in order to facilitate validating that no change has occurred)
// - It needs to collapse identical schema versions into a top-level schema,
//   if all versions are identical (this is a common requirement to all CRDs,
//   but in this case it means simple jsonpatch wouldn't suffice)

// TODO(directxman12): When CRD v1 rolls around, consider splitting this into a
// tool that generates a patch, and a separate tool for applying stable YAML
// patches.

var (
	legacyAPIExtVersion  = apiextlegacy.SchemeGroupVersion.String()
	currentAPIExtVersion = apiext.SchemeGroupVersion.String()
)

// +controllertools:marker:generateHelp

// Generator patches existing CRDs with new schemata.
//
// For legacy (v1beta1) single-version CRDs, it will simply replace the global schema.
//
// For legacy (v1beta1) multi-version CRDs, and any v1 CRDs, it will replace
// schemata of existing versions and *clear the schema* from any versions not
// specified in the Go code. It will *not* add new versions, or remove old
// ones.
//
// For legacy multi-version CRDs with identical schemata, it will take care of
// lifting the per-version schema up to the global schema.
//
// It will generate output for each "CRD Version" (API version of the CRD type
// itself) available, e.g. apiextensions/v1beta1 and apiextensions/v1.
type Generator struct {
	// ManifestsPath contains the CustomResourceDefinition YAML files.
	ManifestsPath string `marker:"manifests"`

	// MaxDescLen specifies the maximum description length for fields in CRD's OpenAPI schema.
	//
	// 0 indicates drop the description for all fields completely.
	// n indicates limit the description to at most n characters and truncate the description to
	// closest sentence boundary if it exceeds n characters.
	MaxDescLen *int `marker:",optional"`

	// GenerateEmbeddedObjectMeta specifies if any embedded ObjectMeta in the CRD should be generated
	GenerateEmbeddedObjectMeta *bool `marker:",optional"`
}

var _ genall.Generator = &Generator{}

func (Generator) CheckFilter() loader.NodeFilter {
	return crdgen.Generator{}.CheckFilter()
}

func (Generator) RegisterMarkers(into *markers.Registry) error {
	return crdmarkers.Register(into)
}

func (g Generator) Generate(ctx *genall.GenerationContext) (result error) {
	parser := &crdgen.Parser{
		Collector: ctx.Collector,
		Checker:   ctx.Checker,
		// Tells the parser whether to register the ObjectMeta type or not
		GenerateEmbeddedObjectMeta: g.GenerateEmbeddedObjectMeta != nil && *g.GenerateEmbeddedObjectMeta,
	}

	crdgen.AddKnownTypes(parser)
	for _, root := range ctx.Roots {
		parser.NeedPackage(root)
	}

	metav1Pkg := crdgen.FindMetav1(ctx.Roots)
	if metav1Pkg == nil {
		// no objects in the roots, since nothing imported metav1
		return nil
	}

	// load existing CRD manifests with group-kind and versions
	partialCRDSets, err := crdsFromDirectory(ctx, g.ManifestsPath)
	if err != nil {
		return err
	}

	// generate schemata for the types we care about, and save them to be written later.
	for groupKind := range crdgen.FindKubeKinds(parser, metav1Pkg) {
		existingSet, wanted := partialCRDSets[groupKind]
		if !wanted {
			continue
		}

		for pkg, gv := range parser.GroupVersions {
			if gv.Group != groupKind.Group {
				continue
			}
			if _, wantedVersion := existingSet.Versions[gv.Version]; !wantedVersion {
				continue
			}

			typeIdent := crdgen.TypeIdent{Package: pkg, Name: groupKind.Kind}
			parser.NeedFlattenedSchemaFor(typeIdent)

			fullSchema := parser.FlattenedSchemata[typeIdent]
			if g.MaxDescLen != nil {
				fullSchema = *fullSchema.DeepCopy()
				crdgen.TruncateDescription(&fullSchema, *g.MaxDescLen)
			}

			// Fix top level ObjectMeta regardless of the settings.
			if _, ok := fullSchema.Properties["metadata"]; ok {
				fullSchema.Properties["metadata"] = apiext.JSONSchemaProps{Type: "object"}
			}

			existingSet.NewSchemata[gv.Version] = fullSchema
		}
	}

	// patch existing CRDs with new schemata
	for _, existingSet := range partialCRDSets {
		// first, figure out if we need to merge schemata together if they're *all*
		// identical (meaning we also don't have any "unset" versions)

		if len(existingSet.NewSchemata) == 0 {
			continue
		}

		// copy over the new versions that we have, keeping old versions so
		// that we can tell if a schema would be nil
		var someVer string
		for ver := range existingSet.NewSchemata {
			someVer = ver
			existingSet.Versions[ver] = struct{}{}
		}

		allSame := true
		firstSchema := existingSet.NewSchemata[someVer]
		for ver := range existingSet.Versions {
			otherSchema, hasSchema := existingSet.NewSchemata[ver]
			if !hasSchema || !equality.Semantic.DeepEqual(firstSchema, otherSchema) {
				allSame = false
				break
			}
		}

		if allSame {
			if err := existingSet.setGlobalSchema(); err != nil {
				return fmt.Errorf("failed to set global schema for %s: %w", existingSet.GroupKind, err)
			}
		} else {
			if err := existingSet.setVersionedSchemata(); err != nil {
				return fmt.Errorf("failed to set versioned schemas for %s: %w", existingSet.GroupKind, err)
			}
		}
	}

	// write the final result out to the new location
	for _, set := range partialCRDSets {
		// We assume all CRD versions came from different files, since this
		// is how controller-gen works. If they came from the same file,
		// it'd be non-sensical, since you couldn't reasonably use kubectl
		// with them against older servers.
		for _, crd := range set.CRDVersions {
			if err := func() error {
				outWriter, err := ctx.OutputRule.Open(nil, crd.FileName)
				if err != nil {
					return err
				}
				defer outWriter.Close()

				enc := yaml.NewEncoder(outWriter)
				// yaml.v2 defaults to indent=2, yaml.v3 defaults to indent=4,
				// so be compatible with everything else in k8s and choose 2.
				enc.SetIndent(2)

				return enc.Encode(crd.Yaml)
			}(); err != nil {
				return err
			}
		}
	}

	return nil
}
|
||||
|
||||
// partialCRDSet represents a set of CRDs of different apiext versions
|
||||
// (v1beta1.CRD vs v1.CRD) that represent the same GroupKind.
|
||||
//
|
||||
// It tracks modifications to the schemata of those CRDs from this source file,
|
||||
// plus some useful structured content, and keeps track of the raw YAML representation
|
||||
// of the different apiext versions.
|
||||
type partialCRDSet struct {
|
||||
// GroupKind is the GroupKind represented by this CRD.
|
||||
GroupKind schema.GroupKind
|
||||
// NewSchemata are the new schemata generated from Go IDL by controller-gen.
|
||||
NewSchemata map[string]apiext.JSONSchemaProps
|
||||
// CRDVersions are the forms of this CRD across different apiextensions
|
||||
// versions
|
||||
CRDVersions []*partialCRD
|
||||
// Versions are the versions of the given GroupKind in this set of CRDs.
|
||||
Versions map[string]struct{}
|
||||
}
|
||||
|
||||
// partialCRD represents the raw YAML encoding of a given CRD instance, plus
|
||||
// the versions contained therein for easy lookup.
|
||||
type partialCRD struct {
|
||||
// Yaml is the raw YAML structure of the CRD.
|
||||
Yaml *yaml.Node
|
||||
// FileName is the source name of the file that this was read from.
|
||||
//
|
||||
// This isn't on partialCRDSet because we could have different CRD versions
|
||||
// stored in the same file (like controller-tools does by default) or in
|
||||
// different files.
|
||||
FileName string
|
||||
|
||||
// CRDVersion is the version of the CRD object itself, from
|
||||
// apiextensions (currently apiextensions/v1 or apiextensions/v1beta1).
|
||||
CRDVersion string
|
||||
}
|
||||
|
||||
// setGlobalSchema sets the global schema for the v1beta1 apiext version in
|
||||
// this set (if present, as per partialCRD.setGlobalSchema), and sets the
|
||||
// versioned schemas (as per setVersionedSchemata) for the v1 version.
|
||||
func (e *partialCRDSet) setGlobalSchema() error {
|
||||
// there's no easy way to get a "random" key from a go map :-/
|
||||
var schema apiext.JSONSchemaProps
|
||||
for ver := range e.NewSchemata {
|
||||
schema = e.NewSchemata[ver]
|
||||
break
|
||||
}
|
||||
for _, crdInfo := range e.CRDVersions {
|
||||
switch crdInfo.CRDVersion {
|
||||
case legacyAPIExtVersion:
|
||||
if err := crdInfo.setGlobalSchema(schema); err != nil {
|
||||
return err
|
||||
}
|
||||
case currentAPIExtVersion:
|
||||
// just set the schemata as normal for non-legacy versions
|
||||
if err := crdInfo.setVersionedSchemata(e.NewSchemata); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// setGlobalSchema sets the global schema to one of the schemata
// for this CRD.  All schemata must be identical for this to be a valid operation.
func (e *partialCRD) setGlobalSchema(newSchema apiext.JSONSchemaProps) error {
	if e.CRDVersion != legacyAPIExtVersion {
		// no global schema, nothing to do
		return fmt.Errorf("cannot set global schema on non-legacy CRD versions")
	}
	schema, err := legacySchema(newSchema)
	if err != nil {
		return fmt.Errorf("failed to convert schema to legacy form: %w", err)
	}
	schemaNodeTree, err := yamlop.ToYAML(schema)
	if err != nil {
		return err
	}
	schemaNodeTree = schemaNodeTree.Content[0] // get rid of the document node
	yamlop.SetStyle(schemaNodeTree, 0)         // clear the style so it defaults to auto-style-choice

	if err := yamlop.SetNode(e.Yaml, *schemaNodeTree, "spec", "validation", "openAPIV3Schema"); err != nil {
		return err
	}

	versions, found, err := e.getVersionsNode()
	if err != nil {
		return err
	}
	if !found {
		return nil
	}
	for i, verNode := range versions.Content {
		if err := yamlop.DeleteNode(verNode, "schema"); err != nil {
			return fmt.Errorf("spec.versions[%d]: %w", i, err)
		}
	}

	return nil
}

// getVersionsNode gets the YAML node of the .spec.versions YAML mapping,
// returning the node and whether or not it was present.
func (e *partialCRD) getVersionsNode() (*yaml.Node, bool, error) {
	versions, found, err := yamlop.GetNode(e.Yaml, "spec", "versions")
	if err != nil {
		return nil, false, err
	}
	if !found {
		return nil, false, nil
	}
	if versions.Kind != yaml.SequenceNode {
		return nil, true, fmt.Errorf("unexpected non-sequence versions")
	}
	return versions, found, nil
}

// setVersionedSchemata sets the versioned schemata on each encoding in this set as per
// setVersionedSchemata on partialCRD.
func (e *partialCRDSet) setVersionedSchemata() error {
	for _, crdInfo := range e.CRDVersions {
		if err := crdInfo.setVersionedSchemata(e.NewSchemata); err != nil {
			return err
		}
	}
	return nil
}

// setVersionedSchemata populates all existing versions with new schemata,
// wiping the schema of any version that doesn't have a listed schema.
// Any "unknown" versions are ignored.
func (e *partialCRD) setVersionedSchemata(newSchemata map[string]apiext.JSONSchemaProps) error {
	var err error
	if err := yamlop.DeleteNode(e.Yaml, "spec", "validation"); err != nil {
		return err
	}

	versions, found, err := e.getVersionsNode()
	if err != nil {
		return err
	}
	if !found {
		return fmt.Errorf("unexpected missing versions")
	}

	for i, verNode := range versions.Content {
		nameNode, _, _ := yamlop.GetNode(verNode, "name")
		if nameNode.Kind != yaml.ScalarNode || nameNode.ShortTag() != "!!str" {
			return fmt.Errorf("version name was not a string at spec.versions[%d]", i)
		}
		name := nameNode.Value
		if name == "" {
			return fmt.Errorf("unexpected empty name at spec.versions[%d]", i)
		}
		newSchema, found := newSchemata[name]
		if !found {
			if err := yamlop.DeleteNode(verNode, "schema"); err != nil {
				return fmt.Errorf("spec.versions[%d]: %w", i, err)
			}
		} else {
			// TODO(directxman12): if this gets to be more than 2 versions, use polymorphism to clean this up
			var verSchema interface{} = newSchema
			if e.CRDVersion == legacyAPIExtVersion {
				verSchema, err = legacySchema(newSchema)
				if err != nil {
					return fmt.Errorf("failed to convert schema to legacy form: %w", err)
				}
			}

			schemaNodeTree, err := yamlop.ToYAML(verSchema)
			if err != nil {
				return fmt.Errorf("failed to convert schema to YAML: %w", err)
			}
			schemaNodeTree = schemaNodeTree.Content[0] // get rid of the document node
			yamlop.SetStyle(schemaNodeTree, 0)         // clear the style so it defaults to an auto-chosen one
			if err := yamlop.SetNode(verNode, *schemaNodeTree, "schema", "openAPIV3Schema"); err != nil {
				return fmt.Errorf("spec.versions[%d]: %w", i, err)
			}
		}
	}
	return nil
}

// crdsFromDirectory loads all CRDs from the given directory in a
// manner that preserves ordering, comments, etc in order to make patching
// minimally invasive.  Returned CRDs are mapped by group-kind.
func crdsFromDirectory(ctx *genall.GenerationContext, dir string) (map[schema.GroupKind]*partialCRDSet, error) {
	res := map[schema.GroupKind]*partialCRDSet{}
	dirEntries, err := ioutil.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, fileInfo := range dirEntries {
		// find all files that are YAML
		if fileInfo.IsDir() || filepath.Ext(fileInfo.Name()) != ".yaml" {
			continue
		}

		rawContent, err := ctx.ReadFile(filepath.Join(dir, fileInfo.Name()))
		if err != nil {
			return nil, err
		}

		// NB(directxman12): we could use the universal deserializer for this, but it's
		// really pretty clunky, and the alternative is actually kinda easier to understand

		// ensure that this is a CRD
		var typeMeta metav1.TypeMeta
		if err := kyaml.Unmarshal(rawContent, &typeMeta); err != nil {
			continue
		}
		if !isSupportedAPIExtGroupVer(typeMeta.APIVersion) || typeMeta.Kind != "CustomResourceDefinition" {
			continue
		}

		// collect the group-kind and versions from the actual structured form
		var actualCRD crdIsh
		if err := kyaml.Unmarshal(rawContent, &actualCRD); err != nil {
			continue
		}
		groupKind := schema.GroupKind{Group: actualCRD.Spec.Group, Kind: actualCRD.Spec.Names.Kind}
		var versions map[string]struct{}
		if len(actualCRD.Spec.Versions) == 0 {
			versions = map[string]struct{}{actualCRD.Spec.Version: {}}
		} else {
			versions = make(map[string]struct{}, len(actualCRD.Spec.Versions))
			for _, ver := range actualCRD.Spec.Versions {
				versions[ver.Name] = struct{}{}
			}
		}

		// then actually unmarshal in a manner that preserves ordering, etc
		var yamlNodeTree yaml.Node
		if err := yaml.Unmarshal(rawContent, &yamlNodeTree); err != nil {
			continue
		}

		// then store this CRDVersion of the CRD in a set, populating the set if necessary
		if res[groupKind] == nil {
			res[groupKind] = &partialCRDSet{
				GroupKind:   groupKind,
				NewSchemata: make(map[string]apiext.JSONSchemaProps),
				Versions:    make(map[string]struct{}),
			}
		}
		for ver := range versions {
			res[groupKind].Versions[ver] = struct{}{}
		}
		res[groupKind].CRDVersions = append(res[groupKind].CRDVersions, &partialCRD{
			Yaml:       &yamlNodeTree,
			FileName:   fileInfo.Name(),
			CRDVersion: typeMeta.APIVersion,
		})
	}
	return res, nil
}

// isSupportedAPIExtGroupVer checks if the given string-form group-version
// is one of the known apiextensions versions (v1, v1beta1).
func isSupportedAPIExtGroupVer(groupVer string) bool {
	return groupVer == currentAPIExtVersion || groupVer == legacyAPIExtVersion
}

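The gate above filters files by comparing their TypeMeta apiVersion against the two supported apiextensions group-versions. A minimal standalone sketch of that check; the two version strings are assumptions mirroring the package-level `currentAPIExtVersion`/`legacyAPIExtVersion` constants, which are defined elsewhere in this package:

```go
package main

import "fmt"

// Hypothetical stand-ins for the package-level constants used above.
const (
	currentAPIExtVersion = "apiextensions.k8s.io/v1"
	legacyAPIExtVersion  = "apiextensions.k8s.io/v1beta1"
)

// isSupportedAPIExtGroupVer mirrors the check in the vendored code:
// only the two known apiextensions group-versions pass the filter.
func isSupportedAPIExtGroupVer(groupVer string) bool {
	return groupVer == currentAPIExtVersion || groupVer == legacyAPIExtVersion
}

func main() {
	fmt.Println(isSupportedAPIExtGroupVer("apiextensions.k8s.io/v1")) // true
	fmt.Println(isSupportedAPIExtGroupVer("apps/v1"))                 // false
}
```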
// crdIsh is a merged blob of CRD fields that looks enough like all versions of
// CRD to extract the relevant information for partialCRDSet and partialCRD.
//
// We keep this separate so it's clear what info we need, and so we don't break
// when we switch canonical internal versions and lose old fields while gaining
// new ones (like in v1beta1 --> v1).
//
// Its use is tied directly to crdsFromDirectory, and is mostly an implementation detail of that.
type crdIsh struct {
	Spec struct {
		Group string `json:"group"`
		Names struct {
			Kind string `json:"kind"`
		} `json:"names"`
		Versions []struct {
			Name string `json:"name"`
		} `json:"versions"`
		Version string `json:"version"`
	} `json:"spec"`
}

// legacySchema jumps through some hoops to convert a v1 schema to a v1beta1 schema.
func legacySchema(origSchema apiext.JSONSchemaProps) (apiextlegacy.JSONSchemaProps, error) {
	shellCRD := apiext.CustomResourceDefinition{}
	shellCRD.APIVersion = currentAPIExtVersion
	shellCRD.Kind = "CustomResourceDefinition"
	shellCRD.Spec.Versions = []apiext.CustomResourceDefinitionVersion{
		{Schema: &apiext.CustomResourceValidation{OpenAPIV3Schema: origSchema.DeepCopy()}},
	}

	legacyCRD, err := crdgen.AsVersion(shellCRD, apiextlegacy.SchemeGroupVersion)
	if err != nil {
		return apiextlegacy.JSONSchemaProps{}, err
	}

	return *legacyCRD.(*apiextlegacy.CustomResourceDefinition).Spec.Validation.OpenAPIV3Schema, nil
}
61
vendor/sigs.k8s.io/controller-tools/pkg/schemapatcher/internal/yaml/convert.go
generated
vendored
Normal file
@@ -0,0 +1,61 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package yaml

import (
	"encoding/json"
	"fmt"

	"gopkg.in/yaml.v3"
)

// ToYAML converts some object that serializes to JSON into a YAML node tree.
// It's useful since it pays attention to JSON tags, unlike yaml.Unmarshal or
// yaml.Node.Decode.
func ToYAML(rawObj interface{}) (*yaml.Node, error) {
	if rawObj == nil {
		return &yaml.Node{Kind: yaml.ScalarNode, Value: "null", Tag: "!!null"}, nil
	}

	rawJSON, err := json.Marshal(rawObj)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal object: %w", err)
	}

	var out yaml.Node
	if err := yaml.Unmarshal(rawJSON, &out); err != nil {
		return nil, fmt.Errorf("unable to unmarshal marshalled object: %w", err)
	}
	return &out, nil
}

// changeAll calls the given callback for all nodes in
// the given YAML node tree.
func changeAll(root *yaml.Node, cb func(*yaml.Node)) {
	cb(root)
	for _, child := range root.Content {
		changeAll(child, cb)
	}
}

// SetStyle sets the style for all nodes in the given
// node tree to the given style.
func SetStyle(root *yaml.Node, style yaml.Style) {
	changeAll(root, func(node *yaml.Node) {
		node.Style = style
	})
}
87
vendor/sigs.k8s.io/controller-tools/pkg/schemapatcher/internal/yaml/nested.go
generated
vendored
Normal file
@@ -0,0 +1,87 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package yaml

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// ValueInMapping finds the value node with the corresponding string key
// in the given mapping node.  If the given node is not a mapping, an
// error will be returned.
func ValueInMapping(root *yaml.Node, key string) (*yaml.Node, error) {
	if root.Kind != yaml.MappingNode {
		return nil, fmt.Errorf("unexpected non-mapping node")
	}

	for i := 0; i < len(root.Content)/2; i++ {
		keyNode := root.Content[i*2]
		if keyNode.Value == key {
			return root.Content[i*2+1], nil
		}
	}
	return nil, nil
}

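ValueInMapping relies on yaml.v3's mapping-node layout, where Content stores keys and values interleaved: the key for entry i sits at index 2i and its value at 2i+1. A stdlib-only sketch of that lookup over a plain alternating slice; the slice contents and helper name are illustrative, not part of the vendored package:

```go
package main

import "fmt"

// valueInPairs mimics ValueInMapping's indexing over a yaml.v3-style
// alternating key/value slice: key at index 2i, value at index 2i+1.
func valueInPairs(content []string, key string) (string, bool) {
	for i := 0; i < len(content)/2; i++ {
		if content[i*2] == key {
			return content[i*2+1], true
		}
	}
	return "", false // mirrors ValueInMapping returning nil, nil on a miss
}

func main() {
	// Stand-in for a mapping node's Content: {"group": "example.io", "kind": "Widget"}
	content := []string{"group", "example.io", "kind", "Widget"}
	v, ok := valueInPairs(content, "kind")
	fmt.Println(v, ok) // Widget true
}
```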
// asCloseAsPossible goes as deep on the given path as possible, returning the
// last node that existed from the given path in the given tree of mapping
// nodes, as well as the rest of the path that could not be fetched, if any.
func asCloseAsPossible(root *yaml.Node, path ...string) (*yaml.Node, []string, error) {
	if root == nil {
		return nil, path, nil
	}
	if root.Kind == yaml.DocumentNode && len(root.Content) > 0 {
		root = root.Content[0]
	}

	currNode := root
	for ; len(path) > 0; path = path[1:] {
		if currNode.Kind != yaml.MappingNode {
			return nil, nil, fmt.Errorf("unexpected non-mapping (%v) before path %v", currNode.Kind, path)
		}

		nextNode, err := ValueInMapping(currNode, path[0])
		if err != nil {
			return nil, nil, fmt.Errorf("unable to get next node in path %v: %w", path, err)
		}

		if nextNode == nil {
			// we're as close as possible
			break
		}

		currNode = nextNode
	}

	return currNode, path, nil
}

// GetNode gets the node at the given path in the given tree of mapping
// nodes, returning false if it doesn't exist.
func GetNode(root *yaml.Node, path ...string) (*yaml.Node, bool, error) {
	resNode, restPath, err := asCloseAsPossible(root, path...)
	if err != nil {
		return nil, false, err
	}
	// more path means the node didn't exist
	if len(restPath) != 0 {
		return nil, false, nil
	}
	return resNode, true, nil
}
80
vendor/sigs.k8s.io/controller-tools/pkg/schemapatcher/internal/yaml/set.go
generated
vendored
Normal file
@@ -0,0 +1,80 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package yaml

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// SetNode sets the given path to the given yaml Node, creating mapping nodes along the way.
func SetNode(root *yaml.Node, val yaml.Node, path ...string) error {
	currNode, path, err := asCloseAsPossible(root, path...)
	if err != nil {
		return err
	}

	if len(path) > 0 {
		if currNode.Kind != yaml.MappingNode {
			return fmt.Errorf("unexpected non-mapping before path %v", path)
		}

		for ; len(path) > 0; path = path[1:] {
			keyNode := yaml.Node{Kind: yaml.ScalarNode, Tag: "!!str", Style: yaml.DoubleQuotedStyle, Value: path[0]}
			nextNode := &yaml.Node{Kind: yaml.MappingNode}
			currNode.Content = append(currNode.Content, &keyNode, nextNode)

			currNode = nextNode
		}
	}

	*currNode = val
	return nil
}

// DeleteNode deletes the node at the given path in the given tree of mapping nodes.
// It's a noop if the path doesn't exist.
func DeleteNode(root *yaml.Node, path ...string) error {
	if len(path) == 0 {
		return fmt.Errorf("must specify a path to delete")
	}
	pathToParent, keyToDelete := path[:len(path)-1], path[len(path)-1]
	parentNode, path, err := asCloseAsPossible(root, pathToParent...)
	if err != nil {
		return err
	}
	if len(path) > 0 {
		// no-op, parent node doesn't exist
		return nil
	}

	if parentNode.Kind != yaml.MappingNode {
		return fmt.Errorf("unexpected non-mapping node")
	}

	for i := 0; i < len(parentNode.Content)/2; i++ {
		keyNode := parentNode.Content[i*2]
		if keyNode.Value == keyToDelete {
			parentNode.Content = append(parentNode.Content[:i*2], parentNode.Content[i*2+2:]...)
			return nil
		}
	}

	// no-op, key not found in parent node
	return nil
}
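DeleteNode removes a key/value pair by splicing two adjacent entries out of the parent mapping node's interleaved Content slice. A stdlib-only sketch of that splice over a plain alternating slice; the slice contents and helper name are illustrative stand-ins:

```go
package main

import "fmt"

// deletePair removes the key at index 2i together with its value at 2i+1,
// mirroring DeleteNode's splice on a yaml.v3 mapping node's Content.
func deletePair(content []string, key string) []string {
	for i := 0; i < len(content)/2; i++ {
		if content[i*2] == key {
			return append(content[:i*2], content[i*2+2:]...)
		}
	}
	return content // no-op if the key isn't present, like DeleteNode
}

func main() {
	content := []string{"spec", "s", "validation", "v", "status", "st"}
	fmt.Println(deletePair(content, "validation"))
}
```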
45
vendor/sigs.k8s.io/controller-tools/pkg/schemapatcher/zz_generated.markerhelp.go
generated
vendored
Normal file
@@ -0,0 +1,45 @@
// +build !ignore_autogenerated

/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by helpgen. DO NOT EDIT.

package schemapatcher

import (
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func (Generator) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "patches existing CRDs with new schemata. ",
			Details: "For legacy (v1beta1) single-version CRDs, it will simply replace the global schema. \n For legacy (v1beta1) multi-version CRDs, and any v1 CRDs, it will replace schemata of existing versions and *clear the schema* from any versions not specified in the Go code. It will *not* add new versions, or remove old ones. \n For legacy multi-version CRDs with identical schemata, it will take care of lifting the per-version schema up to the global schema. \n It will generate output for each \"CRD Version\" (API version of the CRD type itself) , e.g. apiextensions/v1beta1 and apiextensions/v1) available.",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"ManifestsPath": {
				Summary: "contains the CustomResourceDefinition YAML files.",
				Details: "",
			},
			"MaxDescLen": {
				Summary: "specifies the maximum description length for fields in CRD's OpenAPI schema. ",
				Details: "0 indicates drop the description for all fields completely. n indicates limit the description to at most n characters and truncate the description to closest sentence boundary if it exceeds n characters.",
			},
		},
	}
}
49
vendor/sigs.k8s.io/controller-tools/pkg/version/version.go
generated
vendored
Normal file
@@ -0,0 +1,49 @@
/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package version

import (
	"fmt"
	"runtime/debug"
)

// Version returns the version of the main module
func Version() string {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		// binary has not been built with module support
		return "(unknown)"
	}
	return info.Main.Version
}

// Print prints the main module version on stdout.
//
// Print will display either:
//
// - "Version: v0.2.1" when the program has been compiled with:
//
//	$ go get github.com/controller-tools/cmd/controller-gen@v0.2.1
//
// Note: go modules requires the usage of semver compatible tags starting with
// 'v' to have nice human-readable versions.
//
// - "Version: (devel)" when the program is compiled from a local git checkout.
//
// - "Version: (unknown)" when not using go modules.
func Print() {
	fmt.Printf("Version: %s\n", Version())
}
417
vendor/sigs.k8s.io/controller-tools/pkg/webhook/parser.go
generated
vendored
Normal file
@@ -0,0 +1,417 @@
/*
Copyright 2018 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Package webhook contains libraries for generating webhookconfig manifests
// from markers in Go source files.
//
// The markers take the form:
//
//	+kubebuilder:webhook:webhookVersions=<[]string>,failurePolicy=<string>,matchPolicy=<string>,groups=<[]string>,resources=<[]string>,verbs=<[]string>,versions=<[]string>,name=<string>,path=<string>,mutating=<bool>,sideEffects=<string>,admissionReviewVersions=<[]string>
package webhook

import (
	"fmt"
	"strings"

	admissionregv1 "k8s.io/api/admissionregistration/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/util/sets"

	"sigs.k8s.io/controller-tools/pkg/genall"
	"sigs.k8s.io/controller-tools/pkg/markers"
)

// The default {Mutating,Validating}WebhookConfiguration version to generate.
const (
	defaultWebhookVersion = "v1"
)

var (
	// ConfigDefinition is a marker for defining Webhook manifests.
	// Call ToWebhook on the value to get a Kubernetes Webhook.
	ConfigDefinition = markers.Must(markers.MakeDefinition("kubebuilder:webhook", markers.DescribesPackage, Config{}))
)

// supportedWebhookVersions returns the currently supported API versions of {Mutating,Validating}WebhookConfiguration.
func supportedWebhookVersions() []string {
	return []string{defaultWebhookVersion, "v1beta1"}
}

// +controllertools:marker:generateHelp:category=Webhook

// Config specifies how a webhook should be served.
//
// It specifies only the details that are intrinsic to the application serving
// it (e.g. the resources it can handle, or the path it serves on).
type Config struct {
	// Mutating marks this as a mutating webhook (it's validating only if false)
	//
	// Mutating webhooks are allowed to change the object in their response,
	// and are called *before* all validating webhooks.  Mutating webhooks may
	// choose to reject an object, similarly to a validating webhook.
	Mutating bool
	// FailurePolicy specifies what should happen if the API server cannot reach the webhook.
	//
	// It may be either "ignore" (to skip the webhook and continue on) or "fail" (to reject
	// the object in question).
	FailurePolicy string
	// MatchPolicy defines how the "rules" list is used to match incoming requests.
	// Allowed values are "Exact" (match only if it exactly matches the specified rule)
	// or "Equivalent" (match a request if it modifies a resource listed in rules, even via another API group or version).
	MatchPolicy string `marker:",optional"`
	// SideEffects specify whether calling the webhook will have side effects.
	// This has an impact on dry runs and `kubectl diff`: if the sideEffect is "Unknown" (the default) or "Some", then
	// the API server will not call the webhook on a dry-run request and fails instead.
	// If the value is "None", then the webhook has no side effects and the API server will call it on dry-run.
	// If the value is "NoneOnDryRun", then the webhook is responsible for inspecting the "dryRun" property of the
	// AdmissionReview sent in the request, and avoiding side effects if that value is "true."
	SideEffects string `marker:",optional"`

	// Groups specifies the API groups that this webhook receives requests for.
	Groups []string
	// Resources specifies the API resources that this webhook receives requests for.
	Resources []string
	// Verbs specifies the Kubernetes API verbs that this webhook receives requests for.
	//
	// Only modification-like verbs may be specified.
	// May be "create", "update", "delete", "connect", or "*" (for all).
	Verbs []string
	// Versions specifies the API versions that this webhook receives requests for.
	Versions []string

	// Name indicates the name of this webhook configuration. Should be a domain with at least three segments separated by dots
	Name string

	// Path specifies the path that the API server should connect to this webhook on. Must be
	// prefixed with a '/validate-' or '/mutate-' depending on the type, and followed by
	// $GROUP-$VERSION-$KIND where all values are lower-cased and the periods in the group
	// are substituted for hyphens. For example, a validating webhook path for type
	// batch.tutorial.kubebuilder.io/v1,Kind=CronJob would be
	// /validate-batch-tutorial-kubebuilder-io-v1-cronjob
	Path string

	// WebhookVersions specifies the target API versions of the {Mutating,Validating}WebhookConfiguration objects
	// itself to generate. Defaults to v1.
	WebhookVersions []string `marker:"webhookVersions,optional"`

	// AdmissionReviewVersions is an ordered list of preferred `AdmissionReview`
	// versions the Webhook expects.
	// For generating v1 {Mutating,Validating}WebhookConfiguration, this is mandatory.
	// For generating v1beta1 {Mutating,Validating}WebhookConfiguration, this is optional, and defaults to v1beta1.
	AdmissionReviewVersions []string `marker:"admissionReviewVersions,optional"`
}

// verbToAPIVariant converts a marker's verb to the proper value for the API.
// Unrecognized verbs are passed through.
func verbToAPIVariant(verbRaw string) admissionregv1.OperationType {
	switch strings.ToLower(verbRaw) {
	case strings.ToLower(string(admissionregv1.Create)):
		return admissionregv1.Create
	case strings.ToLower(string(admissionregv1.Update)):
		return admissionregv1.Update
	case strings.ToLower(string(admissionregv1.Delete)):
		return admissionregv1.Delete
	case strings.ToLower(string(admissionregv1.Connect)):
		return admissionregv1.Connect
	case strings.ToLower(string(admissionregv1.OperationAll)):
		return admissionregv1.OperationAll
	default:
		return admissionregv1.OperationType(verbRaw)
	}
}

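verbToAPIVariant normalizes marker verbs case-insensitively and passes unknown verbs through unchanged. A standalone sketch of the same pattern using plain strings instead of the admissionregistration/v1 OperationType constants; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// canonicalVerb mirrors verbToAPIVariant: case-insensitive match against
// the known operations, passing anything unrecognized through as-is.
func canonicalVerb(verbRaw string) string {
	for _, op := range []string{"CREATE", "UPDATE", "DELETE", "CONNECT", "*"} {
		if strings.EqualFold(verbRaw, op) {
			return op
		}
	}
	return verbRaw // unrecognized verbs are passed through
}

func main() {
	fmt.Println(canonicalVerb("create")) // CREATE
	fmt.Println(canonicalVerb("Patch"))  // Patch (passed through unchanged)
}
```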
// ToMutatingWebhook converts this rule to its Kubernetes API form.
|
||||
func (c Config) ToMutatingWebhook() (admissionregv1.MutatingWebhook, error) {
|
||||
if !c.Mutating {
|
||||
return admissionregv1.MutatingWebhook{}, fmt.Errorf("%s is a validating webhook", c.Name)
|
||||
}
|
||||
|
||||
matchPolicy, err := c.matchPolicy()
|
||||
if err != nil {
|
||||
return admissionregv1.MutatingWebhook{}, err
|
||||
}
|
||||
|
||||
return admissionregv1.MutatingWebhook{
|
||||
Name: c.Name,
|
||||
Rules: c.rules(),
|
||||
FailurePolicy: c.failurePolicy(),
|
||||
MatchPolicy: matchPolicy,
|
||||
ClientConfig: c.clientConfig(),
|
||||
SideEffects: c.sideEffects(),
|
||||
AdmissionReviewVersions: c.AdmissionReviewVersions,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ToValidatingWebhook converts this rule to its Kubernetes API form.
func (c Config) ToValidatingWebhook() (admissionregv1.ValidatingWebhook, error) {
	if c.Mutating {
		return admissionregv1.ValidatingWebhook{}, fmt.Errorf("%s is a mutating webhook", c.Name)
	}

	matchPolicy, err := c.matchPolicy()
	if err != nil {
		return admissionregv1.ValidatingWebhook{}, err
	}

	return admissionregv1.ValidatingWebhook{
		Name:                    c.Name,
		Rules:                   c.rules(),
		FailurePolicy:           c.failurePolicy(),
		MatchPolicy:             matchPolicy,
		ClientConfig:            c.clientConfig(),
		SideEffects:             c.sideEffects(),
		AdmissionReviewVersions: c.AdmissionReviewVersions,
	}, nil
}

// rules returns the configuration of what operations on what
// resources/subresources a webhook should care about.
func (c Config) rules() []admissionregv1.RuleWithOperations {
	whConfig := admissionregv1.RuleWithOperations{
		Rule: admissionregv1.Rule{
			APIGroups:   c.Groups,
			APIVersions: c.Versions,
			Resources:   c.Resources,
		},
		Operations: make([]admissionregv1.OperationType, len(c.Verbs)),
	}

	for i, verbRaw := range c.Verbs {
		whConfig.Operations[i] = verbToAPIVariant(verbRaw)
	}

	// fix the group names, since letting people type "core" is nice
	for i, group := range whConfig.APIGroups {
		if group == "core" {
			whConfig.APIGroups[i] = ""
		}
	}

	return []admissionregv1.RuleWithOperations{whConfig}
}

// failurePolicy converts the string value to the proper value for the API.
// Unrecognized values are passed through.
func (c Config) failurePolicy() *admissionregv1.FailurePolicyType {
	var failurePolicy admissionregv1.FailurePolicyType
	switch strings.ToLower(c.FailurePolicy) {
	case strings.ToLower(string(admissionregv1.Ignore)):
		failurePolicy = admissionregv1.Ignore
	case strings.ToLower(string(admissionregv1.Fail)):
		failurePolicy = admissionregv1.Fail
	default:
		failurePolicy = admissionregv1.FailurePolicyType(c.FailurePolicy)
	}
	return &failurePolicy
}

// matchPolicy converts the string value to the proper value for the API.
func (c Config) matchPolicy() (*admissionregv1.MatchPolicyType, error) {
	var matchPolicy admissionregv1.MatchPolicyType
	switch strings.ToLower(c.MatchPolicy) {
	case strings.ToLower(string(admissionregv1.Exact)):
		matchPolicy = admissionregv1.Exact
	case strings.ToLower(string(admissionregv1.Equivalent)):
		matchPolicy = admissionregv1.Equivalent
	case "":
		return nil, nil
	default:
		return nil, fmt.Errorf("unknown value %q for matchPolicy", c.MatchPolicy)
	}
	return &matchPolicy, nil
}

// clientConfig returns the client config for a webhook.
func (c Config) clientConfig() admissionregv1.WebhookClientConfig {
	path := c.Path
	return admissionregv1.WebhookClientConfig{
		Service: &admissionregv1.ServiceReference{
			Name:      "webhook-service",
			Namespace: "system",
			Path:      &path,
		},
	}
}

// sideEffects returns the sideEffects config for a webhook.
func (c Config) sideEffects() *admissionregv1.SideEffectClass {
	var sideEffects admissionregv1.SideEffectClass
	switch strings.ToLower(c.SideEffects) {
	case strings.ToLower(string(admissionregv1.SideEffectClassNone)):
		sideEffects = admissionregv1.SideEffectClassNone
	case strings.ToLower(string(admissionregv1.SideEffectClassNoneOnDryRun)):
		sideEffects = admissionregv1.SideEffectClassNoneOnDryRun
	case strings.ToLower(string(admissionregv1.SideEffectClassSome)):
		sideEffects = admissionregv1.SideEffectClassSome
	case "":
		return nil
	default:
		return nil
	}
	return &sideEffects
}

// webhookVersions returns the target API versions of the {Mutating,Validating}WebhookConfiguration objects for a webhook.
func (c Config) webhookVersions() ([]string, error) {
	// If WebhookVersions is not specified, we default it to `v1`.
	if len(c.WebhookVersions) == 0 {
		return []string{defaultWebhookVersion}, nil
	}
	supportedWebhookVersions := sets.NewString(supportedWebhookVersions()...)
	for _, version := range c.WebhookVersions {
		if !supportedWebhookVersions.Has(version) {
			return nil, fmt.Errorf("unsupported webhook version: %s", version)
		}
	}
	return sets.NewString(c.WebhookVersions...).UnsortedList(), nil
}

// +controllertools:marker:generateHelp

// Generator generates (partial) {Mutating,Validating}WebhookConfiguration objects.
type Generator struct{}

func (Generator) RegisterMarkers(into *markers.Registry) error {
	if err := into.Register(ConfigDefinition); err != nil {
		return err
	}
	into.AddHelp(ConfigDefinition, Config{}.Help())
	return nil
}

func (Generator) Generate(ctx *genall.GenerationContext) error {
	supportedWebhookVersions := supportedWebhookVersions()
	mutatingCfgs := make(map[string][]admissionregv1.MutatingWebhook, len(supportedWebhookVersions))
	validatingCfgs := make(map[string][]admissionregv1.ValidatingWebhook, len(supportedWebhookVersions))
	for _, root := range ctx.Roots {
		markerSet, err := markers.PackageMarkers(ctx.Collector, root)
		if err != nil {
			root.AddError(err)
		}

		for _, cfg := range markerSet[ConfigDefinition.Name] {
			cfg := cfg.(Config)
			webhookVersions, err := cfg.webhookVersions()
			if err != nil {
				return err
			}
			if cfg.Mutating {
				w, err := cfg.ToMutatingWebhook()
				if err != nil {
					return err
				}
				for _, webhookVersion := range webhookVersions {
					mutatingCfgs[webhookVersion] = append(mutatingCfgs[webhookVersion], w)
				}
			} else {
				w, err := cfg.ToValidatingWebhook()
				if err != nil {
					return err
				}
				for _, webhookVersion := range webhookVersions {
					validatingCfgs[webhookVersion] = append(validatingCfgs[webhookVersion], w)
				}
			}
		}
	}

	versionedWebhooks := make(map[string][]interface{}, len(supportedWebhookVersions))
	for _, version := range supportedWebhookVersions {
		if cfgs, ok := mutatingCfgs[version]; ok {
			// All webhook config versions in supportedWebhookVersions have the same general form, with a few
			// stricter requirements for v1. Since no conversion scheme exists for webhook configs, the v1
			// type can be used for all versioned types in this context.
			objRaw := &admissionregv1.MutatingWebhookConfiguration{}
			objRaw.SetGroupVersionKind(schema.GroupVersionKind{
				Group:   admissionregv1.SchemeGroupVersion.Group,
				Version: version,
				Kind:    "MutatingWebhookConfiguration",
			})
			objRaw.SetName("mutating-webhook-configuration")
			objRaw.Webhooks = cfgs
			switch version {
			case admissionregv1.SchemeGroupVersion.Version:
				for i := range objRaw.Webhooks {
					// SideEffects is required in admissionregistration/v1; if it is unset, or set to `Some` or `Unknown`,
					// return an error.
					if err := checkSideEffectsForV1(objRaw.Webhooks[i].SideEffects); err != nil {
						return err
					}
					// AdmissionReviewVersions is required in admissionregistration/v1; if it is unset,
					// return an error.
					if len(objRaw.Webhooks[i].AdmissionReviewVersions) == 0 {
						return fmt.Errorf("AdmissionReviewVersions is mandatory for v1 {Mutating,Validating}WebhookConfiguration")
					}
				}
			}
			versionedWebhooks[version] = append(versionedWebhooks[version], objRaw)
		}

		if cfgs, ok := validatingCfgs[version]; ok {
			// All webhook config versions in supportedWebhookVersions have the same general form, with a few
			// stricter requirements for v1. Since no conversion scheme exists for webhook configs, the v1
			// type can be used for all versioned types in this context.
			objRaw := &admissionregv1.ValidatingWebhookConfiguration{}
			objRaw.SetGroupVersionKind(schema.GroupVersionKind{
				Group:   admissionregv1.SchemeGroupVersion.Group,
				Version: version,
				Kind:    "ValidatingWebhookConfiguration",
			})
			objRaw.SetName("validating-webhook-configuration")
			objRaw.Webhooks = cfgs
			switch version {
			case admissionregv1.SchemeGroupVersion.Version:
				for i := range objRaw.Webhooks {
					// SideEffects is required in admissionregistration/v1; if it is unset, or set to `Some` or `Unknown`,
					// return an error.
					if err := checkSideEffectsForV1(objRaw.Webhooks[i].SideEffects); err != nil {
						return err
					}
					// AdmissionReviewVersions is required in admissionregistration/v1; if it is unset,
					// return an error.
					if len(objRaw.Webhooks[i].AdmissionReviewVersions) == 0 {
						return fmt.Errorf("AdmissionReviewVersions is mandatory for v1 {Mutating,Validating}WebhookConfiguration")
					}
				}
			}
			versionedWebhooks[version] = append(versionedWebhooks[version], objRaw)
		}
	}

	for k, v := range versionedWebhooks {
		var fileName string
		if k == defaultWebhookVersion {
			fileName = "manifests.yaml"
		} else {
			fileName = fmt.Sprintf("manifests.%s.yaml", k)
		}
		if err := ctx.WriteYAML(fileName, v...); err != nil {
			return err
		}
	}
	return nil
}

func checkSideEffectsForV1(sideEffects *admissionregv1.SideEffectClass) error {
	if sideEffects == nil {
		return fmt.Errorf("SideEffects is required for creating v1 {Mutating,Validating}WebhookConfiguration")
	}
	if *sideEffects == admissionregv1.SideEffectClassUnknown ||
		*sideEffects == admissionregv1.SideEffectClassSome {
		return fmt.Errorf("SideEffects should not be set to `Some` or `Unknown` for v1 {Mutating,Validating}WebhookConfiguration")
	}
	return nil
}
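The v1 checks above mean a webhook marker must declare both `sideEffects` and `admissionReviewVersions`. A hypothetical `+kubebuilder:webhook` marker that would pass them (the group, resource, path, and webhook name are illustrative, not taken from this repository):

```go
// +kubebuilder:webhook:verbs=create;update,path=/validate-example-com-v1-widget,mutating=false,failurePolicy=fail,groups=example.com,resources=widgets,versions=v1,name=vwidget.example.com,sideEffects=None,admissionReviewVersions=v1
```

Omitting `sideEffects` or `admissionReviewVersions` from such a marker would make `Generate` return the corresponding error when targeting v1.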
96
vendor/sigs.k8s.io/controller-tools/pkg/webhook/zz_generated.markerhelp.go
generated
vendored
Normal file
@@ -0,0 +1,96 @@
// +build !ignore_autogenerated

/*
Copyright 2019 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// Code generated by helpgen. DO NOT EDIT.

package webhook

import (
	"sigs.k8s.io/controller-tools/pkg/markers"
)

func (Config) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "Webhook",
		DetailedHelp: markers.DetailedHelp{
			Summary: "specifies how a webhook should be served. ",
			Details: "It specifies only the details that are intrinsic to the application serving it (e.g. the resources it can handle, or the path it serves on).",
		},
		FieldHelp: map[string]markers.DetailedHelp{
			"Mutating": {
				Summary: "marks this as a mutating webhook (it's validating only if false) ",
				Details: "Mutating webhooks are allowed to change the object in their response, and are called *before* all validating webhooks. Mutating webhooks may choose to reject an object, similarly to a validating webhook.",
			},
			"FailurePolicy": {
				Summary: "specifies what should happen if the API server cannot reach the webhook. ",
				Details: "It may be either \"ignore\" (to skip the webhook and continue on) or \"fail\" (to reject the object in question).",
			},
			"MatchPolicy": {
				Summary: "defines how the \"rules\" list is used to match incoming requests. Allowed values are \"Exact\" (match only if it exactly matches the specified rule) or \"Equivalent\" (match a request if it modifies a resource listed in rules, even via another API group or version).",
				Details: "",
			},
			"SideEffects": {
				Summary: "specify whether calling the webhook will have side effects. This has an impact on dry runs and `kubectl diff`: if the sideEffect is \"Unknown\" (the default) or \"Some\", then the API server will not call the webhook on a dry-run request and fails instead. If the value is \"None\", then the webhook has no side effects and the API server will call it on dry-run. If the value is \"NoneOnDryRun\", then the webhook is responsible for inspecting the \"dryRun\" property of the AdmissionReview sent in the request, and avoiding side effects if that value is \"true.\"",
				Details: "",
			},
			"Groups": {
				Summary: "specifies the API groups that this webhook receives requests for.",
				Details: "",
			},
			"Resources": {
				Summary: "specifies the API resources that this webhook receives requests for.",
				Details: "",
			},
			"Verbs": {
				Summary: "specifies the Kubernetes API verbs that this webhook receives requests for. ",
				Details: "Only modification-like verbs may be specified. May be \"create\", \"update\", \"delete\", \"connect\", or \"*\" (for all).",
			},
			"Versions": {
				Summary: "specifies the API versions that this webhook receives requests for.",
				Details: "",
			},
			"Name": {
				Summary: "indicates the name of this webhook configuration. Should be a domain with at least three segments separated by dots",
				Details: "",
			},
			"Path": {
				Summary: "specifies the path that the API server should connect to this webhook on. Must be prefixed with a '/validate-' or '/mutate-' depending on the type, and followed by $GROUP-$VERSION-$KIND where all values are lower-cased and the periods in the group are substituted for hyphens. For example, a validating webhook path for type batch.tutorial.kubebuilder.io/v1,Kind=CronJob would be /validate-batch-tutorial-kubebuilder-io-v1-cronjob",
				Details: "",
			},
			"WebhookVersions": {
				Summary: "specifies the target API versions of the {Mutating,Validating}WebhookConfiguration objects itself to generate. Defaults to v1.",
				Details: "",
			},
			"AdmissionReviewVersions": {
				Summary: "is an ordered list of preferred `AdmissionReview` versions the Webhook expects. For generating v1 {Mutating,Validating}WebhookConfiguration, this is mandatory. For generating v1beta1 {Mutating,Validating}WebhookConfiguration, this is optional, and defaults to v1beta1.",
				Details: "",
			},
		},
	}
}

func (Generator) Help() *markers.DefinitionHelp {
	return &markers.DefinitionHelp{
		Category: "",
		DetailedHelp: markers.DetailedHelp{
			Summary: "generates (partial) {Mutating,Validating}WebhookConfiguration objects.",
			Details: "",
		},
		FieldHelp: map[string]markers.DetailedHelp{},
	}
}