From 519725bb3c075ee2462c929f5997cb068e18466a Mon Sep 17 00:00:00 2001
From: Ondřej Surý
+Cgo lets Go packages call C code. Given a Go source file written with some
+special features, cgo outputs Go and C files that can be combined into a
+single Go package.
+
+To lead with an example, here's a Go package that provides two functions -
+Random and Seed - that wrap C's random and srandom functions.
+
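The package source itself is pulled in from a separate file by the build, so it isn't reproduced in this diff. As a rough sketch (assuming the package is named rand, as the prose below implies), it might look like this:

package rand

/*
#include <stdlib.h>
*/
import "C"

// Random returns a pseudo-random number obtained from the C library's random().
func Random() int {
	return int(C.random())
}

// Seed sets the starting point for C's random() sequence via srandom().
func Seed(i int) {
	C.srandom(C.uint(i))
}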
+Let's look at what's happening here, starting with the import statement.
+
+The rand package imports "C", but you'll find there's
+no such package in the standard Go library. That's because C is a
+"pseudo-package", a special name interpreted by cgo as a reference to C's
+name space.
+
+The rand package contains four references to the C
+package: the calls to C.random and C.srandom, the
+conversion C.uint(i), and the import statement.
+
+The Random function calls the standard C library's random
+function and returns the result. In C, random returns a value of the
+C type long, which cgo represents as the type C.long.
+It must be converted to a Go type before it can be used by Go code outside this
+package, using an ordinary Go type conversion:
+
+Here's an equivalent function that uses a temporary variable to illustrate
+the type conversion more explicitly:
+
+The Seed function does the reverse, in a way. It takes a
+regular Go int, converts it to the C unsigned int
+type, and passes it to the C function srandom.
+
+Note that cgo knows the unsigned int type as C.uint;
+see the cgo documentation for a complete list of these numeric type names.
+
+The one detail of this example we haven't examined yet is the comment
+above the import statement.
+
+Cgo recognizes this comment. Any lines starting with #cgo followed
+by a space character are removed; these become directives for cgo.
+The remaining lines are used as a header when compiling the C parts of
+the package. In this case those lines are just a single #include
+statement, but they can be almost any C code. The #cgo directives are
+used to provide flags for the compiler and linker when building the C
+parts of the package.
+
+There is a limitation: if your program uses any //export
+directives, then the C code in the comment may only include declarations
+(extern int f();), not definitions (int f() {
+return 1; }). You can use //export directives to
+make Go functions accessible to C code.
+
+The #cgo and //export directives are
+documented in the cgo documentation.
+Strings and things
+
+Unlike Go, C doesn't have an explicit string type. Strings in C are
+represented by a zero-terminated array of chars.
+
+Conversion between Go and C strings is done with the
+C.CString, C.GoString, and
+C.GoStringN functions. These conversions make a copy of the
+string data.
+
+This next example implements a Print function that writes a
+string to standard output using C's fputs function from the
+stdio library:
+
+Memory allocations made by C code are not known to Go's memory manager.
+When you create a C string with C.CString (or any C memory
+allocation) you must remember to free the memory when you're done with it
+by calling C.free.
+
+The call to C.CString returns a pointer to the start of the
+char array, so before the function exits we convert it to an
+unsafe.Pointer and release the memory allocation with
+C.free. A common idiom in cgo programs is to defer
+the free immediately after allocating (especially when the code that follows
+is more complex than a single function call), as in this rewrite of
+Print:
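The rewritten Print lives in a separate example file. A minimal sketch of the idiom (assuming the comment above import "C" includes <stdio.h> and <stdlib.h>, and that the unsafe package is imported) might be:

// Print writes s to standard output using C's fputs, and frees the
// temporary C copy of the string as soon as Print returns.
func Print(s string) {
	cs := C.CString(s)
	defer C.free(unsafe.Pointer(cs))
	C.fputs(cs, (*C.FILE)(C.stdout))
}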
+Building cgo packages
+
+To build cgo packages, just use "go build" or
+"go install" as usual. The go tool recognizes the special "C" import and
+automatically uses cgo for those files.
+
+More cgo resources
+
+The cgo command documentation has more detail about
+the C pseudo-package and the build process. The cgo examples
+in the Go tree demonstrate more advanced concepts.
+
+For a simple, idiomatic example of a cgo-based package, see Russ Cox's gosqlite.
+Also, the Go Project Dashboard lists several other
+cgo packages.
+
+Finally, if you're curious as to how all this works internally, take a look
+at the introductory comment of the runtime package's cgocall.c.
+
+Concurrent programming has its own idioms. A good example is timeouts. Although
+Go's channels do not support them directly, they are easy to implement. Say we
+want to receive from the channel ch, but want to wait at most one
+second for the value to arrive. We would start by creating a signalling channel
+and launching a goroutine that sleeps before sending on the channel:
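The code for this step is kept in a separate file; a plausible sketch of the signalling channel and the sleeping goroutine is:

timeout := make(chan bool, 1)
go func() {
	time.Sleep(1 * time.Second) // wait one second before signalling
	timeout <- true
}()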
+We can then use a select statement to receive from either
+ch or timeout. If nothing arrives on ch
+after one second, the timeout case is selected and the attempt to read from
+ch is abandoned.
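A sketch of that select, assuming ch carries the value we are waiting for:

select {
case <-ch:
	// a read from ch has occurred
case <-timeout:
	// the read from ch has timed out; give up
}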
+The timeout channel is buffered with space for 1 value, allowing
+the timeout goroutine to send to the channel and then exit. The goroutine
+doesn't know (or care) whether the value is received. This means the goroutine
+won't hang around forever if the ch receive happens before the
+timeout is reached. The timeout channel will eventually be
+deallocated by the garbage collector.
+
+(In this example we used time.Sleep to demonstrate the mechanics
+of goroutines and channels. In real programs you should use
+time.After, a function that returns
+a channel and sends on that channel after the specified duration.)
+Let's look at another variation of this pattern. In this example we have a
+program that reads from multiple replicated databases simultaneously. The
+program needs only one of the answers, and it should accept the answer that
+arrives first.
+
+The function Query takes a slice of database connections and a
+query string. It queries each of the databases in parallel and
+returns the first response it receives:
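The Query listing is external to this diff. A sketch of its final form (Conn, Result, and DoQuery are assumed names; the one-element buffer is the race-condition fix discussed below):

func Query(conns []Conn, query string) Result {
	ch := make(chan Result, 1) // buffered so the first send always has a place to go
	for _, conn := range conns {
		go func(c Conn) {
			select {
			case ch <- c.DoQuery(query): // non-blocking send
			default:
			}
		}(conn)
	}
	return <-ch
}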
+In this example, the closure does a non-blocking send, which it achieves by
+using the send operation in a select statement with a
+default case. If the send cannot go through immediately the
+default case will be selected. Making the send non-blocking guarantees that
+none of the goroutines launched in the loop will hang around. However, if the
+result arrives before the main function has made it to the receive, the send
+could fail since no one is ready.
+
+This problem is a textbook example of what is known as a race condition, but
+the fix is trivial. We just make sure to buffer the channel ch (by
+adding the buffer length as the second argument to make),
+guaranteeing that the first send has a place to put the value. This ensures the
+send will always succeed, and the first value to arrive will be retrieved
+regardless of the order of execution.
+These two examples demonstrate the simplicity with which Go can express complex
+interactions between goroutines.
+
Go has the usual mechanisms for control flow: if, for, switch, goto. It also
@@ -23,23 +20,7 @@ For example, let's look at a function that opens two files and copies the
contents of one file to the other:
This works, but there is a bug. If the call to os.Create fails, the
@@ -50,22 +31,7 @@ noticed and resolved. By introducing defer statements we can ensure that the
files are always closed:
Defer statements allow us to think about closing each file right after opening
@@ -88,13 +54,7 @@ In this example, the expression "i" is evaluated when the Println call is
deferred. The deferred call will print "0" after the function returns.
2. Deferred function calls are executed in Last In First Out order
@@ -105,12 +65,7 @@ deferred. The deferred call will print "0" after the function returns.
This function prints "3210":
3. Deferred functions may read and assign to the returning function's named
@@ -122,11 +77,7 @@ In this example, a deferred function increments the return value i after
the surrounding function returns. Thus, this function returns 2:
This is convenient for modifying the error return value of a function; we will
@@ -156,36 +107,7 @@ to panic and resume normal execution.
Here's an example program that demonstrates the mechanics of panic and defer:
The function g takes the int i, and panics if i is greater than 3, or else it
diff --git a/doc/articles/defer_panic_recover.tmpl b/doc/articles/defer_panic_recover.tmpl
deleted file mode 100644
index 5f48c6ef4..000000000
--- a/doc/articles/defer_panic_recover.tmpl
+++ /dev/null
@@ -1,195 +0,0 @@
-
-{{donotedit}}
-
-Go has the usual mechanisms for control flow: if, for, switch, goto. It also
-has the go statement to run code in a separate goroutine. Here I'd like to
-discuss some of the less common ones: defer, panic, and recover.
-
-A defer statement pushes a function call onto a list. The list of saved
-calls is executed after the surrounding function returns. Defer is commonly
-used to simplify functions that perform various clean-up actions.
-
-For example, let's look at a function that opens two files and copies the
-contents of one file to the other:
-
-This works, but there is a bug. If the call to os.Create fails, the
-function will return without closing the source file. This can be easily
-remedied by putting a call to src.Close() before the second return statement,
-but if the function were more complex the problem might not be so easily
-noticed and resolved. By introducing defer statements we can ensure that the
-files are always closed:
-
-Defer statements allow us to think about closing each file right after opening
-it, guaranteeing that, regardless of the number of return statements in the
-function, the files will be closed.
-
-The behavior of defer statements is straightforward and predictable. There are
-three simple rules:
-
-1. A deferred function's arguments are evaluated when the defer statement is
-evaluated.
-
-In this example, the expression "i" is evaluated when the Println call is
-deferred. The deferred call will print "0" after the function returns.
-
-2. Deferred function calls are executed in Last In First Out order
-after the surrounding function returns.
-
-This function prints "3210":
-
-3. Deferred functions may read and assign to the returning function's named
-return values.
-
-In this example, a deferred function increments the return value i after
-the surrounding function returns. Thus, this function returns 2:
-
-This is convenient for modifying the error return value of a function; we will
-see an example of this shortly.
-
-Panic is a built-in function that stops the ordinary flow of control and
-begins panicking. When the function F calls panic, execution of F stops,
-any deferred functions in F are executed normally, and then F returns to its
-caller. To the caller, F then behaves like a call to panic. The process
-continues up the stack until all functions in the current goroutine have
-returned, at which point the program crashes. Panics can be initiated by
-invoking panic directly. They can also be caused by runtime errors, such as
-out-of-bounds array accesses.
-
-Recover is a built-in function that regains control of a panicking
-goroutine. Recover is only useful inside deferred functions. During normal
-execution, a call to recover will return nil and have no other effect. If the
-current goroutine is panicking, a call to recover will capture the value given
-to panic and resume normal execution.
-
-Here's an example program that demonstrates the mechanics of panic and defer:
-
-The function g takes the int i, and panics if i is greater than 3, or else it
-calls itself with the argument i+1. The function f defers a function that calls
-recover and prints the recovered value (if it is non-nil). Try to picture what
-the output of this program might be before reading on.
-
-The program will output:
-
-If we remove the deferred function from f the panic is not recovered and
-reaches the top of the goroutine's call stack, terminating the program. This
-modified program will output:
-
-For a real-world example of panic and recover, see the
-json package from the Go standard library.
-It decodes JSON-encoded data with a set of recursive functions.
-When malformed JSON is encountered, the parser calls panic to unwind the
-stack to the top-level function call, which recovers from the panic and returns
-an appropriate error value (see the 'error' and 'unmarshal' functions in
-decode.go).
-
-The convention in the Go libraries is that even when a package uses panic
-internally, its external API still presents explicit error return values.
-
-Other uses of defer (beyond the file.Close() example given earlier)
-include releasing a mutex:
-
-printing a footer:
-
-and more.
-
-In summary, the defer statement (with or without panic and recover) provides an
-unusual and powerful mechanism for control flow. It can be used to model a
-number of features implemented by special-purpose structures in other
-programming languages. Try it out.
-
If you have written any Go code you have probably encountered the built-in
@@ -13,20 +10,14 @@ indicate an abnormal state. For example, the
The following code uses os.Open to open a file. If an error occurs it calls
log.Fatal to print the error message and stop.
You can get a lot done in Go knowing just this about the error type, but in
this article we'll take a closer look at error and discuss some good
practices for error handling in Go.
You can construct one of these values with the errors.New function. It takes
a string that it converts to an errors.errorString and returns as an error
value.
Here's how you might use errors.New:
A caller passing a negative argument to Sqrt receives a non-nil error value
(whose concrete representation is an errors.errorString value). The caller
can access the error string ("math: square root of...") by calling the
error's Error method, or by just printing it:
The fmt package formats an error value by calling its Error() string method.
In many cases fmt.Errorf is good enough, but since error is an interface,
you can use arbitrary data structures as error values, to allow callers to
inspect the details of the error.
A sophisticated caller can then use a type assertion to check for a
NegativeSqrtError and handle it specially, while callers that just pass the
error to fmt.Println or log.Fatal will see no change in behavior.
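A hedged sketch of such a caller (assuming Sqrt has been changed to return a NegativeSqrtError as described, and that fmt and log are imported):

_, err := Sqrt(-2)
if err != nil {
	if nerr, ok := err.(NegativeSqrtError); ok {
		// the invalid argument can be recovered from the error value
		fmt.Println("cannot take the square root of", float64(nerr))
	} else {
		log.Fatal(err)
	}
}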
@@ -164,13 +125,7 @@ As another example, the json package specifies
a SyntaxError type that the json.Decode function
returns when it encounters a syntax error parsing a JSON blob.
The Offset field isn't even shown in the default formatting of the error,
but callers can use it to add file and line information to their error
messages:
(This is a slightly simplified version of some actual code from the
Camlistore project.)
@@ -217,14 +165,7 @@ web crawler might sleep and retry when it encounters a temporary error and give
up otherwise.
Simplifying repetitive error handling
@@ -244,23 +185,7 @@ application with an HTTP handler that retrieves a record from the datastore and
formats it with a template.
This function handles errors returned by the datastore.Get function and
viewTemplate's Execute method. In both cases, it presents a simple error
message to the user with the HTTP status code 500 ("Internal Server Error").
Then we can change our viewRecord function to return errors:
This is simpler than the original version, but the http package doesn't
understand functions that return error. To fix this we can implement the
http.Handler interface's ServeHTTP method on appHandler:
The ServeHTTP method calls the appHandler function and displays the
returned error (if any) to the user.
With this basic error handling infrastructure in place, we can make it more
user friendly. Rather than just displaying the error string, it would be
better to give the user a simple error message with an appropriate HTTP
status code, while logging the full error to the App Engine developer
console for debugging purposes.
@@ -341,24 +248,19 @@ To do this we create an
Next we modify the appHandler type to return *appError values:
(It's usually a mistake to pass back the concrete type of an error rather than
+func CopyFile(dstName, srcName string) (written int64, err error) {
- src, err := os.Open(srcName)
- if err != nil {
- return
- }
-
- dst, err := os.Create(dstName)
- if err != nil {
- return
- }
-
- written, err = io.Copy(dst, src)
- dst.Close()
- src.Close()
- return
-}
+{{code "/doc/progs/defer.go" `/func CopyFile/` `/STOP/`}}
func CopyFile(dstName, srcName string) (written int64, err error) {
- src, err := os.Open(srcName)
- if err != nil {
- return
- }
- defer src.Close()
-
- dst, err := os.Create(dstName)
- if err != nil {
- return
- }
- defer dst.Close()
-
- return io.Copy(dst, src)
-}
+{{code "/doc/progs/defer2.go" `/func CopyFile/` `/STOP/`}}
func a() {
- i := 0
- defer fmt.Println(i)
- i++
- return
-}
+{{code "/doc/progs/defer.go" `/func a/` `/STOP/`}}
func b() {
- for i := 0; i < 4; i++ {
- defer fmt.Print(i)
- }
-}
+{{code "/doc/progs/defer.go" `/func b/` `/STOP/`}}
func c() (i int) {
- defer func() { i++ }()
- return 1
-}
+{{code "/doc/progs/defer.go" `/func c/` `/STOP/`}}
package main
-
-import "fmt"
-
-func main() {
- f()
- fmt.Println("Returned normally from f.")
-}
-
-func f() {
- defer func() {
- if r := recover(); r != nil {
- fmt.Println("Recovered in f", r)
- }
- }()
- fmt.Println("Calling g.")
- g(0)
- fmt.Println("Returned normally from g.")
-}
-
-func g(i int) {
- if i > 3 {
- fmt.Println("Panicking!")
- panic(fmt.Sprintf("%v", i))
- }
- defer fmt.Println("Defer in g", i)
- fmt.Println("Printing in g", i)
- g(i + 1)
-}
+{{code "/doc/progs/defer2.go" `/package main/` `/STOP/`}}
Calling g.
-Printing in g 0
-Printing in g 1
-Printing in g 2
-Printing in g 3
-Panicking!
-Defer in g 3
-Defer in g 2
-Defer in g 1
-Defer in g 0
-Recovered in f 4
-Returned normally from f.
-
-Calling g.
-Printing in g 0
-Printing in g 1
-Printing in g 2
-Printing in g 3
-Panicking!
-Defer in g 3
-Defer in g 2
-Defer in g 1
-Defer in g 0
-panic: 4
-
-panic PC=0x2a9cd8
-[stack trace omitted]
-
-mu.Lock()
-defer mu.Unlock()
-
-printHeader()
-defer printFooter()
-
-os.Open
function
returns a non-nil error
value when it fails to open a file.
func Open(name string) (file *File, err error)
+{{code "/doc/progs/error.go" `/func Open/`}}
os.Open
to open a file. If an error
occurs it calls log.Fatal
to print the error message and stop.
f, err := os.Open("filename.ext")
- if err != nil {
- log.Fatal(err)
- }
- // do something with the open *File f
+{{code "/doc/progs/error.go" `/func openFile/` `/STOP/`}}
error
@@ -59,15 +50,7 @@ The most commonly-used error
implementation is the
errors package's unexported errorString
type.
// errorString is a trivial implementation of error.
-type errorString struct {
- s string
-}
-
-func (e *errorString) Error() string {
- return e.s
-}
+{{code "/doc/progs/error.go" `/errorString/` `/STOP/`}}
errors.New
@@ -75,23 +58,13 @@ function. It takes a string that it converts to an errors.errorStringerror
value.
// New returns an error that formats as the given text.
-func New(text string) error {
- return &errorString{text}
-}
+{{code "/doc/progs/error.go" `/New/` `/STOP/`}}
errors.New
:
func Sqrt(f float64) (float64, error) {
- if f < 0 {
- return 0, errors.New("math: square root of negative number")
- }
- // implementation
-}
+{{code "/doc/progs/error.go" `/func Sqrt/` `/STOP/`}}
Sqrt
receives a non-nil
@@ -101,11 +74,7 @@ A caller passing a negative argument to Sqrt
receives a non-nil
Error
method, or by just printing it:
f, err := Sqrt(-1)
- if err != nil {
- fmt.Println(err)
- }
+{{code "/doc/progs/error.go" `/func printErr/` `/STOP/`}}
error
value
@@ -126,10 +95,7 @@ rules and returns it as an error
created by
errors.New
.
if f < 0 {
- return 0, fmt.Errorf("math: square root of negative number %g", f)
- }
+{{code "/doc/progs/error.go" `/fmtError/` `/STOP/`}}
fmt.Errorf
is good enough, but since
@@ -143,12 +109,7 @@ argument passed to Sqrt
. We can enable that by defining a new
error implementation instead of using errors.errorString
:
type NegativeSqrtError float64
-
-func (f NegativeSqrtError) Error() string {
- return fmt.Sprintf("math: square root of negative number %g", float64(f))
-}
+{{code "/doc/progs/error.go" `/type NegativeSqrtError/` `/STOP/`}}
type SyntaxError struct {
- msg string // description of error
- Offset int64 // error occurred after reading Offset bytes
-}
-
-func (e *SyntaxError) Error() string { return e.msg }
+{{code "/doc/progs/error.go" `/type SyntaxError/` `/STOP/`}}
Offset
field isn't even shown in the default formatting of the
@@ -178,14 +133,7 @@ error, but callers can use it to add file and line information to their error
messages:
if err := dec.Decode(&val); err != nil {
- if serr, ok := err.(*json.SyntaxError); ok {
- line, col := findLine(f, serr.Offset)
- return fmt.Errorf("%s:%d:%d: %v", f.Name(), line, col, err)
- }
- return err
- }
+{{code "/doc/progs/error.go" `/func decodeError/` `/STOP/`}}
if nerr, ok := err.(net.Error); ok && nerr.Temporary() {
- time.Sleep(1e9)
- continue
- }
- if err != nil {
- log.Fatal(err)
- }
+{{code "/doc/progs/error.go" `/func netError/` `/STOP/`}}
func init() {
- http.HandleFunc("/view", viewRecord)
-}
-
-func viewRecord(w http.ResponseWriter, r *http.Request) {
- c := appengine.NewContext(r)
- key := datastore.NewKey(c, "Record", r.FormValue("id"), 0, nil)
- record := new(Record)
- if err := datastore.Get(c, key, record); err != nil {
- http.Error(w, err.Error(), 500)
- return
- }
- if err := viewTemplate.Execute(w, record); err != nil {
- http.Error(w, err.Error(), 500)
- }
-}
+{{code "/doc/progs/error2.go" `/func init/` `/STOP/`}}
datastore.Get
@@ -276,23 +201,13 @@ To reduce the repetition we can define our own HTTP appHandler
type that includes an error
return value:
type appHandler func(http.ResponseWriter, *http.Request) error
+{{code "/doc/progs/error3.go" `/type appHandler/`}}
viewRecord
function to return errors:
func viewRecord(w http.ResponseWriter, r *http.Request) error {
- c := appengine.NewContext(r)
- key := datastore.NewKey(c, "Record", r.FormValue("id"), 0, nil)
- record := new(Record)
- if err := datastore.Get(c, key, record); err != nil {
- return err
- }
- return viewTemplate.Execute(w, record)
-}
+{{code "/doc/progs/error3.go" `/func viewRecord/` `/STOP/`}}
ServeHTTP
method on appHandler
:
func (fn appHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- if err := fn(w, r); err != nil {
- http.Error(w, err.Error(), 500)
- }
-}
+{{code "/doc/progs/error3.go" `/ServeHTTP/` `/STOP/`}}
ServeHTTP
method calls the appHandler
function
@@ -323,10 +233,7 @@ Now when registering viewRecord
with the http package we use the
http.HandlerFunc
).
func init() {
- http.Handle("/view", appHandler(viewRecord))
-}
+{{code "/doc/progs/error3.go" `/func init/` `/STOP/`}}
appError
struct containing an
error
and some other fields:
type appError struct {
- Error error
- Message string
- Code int
-}
+{{code "/doc/progs/error4.go" `/type appError/` `/STOP/`}}
*appError
values:
type appHandler func(http.ResponseWriter, *http.Request) *appError
+{{code "/doc/progs/error4.go" `/type appHandler/`}}
error
, for reasons to be discussed in another article, but
-it's the right thing to do here because ServeHTTP
is the only
+error
,
+for reasons discussed in the Go FAQ,
+but it's the right thing to do here because ServeHTTP
is the only
place that sees the value and uses its contents.)
Code
and log the full Error
to the developer
console:
func (fn appHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
-    if e := fn(w, r); e != nil { // e is *appError, not os.Error.
-        c := appengine.NewContext(r)
-        c.Errorf("%v", e.Error)
-        http.Error(w, e.Message, e.Code)
-    }
-}
+{{code "/doc/progs/error4.go" `/ServeHTTP/` `/STOP/`}}
Finally, we update viewRecord
to the new function signature and
have it return more context when it encounters an error:
func viewRecord(w http.ResponseWriter, r *http.Request) *appError {
-    c := appengine.NewContext(r)
-    key := datastore.NewKey(c, "Record", r.FormValue("id"), 0, nil)
-    record := new(Record)
-    if err := datastore.Get(c, key, record); err != nil {
-        return &appError{err, "Record not found", 404}
-    }
-    if err := viewTemplate.Execute(w, record); err != nil {
-        return &appError{err, "Can't display record", 500}
-    }
-    return nil
-}
+{{code "/doc/progs/error4.go" `/func viewRecord/` `/STOP/`}}
This version of viewRecord
is the same length as the original, but
diff --git a/doc/articles/error_handling.tmpl b/doc/articles/error_handling.tmpl
deleted file mode 100644
index 56b7fb309..000000000
--- a/doc/articles/error_handling.tmpl
+++ /dev/null
@@ -1,314 +0,0 @@
-
-{{donotedit}}
-
-If you have written any Go code you have probably encountered the built-in
-error
type. Go code uses error
values to
-indicate an abnormal state. For example, the os.Open
function
-returns a non-nil error
value when it fails to open a file.
-
-The following code uses os.Open
to open a file. If an error
-occurs it calls log.Fatal
to print the error message and stop.
-
-You can get a lot done in Go knowing just this about the error
-type, but in this article we'll take a closer look at error
and
-discuss some good practices for error handling in Go.
-
-The error type -
- -
-The error
type is an interface type. An error
-variable represents any value that can describe itself as a string. Here is the
-interface's declaration:
-
type error interface {
-    Error() string
-}
-
-
-The error
type, as with all built in types, is
-predeclared in the
-universe block.
-
-The most commonly-used error
implementation is the
-errors package's unexported errorString
type.
-
-You can construct one of these values with the errors.New
-function. It takes a string that it converts to an errors.errorString
-and returns as an error
value.
-
-Here's how you might use errors.New
:
-
-A caller passing a negative argument to Sqrt
receives a non-nil
-error
value (whose concrete representation is an
-errors.errorString
value). The caller can access the error string
-("math: square root of...") by calling the error
's
-Error
method, or by just printing it:
-
-The fmt package formats an error
value
-by calling its Error() string
method.
-
-It is the error implementation's responsibility to summarize the context.
-The error returned by os.Open
formats as "open /etc/passwd:
-permission denied," not just "permission denied." The error returned by our
-Sqrt
is missing information about the invalid argument.
-
-To add that information, a useful function is the fmt
package's
-Errorf
. It formats a string according to Printf
's
-rules and returns it as an error
created by
-errors.New
.
-
-In many cases fmt.Errorf
is good enough, but since
-error
is an interface, you can use arbitrary data structures as
-error values, to allow callers to inspect the details of the error.
-
-For instance, our hypothetical callers might want to recover the invalid
-argument passed to Sqrt
. We can enable that by defining a new
-error implementation instead of using errors.errorString
:
-
-A sophisticated caller can then use a
-type assertion to check for a
-NegativeSqrtError
and handle it specially, while callers that just
-pass the error to fmt.Println
or log.Fatal
will see
-no change in behavior.
-
-As another example, the json package specifies a
-SyntaxError
type that the json.Decode
function
-returns when it encounters a syntax error parsing a JSON blob.
-
-The Offset
field isn't even shown in the default formatting of the
-error, but callers can use it to add file and line information to their error
-messages:
-
-(This is a slightly simplified version of some
-actual code
-from the Camlistore project.)
-
- -
-The error
interface requires only a Error
method;
-specific error implementations might have additional methods. For instance, the
-net package returns errors of type
-error
, following the usual convention, but some of the error
-implementations have additional methods defined by the net.Error
-interface:
-
package net
-
-type Error interface {
-    error
-    Timeout() bool   // Is the error a timeout?
-    Temporary() bool // Is the error temporary?
-}
-
-
-Client code can test for a net.Error
with a type assertion and
-then distinguish transient network errors from permanent ones. For instance, a
-web crawler might sleep and retry when it encounters a temporary error and give
-up otherwise.
-
-Simplifying repetitive error handling
-
-In Go, error handling is important. The language's design and conventions
-encourage you to explicitly check for errors where they occur (as distinct from
-the convention in other languages of throwing exceptions and sometimes catching
-them). In some cases this makes Go code verbose, but fortunately there are some
-techniques you can use to minimize repetitive error handling.
-
-Consider an App Engine
-application with an HTTP handler that retrieves a record from the datastore and
-formats it with a template.
-
-{{code "progs/error2.go" `/func init/` `/STOP/`}}
-
-This function handles errors returned by the datastore.Get
-function and viewTemplate
's Execute
method. In both
-cases, it presents a simple error message to the user with the HTTP status code
-500 ("Internal Server Error"). This looks like a manageable amount of code, but
-add some more HTTP handlers and you quickly end up with many copies of
-identical error handling code.
-
-To reduce the repetition we can define our own HTTP appHandler
-type that includes an error
return value:
-
-Then we can change our viewRecord
function to return errors:
-
-This is simpler than the original version, but the http package doesn't understand functions that return
-error
.
-To fix this we can implement the http.Handler
interface's
-ServeHTTP
method on appHandler
:
-
-The ServeHTTP
method calls the appHandler
function
-and displays the returned error (if any) to the user. Notice that the method's
-receiver, fn
, is a function. (Go can do that!) The method invokes
-the function by calling the receiver in the expression fn(w, r)
.
-
-Now when registering viewRecord
with the http package we use the
-Handle
function (instead of HandleFunc
) as
-appHandler
is an http.Handler
(not an
-http.HandlerFunc
).
-
-With this basic error handling infrastructure in place, we can make it more
-user friendly. Rather than just displaying the error string, it would be better
-to give the user a simple error message with an appropriate HTTP status code,
-while logging the full error to the App Engine developer console for debugging
-purposes.
-
- -
-To do this we create an appError
struct containing an
-error
and some other fields:
-
-Next we modify the appHandler type to return *appError
values:
-
-(It's usually a mistake to pass back the concrete type of an error rather than
-error
, for reasons to be discussed in another article, but
-it's the right thing to do here because ServeHTTP
is the only
-place that sees the value and uses its contents.)
-
-And make appHandler
's ServeHTTP
method display the
-appError
's Message
to the user with the correct HTTP
-status Code
and log the full Error
to the developer
-console:
-
-Finally, we update viewRecord
to the new function signature and
-have it return more context when it encounters an error:
-
-This version of viewRecord
is the same length as the original, but
-now each of those lines has specific meaning and we are providing a friendlier
-user experience.
-
-It doesn't end there; we can further improve the error handling in our -application. Some ideas: -
- -appError
that stores the
-stack trace for easier debugging,
-appHandler
, logging the error
-to the console as "Critical," while telling the user "a serious error
-has occurred." This is a nice touch to avoid exposing the user to inscrutable
-error messages caused by programming errors.
-See the Defer, Panic, and Recover
-article for more details.
--Conclusion -
- --Proper error handling is an essential requirement of good software. By -employing the techniques described in this post you should be able to write -more reliable and succinct Go code. -
diff --git a/doc/articles/go_command.html b/doc/articles/go_command.html
new file mode 100644
index 000000000..1e9e70fd8
--- /dev/null
+++ b/doc/articles/go_command.html
@@ -0,0 +1,265 @@
+
+
+The Go distribution includes a command, named
+"go
", that
+automates the downloading, building, installation, and testing of Go packages
+and commands. This document talks about why we wrote a new command, what it
+is, what it's not, and how to use it.
You might have seen early Go talks in which Rob Pike jokes that the idea +for Go arose while waiting for a large Google server to compile. That +really was the motivation for Go: to build a language that worked well +for building the large software that Google writes and runs. It was +clear from the start that such a language must provide a way to +express dependencies between code libraries clearly, hence the package +grouping and the explicit import blocks. It was also clear from the +start that you might want arbitrary syntax for describing the code +being imported; this is why import paths are string literals.
+ +An explicit goal for Go from the beginning was to be able to build Go +code using only the information found in the source itself, not +needing to write a makefile or one of the many modern replacements for +makefiles. If Go needed a configuration file to explain how to build +your program, then Go would have failed.
+ +At first, there was no Go compiler, and the initial development +focused on building one and then building libraries for it. For +expedience, we postponed the automation of building Go code by using +make and writing makefiles. When compiling a single package involved +multiple invocations of the Go compiler, we even used a program to +write the makefiles for us. You can find it if you dig through the +repository history.
+ +The purpose of the new go command is our return to this ideal, that Go +programs should compile without configuration or additional effort on +the part of the developer beyond writing the necessary import +statements.
+ +The way to achieve the simplicity of a configuration-free system is to
+establish conventions. The system works only to the extent that those conventions
+are followed. When we first launched Go, many people published packages that
+had to be installed in certain places, under certain names, using certain build
+tools, in order to be used. That's understandable: that's the way it works in
+most other languages. Over the last few years we consistently reminded people
+about the goinstall
command
+(now replaced by go get
)
+and its conventions: first, that the import path is derived in a known way from
+the URL of the source code; second, that that the place to store the sources in
+the local file system is derived in a known way from the import path; third,
+that each directory in a source tree corresponds to a single package; and
+fourth, that the package is built using only information in the source code.
+Today, the vast majority of packages follow these conventions.
+The Go ecosystem is simpler and more powerful as a result.
We received many requests to allow a makefile in a package directory to +provide just a little extra configuration beyond what's in the source code. +But that would have introduced new rules. Because we did not accede to such +requests, we were able to write the go command and eliminate our use of make +or any other build system.
+ +It is important to understand that the go command is not a general +build tool. It cannot be configured and it does not attempt to build +anything but Go packages. These are important simplifying +assumptions: they simplify not only the implementation but also, more +important, the use of the tool itself.
+ +The go
command requires that code adheres to a few key,
+well-established conventions.
First, the import path is derived in an known way from the URL of the
+source code. For Bitbucket, GitHub, Google Code, and Launchpad, the
+root directory of the repository is identified by the repository's
+main URL, without the http://
prefix. Subdirectories are named by
+adding to that path. For example, the supplemental networking
+libraries for Go are obtained by running
+hg clone http://code.google.com/p/go.net ++ +
and thus the import path for the root directory of that repository is
+"code.google.com/p/go.net
". The websocket package is stored in a
+subdirectory, so its import path is
+"code.google.com/p/go.net/websocket
".
These paths are on the long side, but in exchange we get an +automatically managed name space for import paths and the ability for +a tool like the go command to look at an unfamiliar import path and +deduce where to obtain the source code.
+ +Second, the place to store sources in the local file system is derived
+in a known way from the import path. Specifically, the first choice
+is $GOPATH/src/<import-path>
. If $GOPATH
is
+unset, the go command will fall back to storing source code alongside the
+standard Go packages, in $GOROOT/src/pkg/<import-path>
.
+If $GOPATH
is set to a list of paths, the go command tries
+<dir>/src/<import-path>
for each of the directories in
+that list.
Each of those trees contains, by convention, a top-level directory named
+"bin
", for holding compiled executables, and a top-level directory
+named "pkg
", for holding compiled packages that can be imported,
+and the "src
" directory, for holding package source files.
+Imposing this structure lets us keep each of these directory trees
+self-contained: the compiled form and the sources are always near each
+other.
These naming conventions also let us work in the reverse direction, +from a directory name to its import path. This mapping is important +for many of the go command's subcommands, as we'll see below.
+ +Third, each directory in a source tree corresponds to a single +package. By restricting a directory to a single package, we don't have +to create hybrid import paths that specify first the directory and +then the package within that directory. Also, most file management +tools and UIs work on directories as fundamental units. Tying the +fundamental Go unit—the package—to file system structure means +that file system tools become Go package tools. Copying, moving, or +deleting a package corresponds to copying, moving, or deleting a +directory.
+ +Fourth, each package is built using only the information present in +the source files. This makes it much more likely that the tool will +be able to adapt to changing build environments and conditions. For +example, if we allowed extra configuration such as compiler flags or +command line recipes, then that configuration would need to be updated +each time the build tools changed; it would also be inherently tied +to the use of a specific tool chain.
+ +Finally, a quick tour of how to use the go command, to supplement
+the information in How to Write Go Code,
+which you might want to read first. Assuming you want
+to keep your source code separate from the Go distribution source
+tree, the first step is to set $GOPATH
, the one piece of global
+configuration that the go command needs. The $GOPATH
can be a
+list of directories, but by far the most common usage should be to set it to a
+single directory. In particular, you do not need a separate entry in
+$GOPATH
for each of your projects. One $GOPATH
can
+support many projects.
Here’s an example. Let’s say we decide to keep our Go code in the directory
+$HOME/mygo
. We need to create that directory and set
+$GOPATH
accordingly.
+$ mkdir $HOME/mygo +$ export GOPATH=$HOME/mygo +$ ++ +
Into this directory, we now add some source code. Suppose we want to use
+the indexing library from the codesearch project along with a left-leaning
+red-black tree. We can install both with the "go get
"
+subcommand:
+$ go get code.google.com/p/codesearch/index +$ go get github.com/petar/GoLLRB/llrb +$ ++ +
Both of these projects are now downloaded and installed into our
+$GOPATH
directory. The one tree now contains the two directories
+src/code.google.com/p/codesearch/index/
and
+src/github.com/petar/GoLLRB/llrb/
, along with the compiled
+packages (in pkg/
) for those libraries and their dependencies.
Because we used version control systems (Mercurial and Git) to check
+out the sources, the source tree also contains the other files in the
+corresponding repositories, such as related packages. The "go list
"
+subcommand lists the import paths corresponding to its arguments, and
+the pattern "./...
" means start in the current directory
+("./
") and find all packages below that directory
+("...
"):
+$ go list ./... +code.google.com/p/codesearch/cmd/cgrep +code.google.com/p/codesearch/cmd/cindex +code.google.com/p/codesearch/cmd/csearch +code.google.com/p/codesearch/index +code.google.com/p/codesearch/regexp +code.google.com/p/codesearch/sparse +github.com/petar/GoLLRB/example +github.com/petar/GoLLRB/llrb +$ ++ +
We can also test those packages:
+ ++$ go test ./... +? code.google.com/p/codesearch/cmd/cgrep [no test files] +? code.google.com/p/codesearch/cmd/cindex [no test files] +? code.google.com/p/codesearch/cmd/csearch [no test files] +ok code.google.com/p/codesearch/index 0.239s +ok code.google.com/p/codesearch/regexp 0.021s +? code.google.com/p/codesearch/sparse [no test files] +? github.com/petar/GoLLRB/example [no test files] +ok github.com/petar/GoLLRB/llrb 0.231s +$ ++ +
If a go subcommand is invoked with no paths listed, it operates on the +current directory:
+ ++$ cd $GOPATH/src/code.google.com/p/codesearch/regexp +$ go list +code.google.com/p/codesearch/regexp +$ go test -v +=== RUN TestNstateEnc +--- PASS: TestNstateEnc (0.00 seconds) +=== RUN TestMatch +--- PASS: TestMatch (0.01 seconds) +=== RUN TestGrep +--- PASS: TestGrep (0.00 seconds) +PASS +ok code.google.com/p/codesearch/regexp 0.021s +$ go install +$ ++ +
That "go install
" subcommand installs the latest copy of the
+package into the pkg directory. Because the go command can analyze the
+dependency graph, "go install
" also installs any packages that
+this package imports but that are out of date, recursively.
Notice that "go install
" was able to determine the name of the
+import path for the package in the current directory, because of the convention
+for directory naming. It would be a little more convenient if we could pick
+the name of the directory where we kept source code, and we probably wouldn't
+pick such a long name, but that ability would require additional configuration
+and complexity in the tool. Typing an extra directory name or two is a small
+price to pay for the increased simplicity and power.
As the example shows, it’s fine to work with packages from many different
+projects at once within a single $GOPATH
root directory.
As mentioned above, the go command is not a general-purpose build
+tool. In particular, it does not have any facility for generating Go
+source files during a build. Instead, if you want to use a tool like
+yacc or the protocol buffer compiler, you will need to write a
+makefile (or a configuration file for the build tool of your choice)
+to generate the Go files and then check those generated source files
+into your repository. This is more work for you, the package author,
+but it is significantly less work for your users, who can use
+"go get
" without needing to obtain and build
+any additional tools.
For more information, read How to Write Go Code +and see the go command documentation.
diff --git a/doc/articles/gobs_of_data.html b/doc/articles/gobs_of_data.html new file mode 100644 index 000000000..6b836b2c3 --- /dev/null +++ b/doc/articles/gobs_of_data.html @@ -0,0 +1,315 @@ + + ++To transmit a data structure across a network or to store it in a file, it must +be encoded and then decoded again. There are many encodings available, of +course: JSON, +XML, Google's +protocol buffers, and more. +And now there's another, provided by Go's gob +package. +
+ ++Why define a new encoding? It's a lot of work and redundant at that. Why not +just use one of the existing formats? Well, for one thing, we do! Go has +packages supporting all the encodings just mentioned (the +protocol buffer package is in +a separate repository but it's one of the most frequently downloaded). And for +many purposes, including communicating with tools and systems written in other +languages, they're the right choice. +
+ ++But for a Go-specific environment, such as communicating between two servers +written in Go, there's an opportunity to build something much easier to use and +possibly more efficient. +
+ ++Gobs work with the language in a way that an externally-defined, +language-independent encoding cannot. At the same time, there are lessons to be +learned from the existing systems. +
+ ++Goals +
+ ++The gob package was designed with a number of goals in mind. +
+ ++First, and most obvious, it had to be very easy to use. First, because Go has +reflection, there is no need for a separate interface definition language or +"protocol compiler". The data structure itself is all the package should need +to figure out how to encode and decode it. On the other hand, this approach +means that gobs will never work as well with other languages, but that's OK: +gobs are unashamedly Go-centric. +
+ ++Efficiency is also important. Textual representations, exemplified by XML and +JSON, are too slow to put at the center of an efficient communications network. +A binary encoding is necessary. +
+ ++Gob streams must be self-describing. Each gob stream, read from the beginning, +contains sufficient information that the entire stream can be parsed by an +agent that knows nothing a priori about its contents. This property means that +you will always be able to decode a gob stream stored in a file, even long +after you've forgotten what data it represents. +
+ ++There were also some things to learn from our experiences with Google protocol +buffers. +
+ ++Protocol buffer misfeatures +
+ ++Protocol buffers had a major effect on the design of gobs, but have three +features that were deliberately avoided. (Leaving aside the property that +protocol buffers aren't self-describing: if you don't know the data definition +used to encode a protocol buffer, you might not be able to parse it.) +
+ ++First, protocol buffers only work on the data type we call a struct in Go. You +can't encode an integer or array at the top level, only a struct with fields +inside it. That seems a pointless restriction, at least in Go. If all you want +to send is an array of integers, why should you have to put it into a +struct first? +
+ +
+Next, a protocol buffer definition may specify that fields T.x
and
+T.y
are required to be present whenever a value of type
+T
is encoded or decoded. Although such required fields may seem
+like a good idea, they are costly to implement because the codec must maintain a
+separate data structure while encoding and decoding, to be able to report when
+required fields are missing. They're also a maintenance problem. Over time, one
+may want to modify the data definition to remove a required field, but that may
+cause existing clients of the data to crash. It's better not to have them in the
+encoding at all. (Protocol buffers also have optional fields. But if we don't
+have required fields, all fields are optional and that's that. There will be
+more to say about optional fields a little later.)
+
+The third protocol buffer misfeature is default values. If a protocol buffer +omits the value for a "defaulted" field, then the decoded structure behaves as +if the field were set to that value. This idea works nicely when you have +getter and setter methods to control access to the field, but is harder to +handle cleanly when the container is just a plain idiomatic struct. Required +fields are also tricky to implement: where does one define the default values, +what types do they have (is text UTF-8? uninterpreted bytes? how many bits in a +float?) and despite the apparent simplicity, there were a number of +complications in their design and implementation for protocol buffers. We +decided to leave them out of gobs and fall back to Go's trivial but effective +defaulting rule: unless you set something otherwise, it has the "zero value" +for that type - and it doesn't need to be transmitted. +
+ ++So gobs end up looking like a sort of generalized, simplified protocol buffer. +How do they work? +
+ ++Values +
+ +
+The encoded gob data isn't about int8
s and uint16
s.
+Instead, somewhat analogous to constants in Go, its integer values are abstract,
+sizeless numbers, either signed or unsigned. When you encode an
+int8
, its value is transmitted as an unsized, variable-length
+integer. When you encode an int64
, its value is also transmitted as
+an unsized, variable-length integer. (Signed and unsigned are treated
+distinctly, but the same unsized-ness applies to unsigned values too.) If both
+have the value 7, the bits sent on the wire will be identical. When the receiver
+decodes that value, it puts it into the receiver's variable, which may be of
+arbitrary integer type. Thus an encoder may send a 7 that came from an
+int8
, but the receiver may store it in an int64
. This
+is fine: the value is an integer and as long as it fits, everything works. (If
+it doesn't fit, an error results.) This decoupling from the size of the variable
+gives some flexibility to the encoding: we can expand the type of the integer
+variable as the software evolves, but still be able to decode old data.
+
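A small self-contained sketch of that decoupling, using the encoding/gob package (the values and variable names are arbitrary):

package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

func main() {
	var buf bytes.Buffer

	var small int8 = 7 // encoded as an abstract, sizeless integer
	if err := gob.NewEncoder(&buf).Encode(small); err != nil {
		log.Fatal(err)
	}

	var big int64 // any integer type that can hold the value will do
	if err := gob.NewDecoder(&buf).Decode(&big); err != nil {
		log.Fatal(err)
	}
	fmt.Println(big) // prints 7
}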
+This flexibility also applies to pointers. Before transmission, all pointers are
+flattened. Values of type int8
, *int8
,
+**int8
, ****int8
, etc. are all transmitted as an
+integer value, which may then be stored in int
of any size, or
+*int
, or ******int
, etc. Again, this allows for
+flexibility.
+
+Flexibility also happens because, when decoding a struct, only those fields +that are sent by the encoder are stored in the destination. Given the value +
+ +{{code "/doc/progs/gobs1.go" `/type T/` `/STOP/`}} + +
+the encoding of t
sends only the 7 and 8. Because it's zero, the
+value of Y
isn't even sent; there's no need to send a zero value.
+
+The receiver could instead decode the value into this structure: +
+ +{{code "/doc/progs/gobs1.go" `/type U/` `/STOP/`}} + +
+and acquire a value of u
with only X
set (to the
+address of an int8
variable set to 7); the Z
field is
+ignored - where would you put it? When decoding structs, fields are matched by
+name and compatible type, and only fields that exist in both are affected. This
+simple approach finesses the "optional field" problem: as the type
+T
evolves by adding fields, out of date receivers will still
+function with the part of the type they recognize. Thus gobs provide the
+important result of optional fields - extensibility - without any additional
+mechanism or notation.
+
+From integers we can build all the other types: bytes, strings, arrays, slices, +maps, even floats. Floating-point values are represented by their IEEE 754 +floating-point bit pattern, stored as an integer, which works fine as long as +you know their type, which we always do. By the way, that integer is sent in +byte-reversed order because common values of floating-point numbers, such as +small integers, have a lot of zeros at the low end that we can avoid +transmitting. +
+ ++One nice feature of gobs that Go makes possible is that they allow you to define +your own encoding by having your type satisfy the +GobEncoder and +GobDecoder interfaces, in a manner +analogous to the JSON package's +Marshaler and +Unmarshaler and also to the +Stringer interface from +package fmt. This facility makes it possible to +represent special features, enforce constraints, or hide secrets when you +transmit data. See the documentation for +details. +
+ ++Types on the wire +
+ ++The first time you send a given type, the gob package includes in the data +stream a description of that type. In fact, what happens is that the encoder is +used to encode, in the standard gob encoding format, an internal struct that +describes the type and gives it a unique number. (Basic types, plus the layout +of the type description structure, are predefined by the software for +bootstrapping.) After the type is described, it can be referenced by its type +number. +
+ +
+Thus when we send our first type T
, the gob encoder sends a
+description of T
and tags it with a type number, say 127. All
+values, including the first, are then prefixed by that number, so a stream of
+T
values looks like:
+
+("define type id" 127, definition of type T)(127, T value)(127, T value), ... ++ +
+These type numbers make it possible to describe recursive types and send values +of those types. Thus gobs can encode types such as trees: +
+ +{{code "/doc/progs/gobs1.go" `/type Node/` `/STOP/`}} + ++(It's an exercise for the reader to discover how the zero-defaulting rule makes +this work, even though gobs don't represent pointers.) +
+ ++With the type information, a gob stream is fully self-describing except for the +set of bootstrap types, which is a well-defined starting point. +
+ ++Compiling a machine +
+ ++The first time you encode a value of a given type, the gob package builds a +little interpreted machine specific to that data type. It uses reflection on +the type to construct that machine, but once the machine is built it does not +depend on reflection. The machine uses package unsafe and some trickery to +convert the data into the encoded bytes at high speed. It could use reflection +and avoid unsafe, but would be significantly slower. (A similar high-speed +approach is taken by the protocol buffer support for Go, whose design was +influenced by the implementation of gobs.) Subsequent values of the same type +use the already-compiled machine, so they can be encoded right away. +
+ ++Decoding is similar but harder. When you decode a value, the gob package holds +a byte slice representing a value of a given encoder-defined type to decode, +plus a Go value into which to decode it. The gob package builds a machine for +that pair: the gob type sent on the wire crossed with the Go type provided for +decoding. Once that decoding machine is built, though, it's again a +reflectionless engine that uses unsafe methods to get maximum speed. +
+ ++Use +
+ ++There's a lot going on under the hood, but the result is an efficient, +easy-to-use encoding system for transmitting data. Here's a complete example +showing differing encoded and decoded types. Note how easy it is to send and +receive values; all you need to do is present values and variables to the +gob package and it does all the work. +
+ +{{code "/doc/progs/gobs2.go" `/package main/` `$`}} + ++You can compile and run this example code in the +Go Playground. +
+ ++The rpc package builds on gobs to turn this +encode/decode automation into transport for method calls across the network. +That's a subject for another article. +
+ ++Details +
+ ++The gob package documentation, especially the +file doc.go, expands on many of the +details described here and includes a full worked example showing how the +encoding represents data. If you are interested in the innards of the gob +implementation, that's a good place to start. +
diff --git a/doc/articles/godoc_documenting_go_code.html b/doc/articles/godoc_documenting_go_code.html new file mode 100644 index 000000000..ca66076ad --- /dev/null +++ b/doc/articles/godoc_documenting_go_code.html @@ -0,0 +1,139 @@ + + ++The Go project takes documentation seriously. Documentation is a huge part of +making software accessible and maintainable. Of course it must be well-written +and accurate, but it also must be easy to write and to maintain. Ideally, it +should be coupled to the code itself so the documentation evolves along with the +code. The easier it is for programmers to produce good documentation, the better +for everyone. +
+ ++To that end, we have developed the godoc documentation +tool. This article describes godoc's approach to documentation, and explains how +you can use our conventions and tools to write good documentation for your own +projects. +
+ ++Godoc parses Go source code - including comments - and produces documentation as +HTML or plain text. The end result is documentation tightly coupled with the +code it documents. For example, through godoc's web interface you can navigate +from a function's documentation to its +implementation with one click. +
+ ++Godoc is conceptually related to Python's +Docstring and Java's +Javadoc, +but its design is simpler. The comments read by godoc are not language +constructs (as with Docstring) nor must they have their own machine-readable +syntax (as with Javadoc). Godoc comments are just good comments, the sort you +would want to read even if godoc didn't exist. +
+ +
+The convention is simple: to document a type, variable, constant, function, or
+even a package, write a regular comment directly preceding its declaration, with
+no intervening blank line. Godoc will then present that comment as text
+alongside the item it documents. For example, this is the documentation for the
+fmt
package's Fprint
+function:
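The Fprint listing referred to here comes from the fmt sources and isn't reproduced in this diff. As an illustrative, hypothetical example of the same convention:

// Reverse returns s with its characters in reverse order.
// The comment is a complete sentence beginning with the name it describes,
// so godoc can present it alongside the declaration.
func Reverse(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return string(r)
}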
+
+Notice this comment is a complete sentence that begins with the name of the +element it describes. This important convention allows us to generate +documentation in a variety of formats, from plain text to HTML to UNIX man +pages, and makes it read better when tools truncate it for brevity, such as when +they extract the first line or sentence. +
+ +
+Comments on package declarations should provide general package documentation.
+These comments can be short, like the sort
+package's brief description:
+
+They can also be detailed like the gob package's +overview. That package uses another convention for packages +that need large amounts of introductory documentation: the package comment is +placed in its own file, doc.go, which +contains only those comments and a package clause. +
+ ++When writing package comments of any size, keep in mind that their first +sentence will appear in godoc's package list. +
+ +
+Comments that are not adjacent to a top-level declaration are omitted from
+godoc's output, with one notable exception. Top-level comments that begin with
+the word "BUG(who)”
are recognized as known bugs, and included in
+the "Bugs” section of the package documentation. The "who” part should be the
+user name of someone who could provide more information. For example, this is a
+known issue from the bytes package:
+
+// BUG(r): The rule Title uses for word boundaries does not handle Unicode punctuation properly. ++ +
+Godoc treats executable commands somewhat differently. Instead of inspecting the +command source code, it looks for a Go source file belonging to the special +package "documentation”. The comment on the "package documentation” clause is +used as the command's documentation. For example, see the +godoc documentation and its corresponding +doc.go file. +
+ ++There are a few formatting rules that Godoc uses when converting comments to +HTML: +
+ ++Note that none of these rules requires you to do anything out of the ordinary. +
+ ++In fact, the best thing about godoc's minimal approach is how easy it is to use. +As a result, a lot of Go code, including all of the standard library, already +follows the conventions. +
+ +
+Your own code can present good documentation just by having comments as
+described above. Any Go packages installed inside $GOROOT/src/pkg
+and any GOPATH
work spaces will already be accessible via godoc's
+command-line and HTTP interfaces, and you can specify additional paths for
+indexing via the -path
flag or just by running "godoc ."
+in the source directory. See the godoc documentation
+for more details.
+
+Newcomers to Go wonder why the declaration syntax is different from the +tradition established in the C family. In this post we'll compare the +two approaches and explain why Go's declarations look as they do. +
+ ++C syntax +
+ ++First, let's talk about C syntax. C took an unusual and clever approach +to declaration syntax. Instead of describing the types with special +syntax, one writes an expression involving the item being declared, and +states what type that expression will have. Thus +
+ ++int x; ++ +
+declares x to be an int: the expression 'x' will have type int. In +general, to figure out how to write the type of a new variable, write an +expression involving that variable that evaluates to a basic type, then +put the basic type on the left and the expression on the right. +
+ ++Thus, the declarations +
+ ++int *p; +int a[3]; ++ +
+state that p is a pointer to int because '*p' has type int, and that a +is an array of ints because a[3] (ignoring the particular index value, +which is punned to be the size of the array) has type int. +
+ ++What about functions? Originally, C's function declarations wrote the +types of the arguments outside the parens, like this: +
+ ++int main(argc, argv) + int argc; + char *argv[]; +{ /* ... */ } ++ +
+Again, we see that main is a function because the expression main(argc, +argv) returns an int. In modern notation we'd write +
+ ++int main(int argc, char *argv[]) { /* ... */ } ++ +
+but the basic structure is the same. +
+ ++This is a clever syntactic idea that works well for simple types but can +get confusing fast. The famous example is declaring a function pointer. +Follow the rules and you get this: +
+ ++int (*fp)(int a, int b); ++ +
+Here, fp is a pointer to a function because if you write the expression +(*fp)(a, b) you'll call a function that returns int. What if one of fp's +arguments is itself a function? +
+ ++int (*fp)(int (*ff)(int x, int y), int b) ++ +
+That's starting to get hard to read. +
+ ++Of course, we can leave out the name of the parameters when we declare a +function, so main can be declared +
+ ++int main(int, char *[]) ++ +
+Recall that argv is declared like this, +
+ ++char *argv[] ++ +
+so you drop the name from the middle of its declaration to construct +its type. It's not obvious, though, that you declare something of type +char *[] by putting its name in the middle. +
+ ++And look what happens to fp's declaration if you don't name the +parameters: +
+ ++int (*fp)(int (*)(int, int), int) ++ +
+Not only is it not obvious where to put the name inside +
+ ++int (*)(int, int) ++ +
+it's not exactly clear that it's a function pointer declaration at all. +And what if the return type is a function pointer? +
+ ++int (*(*fp)(int (*)(int, int), int))(int, int) ++ +
+It's hard even to see that this declaration is about fp. +
+ ++You can construct more elaborate examples but these should illustrate +some of the difficulties that C's declaration syntax can introduce. +
+ ++There's one more point that needs to be made, though. Because type and +declaration syntax are the same, it can be difficult to parse +expressions with types in the middle. This is why, for instance, C casts +always parenthesize the type, as in +
+ ++(int)M_PI ++ +
+Go syntax +
+ ++Languages outside the C family usually use a distinct type syntax in +declarations. Although it's a separate point, the name usually comes +first, often followed by a colon. Thus our examples above become +something like (in a fictional but illustrative language) +
+ ++x: int +p: pointer to int +a: array[3] of int ++ +
+These declarations are clear, if verbose - you just read them left to +right. Go takes its cue from here, but in the interests of brevity it +drops the colon and removes some of the keywords: +
+ ++x int +p *int +a [3]int ++ +
+There is no direct correspondence between the look of [3]int and how to +use a in an expression. (We'll come back to pointers in the next +section.) You gain clarity at the cost of a separate syntax. +
+ ++Now consider functions. Let's transcribe the declaration for main, even +though the main function in Go takes no arguments: +
+ ++func main(argc int, argv *[]byte) int ++ +
+Superficially that's not much different from C, but it reads well from +left to right: +
+ ++function main takes an int and a pointer to a slice of bytes and returns an int. +
+ ++Drop the parameter names and it's just as clear - they're always first +so there's no confusion. +
+ ++func main(int, *[]byte) int ++ +
+One value of this left-to-right style is how well it works as the types +become more complex. Here's a declaration of a function variable +(analogous to a function pointer in C): +
+ ++f func(func(int,int) int, int) int ++ +
+Or if f returns a function: +
+ ++f func(func(int,int) int, int) func(int, int) int ++ +
+It still reads clearly, from left to right, and it's always obvious +which name is being declared - the name comes first. +
+ ++The distinction between type and expression syntax makes it easy to +write and invoke closures in Go: +
+ ++sum := func(a, b int) int { return a+b } (3, 4) ++ +
+Pointers +
+ ++Pointers are the exception that proves the rule. Notice that in arrays +and slices, for instance, Go's type syntax puts the brackets on the left +of the type but the expression syntax puts them on the right of the +expression: +
+ ++var a []int +x = a[1] ++ +
+For familiarity, Go's pointers use the * notation from C, but we could +not bring ourselves to make a similar reversal for pointer types. Thus +pointers work like this +
+ ++var p *int +x = *p ++ +
+We couldn't say +
+ ++var p *int +x = p* ++ +
+because that postfix * would conflate with multiplication. We could have +used the Pascal ^, for example: +
+ ++var p ^int +x = p^ ++ +
+and perhaps we should have (and chosen another operator for xor), +because the prefix asterisk on both types and expressions complicates +things in a number of ways. For instance, although one can write +
+ ++[]int("hi") ++ +
+as a conversion, one must parenthesize the type if it starts with a *: +
+ ++(*int)(nil) ++ +
+Had we been willing to give up * as pointer syntax, those parentheses +would be unnecessary. +
+ ++So Go's pointer syntax is tied to the familiar C form, but those ties +mean that we cannot break completely from using parentheses to +disambiguate types and expressions in the grammar. +
+ ++Overall, though, we believe Go's type syntax is easier to understand +than C's, especially when things get complicated. +
+ ++Notes +
+ ++Go's declarations read left to right. It's been pointed out that C's +read in a spiral! See +The "Clockwise/Spiral Rule" by David Anderson. +
diff --git a/doc/articles/image-20.png b/doc/articles/image-20.png new file mode 100644 index 000000000..063e43064 Binary files /dev/null and b/doc/articles/image-20.png differ diff --git a/doc/articles/image-2a.png b/doc/articles/image-2a.png new file mode 100644 index 000000000..3f1c0afff Binary files /dev/null and b/doc/articles/image-2a.png differ diff --git a/doc/articles/image-2b.png b/doc/articles/image-2b.png new file mode 100644 index 000000000..32b247011 Binary files /dev/null and b/doc/articles/image-2b.png differ diff --git a/doc/articles/image-2c.png b/doc/articles/image-2c.png new file mode 100644 index 000000000..f9abce5b5 Binary files /dev/null and b/doc/articles/image-2c.png differ diff --git a/doc/articles/image-2d.png b/doc/articles/image-2d.png new file mode 100644 index 000000000..ed0a9f92c Binary files /dev/null and b/doc/articles/image-2d.png differ diff --git a/doc/articles/image-2e.png b/doc/articles/image-2e.png new file mode 100644 index 000000000..483b208e3 Binary files /dev/null and b/doc/articles/image-2e.png differ diff --git a/doc/articles/image-2f.png b/doc/articles/image-2f.png new file mode 100644 index 000000000..3dce02d5f Binary files /dev/null and b/doc/articles/image-2f.png differ diff --git a/doc/articles/image_draw.html b/doc/articles/image_draw.html new file mode 100644 index 000000000..848b65982 --- /dev/null +++ b/doc/articles/image_draw.html @@ -0,0 +1,222 @@ + + ++Package image/draw defines +only one operation: drawing a source image onto a destination +image, through an optional mask image. This one operation is +surprisingly versatile and can perform a number of common image +manipulation tasks elegantly and efficiently. +
+ +
+Composition is performed pixel by pixel in the style of the Plan 9
+graphics library and the X Render extension. The model is based on
+the classic "Compositing Digital Images" paper by Porter and Duff,
+with an additional mask parameter: dst = (src IN mask) OP dst
.
+For a fully opaque mask, this reduces to the original Porter-Duff
+formula: dst = src OP dst
. In Go, a nil mask image is equivalent
+to an infinitely sized, fully opaque mask image.
+
+The Porter-Duff paper presented
+12 different composition operators,
+but with an explicit mask, only 2 of these are needed in practice:
+source-over-destination and source. In Go, these operators are
+represented by the Over
and Src
constants. The Over
operator
+performs the natural layering of a source image over a destination
+image: the change to the destination image is smaller where the
+source (after masking) is more transparent (that is, has lower
+alpha). The Src
operator merely copies the source (after masking)
+with no regard for the destination image's original content. For
+fully opaque source and mask images, the two operators produce the
+same output, but the Src
operator is usually faster.
+
Geometric Alignment
+ +
+Composition requires associating destination pixels with source and
+mask pixels. Obviously, this requires destination, source and mask
+images, and a composition operator, but it also requires specifying
+what rectangle of each image to use. Not every drawing should write
+to the entire destination: when updating an animating image, it is
+more efficient to only draw the parts of the image that have
+changed. Not every drawing should read from the entire source: when
+using a sprite that combines many small images into one large one,
+only a part of the image is needed. Not every drawing should read
+from the entire mask: a mask image that collects a font's glyphs is
+similar to a sprite. Thus, drawing also needs to know three
+rectangles, one for each image. Since each rectangle has the same
+width and height, it suffices to pass a destination rectangle r
+and two points sp
and mp
: the source rectangle is equal to r
+translated so that r.Min
in the destination image aligns with
+sp
in the source image, and similarly for mp
. The effective
+rectangle is also clipped to each image's bounds in their
+respective co-ordinate space.
+
+
+
+The DrawMask
+function takes seven arguments, but an explicit mask and mask-point
+are usually unnecessary, so the
+Draw
function takes five:
+
+// Draw calls DrawMask with a nil mask. +func Draw(dst Image, r image.Rectangle, src image.Image, sp image.Point, op Op) +func DrawMask(dst Image, r image.Rectangle, src image.Image, sp image.Point, + mask image.Image, mp image.Point, op Op) ++ +
+The destination image must be mutable, so the image/draw package
+defines a draw.Image
+interface which has a Set
method.
+
Filling a Rectangle
+ +
+To fill a rectangle with a solid color, use an image.Uniform
+source. The Uniform
type re-interprets a Color
as a
+practically infinite-sized Image
of that color. For those
+familiar with the design of Plan 9's draw library, there is no need
+for an explicit "repeat bit" in Go's slice-based image types; the
+concept is subsumed by Uniform
.
+
+To initialize a new image to all-blue: +
+ +{{code "/doc/progs/image_draw.go" `/BLUE/` `/STOP/`}} + +
+To reset an image to transparent (or black, if the destination
+image's color model cannot represent transparency), use
+image.Transparent
, which is an image.Uniform
:
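+A minimal sketch, assuming the destination is a draw.Image named m:
+
+draw.Draw(m, m.Bounds(), image.Transparent, image.ZP, draw.Src)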
+
+
+
Copying an Image
+ +
+To copy from a rectangle sr
in the source image to a rectangle
+starting at a point dp
in the destination, convert the source
+rectangle into the destination image's co-ordinate space:
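+For example, assuming dst, src, sr and dp are already in scope:
+
+r := image.Rectangle{dp, dp.Add(sr.Size())}
+draw.Draw(dst, r, src, sr.Min, draw.Src)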
+
+Alternatively: +
+ +{{code "/doc/progs/image_draw.go" `/RECT2/` `/STOP/`}} + +
+To copy the entire source image, use sr = src.Bounds()
.
+
+
+
Scrolling an Image
+ ++Scrolling an image is just copying an image to itself, with +different destination and source rectangles. Overlapping +destination and source images are perfectly valid, just as Go's +built-in copy function can handle overlapping destination and +source slices. To scroll an image m by 20 pixels: +
+ +{{code "/doc/progs/image_draw.go" `/SCROLL/` `/STOP/`}} + +Converting an Image to RGBA
+ +
+The result of decoding an image format might not be an
+image.RGBA
: decoding a GIF results in an image.Paletted
,
+decoding a JPEG results in a ycbcr.YCbCr
, and the result of
+decoding a PNG depends on the image data. To convert any image to
+an image.RGBA
:
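+One way to do it, assuming src is the decoded image, is to copy it into a
+freshly allocated image.RGBA:
+
+b := src.Bounds()
+m := image.NewRGBA(image.Rect(0, 0, b.Dx(), b.Dy()))
+draw.Draw(m, m.Bounds(), src, b.Min, draw.Src)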
+
+
+
Drawing Through a Mask
+ +
+To draw an image through a circular mask with center p
and radius
+r
:
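+One way to express such a mask is a type whose At method returns opaque alpha
+inside the circle and zero alpha outside it; the circle type and the dst and src
+names below are just for illustration:
+
+type circle struct {
+    p image.Point
+    r int
+}
+
+func (c *circle) ColorModel() color.Model { return color.AlphaModel }
+
+func (c *circle) Bounds() image.Rectangle {
+    return image.Rect(c.p.X-c.r, c.p.Y-c.r, c.p.X+c.r, c.p.Y+c.r)
+}
+
+func (c *circle) At(x, y int) color.Color {
+    xx, yy, rr := float64(x-c.p.X)+0.5, float64(y-c.p.Y)+0.5, float64(c.r)
+    if xx*xx+yy*yy < rr*rr {
+        return color.Alpha{255}
+    }
+    return color.Alpha{0}
+}
+
+draw.DrawMask(dst, dst.Bounds(), src, image.ZP, &circle{p, r}, image.ZP, draw.Over)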
+
+
+
Drawing Font Glyphs
+ +
+To draw a font glyph in blue starting from a point p
, draw with
+an image.Uniform
source and an image.Alpha mask
. For
+simplicity, we aren't performing any sub-pixel positioning or
+rendering, or correcting for a font's height above a baseline.
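+Assuming mask holds the glyph's alpha values and glyphRect places the glyph
+relative to p (both hypothetical names here), the call is roughly:
+
+blue := image.NewUniform(color.RGBA{0, 0, 255, 255})
+draw.DrawMask(dst, glyphRect, blue, image.ZP, mask, mask.Bounds().Min, draw.Over)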
+
+
+
Performance
+ +
+The image/draw package implementation demonstrates how to provide
+an image manipulation function that is general purpose, yet
+efficient for common cases. The DrawMask
function takes arguments
+of interface types, but immediately makes type assertions that its
+arguments are of specific struct types, corresponding to common
+operations like drawing one image.RGBA
image onto another, or
+drawing an image.Alpha
mask (such as a font glyph) onto an
+image.RGBA
image. If a type assertion succeeds, that type
+information is used to run a specialized implementation of the
+general algorithm. If the assertions fail, the fallback code path
+uses the generic At
and Set
methods. The fast-paths are purely
+a performance optimization; the resultant destination image is the
+same either way. In practice, only a small number of special cases
+are necessary to support typical applications.
+
+JSON (JavaScript Object Notation) is a simple data interchange format. +Syntactically it resembles the objects and lists of JavaScript. It is most +commonly used for communication between web back-ends and JavaScript programs +running in the browser, but it is used in many other places, too. Its home page, +json.org, provides a wonderfully clear and concise +definition of the standard. +
+ ++With the json package it's a snap to read and +write JSON data from your Go programs. +
+ ++Encoding +
+ +
+To encode JSON data we use the
+Marshal
function.
+
+func Marshal(v interface{}) ([]byte, error) ++ +
+Given the Go data structure, Message
,
+
+and an instance of Message
+
+we can marshal a JSON-encoded version of m using json.Marshal
:
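+A sketch of the pieces referred to above; the struct and value here match the
+output shown below:
+
+type Message struct {
+    Name string
+    Body string
+    Time int64
+}
+
+m := Message{"Alice", "Hello", 1294706395881547000}
+b, err := json.Marshal(m)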
+
+If all is well, err
will be nil
and b
+will be a []byte
containing this JSON data:
+
+b == []byte(`{"Name":"Alice","Body":"Hello","Time":1294706395881547000}`) ++ +
+Only data structures that can be represented as valid JSON will be encoded: +
+ +map[string]T
(where T
is any Go type
+supported by the json package).
+Marshal
+to go into an infinite loop.
+nil
).
+
+
+
+The json package only accesses the exported fields of struct types (those that
+begin with an uppercase letter). Therefore only the exported fields of a
+struct will be present in the JSON output.
+
+ ++Decoding +
+ +
+To decode JSON data we use the
+Unmarshal
function.
+
+func Unmarshal(data []byte, v interface{}) error ++ +
+We must first create a place where the decoded data will be stored +
+ +{{code "/doc/progs/json1.go" `/var m Message/`}} + +
+and call json.Unmarshal
, passing it a []byte
of JSON
+data and a pointer to m
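+which in code is:
+
+err := json.Unmarshal(b, &m)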
+
+If b
contains valid JSON that fits in m
, after the
+call err
will be nil
and the data from b
+will have been stored in the struct m
, as if by an assignment
+like:
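+that is, roughly:
+
+m = Message{
+    Name: "Alice",
+    Body: "Hello",
+    Time: 1294706395881547000,
+}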
+
+How does Unmarshal
identify the fields in which to store the
+decoded data? For a given JSON key "Foo"
, Unmarshal
+will look through the destination struct's fields to find (in order of
+preference):
+
"Foo"
(see the
+Go spec for more on struct tags),
+"Foo"
, or
+"FOO"
or "FoO"
or some other
+case-insensitive match of "Foo"
.
++What happens when the structure of the JSON data doesn't exactly match the Go +type? +
+ +{{code "/doc/progs/json1.go" `/"Food":"Pickle"/` `/STOP/`}} + +
+Unmarshal
will decode only the fields that it can find in the
+destination type. In this case, only the Name field of m will be populated,
+and the Food field will be ignored. This behavior is particularly useful when
+you wish to pick only a few specific fields out of a large JSON blob. It also
+means that any unexported fields in the destination struct will be unaffected
+by Unmarshal
.
+
+But what if you don't know the structure of your JSON data beforehand? +
+ ++Generic JSON with interface{} +
+ +
+The interface{}
(empty interface) type describes an interface with
+zero methods. Every Go type implements at least zero methods and therefore
+satisfies the empty interface.
+
+The empty interface serves as a general container type: +
+ +{{code "/doc/progs/json2.go" `/var i interface{}/` `/STOP/`}} + ++A type assertion accesses the underlying concrete type: +
+ +{{code "/doc/progs/json2.go" `/r := i/` `/STOP/`}} + ++Or, if the underlying type is unknown, a type switch determines the type: +
+ +{{code "/doc/progs/json2.go" `/switch v/` `/STOP/`}} + + +The json package usesmap[string]interface{}
and
+[]interface{}
values to store arbitrary JSON objects and arrays;
+it will happily unmarshal any valid JSON blob into a plain
+interface{}
value. The default concrete Go types are:
+
+bool
for JSON booleans,
+float64
for JSON numbers,
+string
for JSON strings, and
+nil
for JSON null.
++Decoding arbitrary data +
+ +
+Consider this JSON data, stored in the variable b
:
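+For instance, b might hold an object like this (the particular data is just an
+illustration):
+
+b := []byte(`{"Name":"Wednesday","Age":6,"Parents":["Gomez","Morticia"]}`)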
+
+Without knowing this data's structure, we can decode it into an
+interface{}
value with Unmarshal
:
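+that is:
+
+var f interface{}
+err := json.Unmarshal(b, &f)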
+
+At this point the Go value in f
would be a map whose keys are
+strings and whose values are themselves stored as empty interface values:
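+Continuing the illustration above, f would now hold something like:
+
+f = map[string]interface{}{
+    "Name":    "Wednesday",
+    "Age":     float64(6),
+    "Parents": []interface{}{"Gomez", "Morticia"},
+}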
+
+To access this data we can use a type assertion to access f
's
+underlying map[string]interface{}
:
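+for example:
+
+m := f.(map[string]interface{})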
+
+We can then iterate through the map with a range statement and use a type switch +to access its values as their concrete types: +
+ +{{code "/doc/progs/json3.go" `/for k, v/` `/STOP/`}} + ++In this way you can work with unknown JSON data while still enjoying the +benefits of type safety. +
+ ++Reference Types +
+ ++Let's define a Go type to contain the data from the previous example: +
+ +{{code "/doc/progs/json4.go" `/type FamilyMember/` `/STOP/`}} + +{{code "/doc/progs/json4.go" `/var m FamilyMember/` `/STOP/`}} + +
+Unmarshaling that data into a FamilyMember
value works as
+expected, but if we look closely we can see a remarkable thing has happened.
+With the var statement we allocated a FamilyMember
struct, and
+then provided a pointer to that value to Unmarshal
, but at that
+time the Parents
field was a nil
slice value. To
+populate the Parents
field, Unmarshal
allocated a new
+slice behind the scenes. This is typical of how Unmarshal
works
+with the supported reference types (pointers, slices, and maps).
+
+Consider unmarshaling into this data structure: +
+ ++type Foo struct { + Bar *Bar +} ++ +
+If there were a Bar
field in the JSON object,
+Unmarshal
would allocate a new Bar
and populate it.
+If not, Bar
would be left as a nil
pointer.
+
+From this a useful pattern arises: if you have an application that receives a
+few distinct message types, you might define a "receiver" structure like
+
+ ++type IncomingMessage struct { + Cmd *Command + Msg *Message +} ++ +
+and the sending party can populate the Cmd
field and/or the
+Msg
field of the top-level JSON object, depending on the type of
+message they want to communicate. Unmarshal
, when decoding the
+JSON into an IncomingMessage
struct, will only allocate the data
+structures present in the JSON data. To know which messages to process, the
+programmer need simply test that either Cmd
or Msg
is
+not nil
.
+
+Streaming Encoders and Decoders +
+ +
+The json package provides Decoder
and Encoder
types
+to support the common operation of reading and writing streams of JSON data.
+The NewDecoder
and NewEncoder
functions wrap the
+io.Reader
and
+io.Writer
interface types.
+
+func NewDecoder(r io.Reader) *Decoder +func NewEncoder(w io.Writer) *Encoder ++ +
+Here's an example program that reads a series of JSON objects from standard
+input, removes all but the Name
field from each object, and then
+writes the objects to standard output:
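+A sketch of such a program, with error handling kept minimal for brevity:
+
+package main
+
+import (
+    "encoding/json"
+    "log"
+    "os"
+)
+
+func main() {
+    dec := json.NewDecoder(os.Stdin)
+    enc := json.NewEncoder(os.Stdout)
+    for {
+        var v map[string]interface{}
+        if err := dec.Decode(&v); err != nil {
+            log.Println(err)
+            return
+        }
+        // Keep only the "Name" field of each object.
+        for k := range v {
+            if k != "Name" {
+                delete(v, k)
+            }
+        }
+        if err := enc.Encode(&v); err != nil {
+            log.Println(err)
+        }
+    }
+}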
+
+Due to the ubiquity of Readers and Writers, these Encoder
and
+Decoder
types can be used in a broad range of scenarios, such as
+reading and writing to HTTP connections, WebSockets, or files.
+
+References +
+ ++For more information see the json package documentation. For an example usage of +json see the source files of the jsonrpc package. +
diff --git a/doc/articles/laws_of_reflection.html b/doc/articles/laws_of_reflection.html index 4df70e0d2..a6175f73c 100644 --- a/doc/articles/laws_of_reflection.html +++ b/doc/articles/laws_of_reflection.html @@ -1,11 +1,7 @@ - -
Reflection in computing is the
@@ -36,11 +32,7 @@ exactly one type known and fixed at compile time: int
,
and so on. If we declare
type MyInt int - -var i int -var j MyInt+{{code "/doc/progs/interface.go" `/type MyInt/` `/STOP/`}}
then i
has type int
and j
@@ -60,16 +52,7 @@ interface's methods. A well-known pair of examples is
"http://golang.org/pkg/io/">io package:
// Reader is the interface that wraps the basic Read method. -type Reader interface { - Read(p []byte) (n int, err error) -} - -// Writer is the interface that wraps the basic Write method. -type Writer interface { - Write(p []byte) (n int, err error) -}+{{code "/doc/progs/interface.go" `/// Reader/` `/STOP/`}}
Any type that implements a Read
(or
@@ -80,12 +63,7 @@ purposes of this discussion, that means that a variable of type
Read
method:
var r io.Reader - r = os.Stdin - r = bufio.NewReader(r) - r = new(bytes.Buffer) - // and so on+{{code "/doc/progs/interface.go" `/func readers/` `/STOP/`}}
It's important to be clear that whatever concrete value @@ -138,13 +116,7 @@ that implements the interface and the type describes the full type of that item. For instance, after
-var r io.Reader - tty, err := os.OpenFile("/dev/tty", os.O_RDWR, 0) - if err != nil { - return nil, err - } - r = tty+{{code "/doc/progs/interface.go" `/func typeAssertions/` `/STOP/`}}
r
contains, schematically, the (value, type) pair,
@@ -156,9 +128,7 @@ the type information about that value. That's why we can do things
like this:
var w io.Writer - w = r.(io.Writer)+{{code "/doc/progs/interface.go" `/var w io.Writer/` `/STOP/`}}
The expression in this assignment is a type assertion; what it @@ -176,9 +146,7 @@ methods. Continuing, we can do this:
-var empty interface{} - empty = w+{{code "/doc/progs/interface.go" `/var empty interface{}/` `/STOP/`}}
and our empty interface value e
will again contain
@@ -216,7 +184,7 @@ At the basic level, reflection is just a mechanism to examine the
type and value pair stored inside an interface variable. To get
started, there are two types we need to know about in
package reflect:
-Typeand
+Type and
Value. Those two types
give access to the contents of an interface variable, and two
simple functions, called reflect.TypeOf
and
@@ -232,18 +200,7 @@ now.)
Let's start with TypeOf
:
package main - -import ( - "fmt" - "reflect" -) - -func main() { - var x float64 = 3.4 - fmt.Println("type:", reflect.TypeOf(x)) -}+{{code "/doc/progs/interface2.go" `/package main/` `/STOP main/`}}
This program prints @@ -281,9 +238,7 @@ value (from here on we'll elide the boilerplate and focus just on the executable code):
-var x float64 = 3.4 - fmt.Println("type:", reflect.TypeOf(x))+{{code "/doc/progs/interface2.go" `/START f9/` `/STOP/`}}
prints
@@ -307,12 +262,7 @@ on. Also methods on Value
with names like
int64
and float64
) stored inside:
var x float64 = 3.4 - v := reflect.ValueOf(x) - fmt.Println("type:", v.Type()) - fmt.Println("kind is float64:", v.Kind() == reflect.Float64) - fmt.Println("value:", v.Float())+{{code "/doc/progs/interface2.go" `/START f1/` `/STOP/`}}
prints
@@ -342,12 +292,7 @@ instance. That is, the Int
method of
necessary to convert to the actual type involved:
var x uint8 = 'x' - v := reflect.ValueOf(x) - fmt.Println("type:", v.Type()) // uint8. - fmt.Println("kind is uint8: ", v.Kind() == reflect.Uint8) // true. - x = uint8(v.Uint()) // v.Uint returns a uint64.+{{code "/doc/progs/interface2.go" `/START f2/` `/STOP/`}}
The second property is that the Kind
of a reflection
@@ -356,10 +301,7 @@ reflection object contains a value of a user-defined integer type,
as in
type MyInt int - var x MyInt = 7 - v := reflect.ValueOf(x)+{{code "/doc/progs/interface2.go" `/START f3/` `/STOP/`}}
the Kind
of v
is still
@@ -395,9 +337,7 @@ func (v Value) Interface() interface{}
As a consequence we can say
y := v.Interface().(float64) // y will have type float64. - fmt.Println(y)+{{code "/doc/progs/interface2.go" `/START f3b/` `/STOP/`}}
to print the float64
value represented by the
@@ -415,8 +355,7 @@ the Interface
method to the formatted print
routine:
fmt.Println(v.Interface())+{{code "/doc/progs/interface2.go" `/START f3c/` `/STOP/`}}
(Why not fmt.Println(v)
? Because v
is a
@@ -425,8 +364,7 @@ Since our value is a float64
, we can even use a
floating-point format if we want:
fmt.Printf("value is %7.1e\n", v.Interface())+{{code "/doc/progs/interface2.go" `/START f3d/` `/STOP/`}}
and get in this case @@ -467,10 +405,7 @@ enough to understand if we start from first principles. Here is some code that does not work, but is worth studying.
-var x float64 = 3.4 - v := reflect.ValueOf(x) - v.SetFloat(7.1) // Error: will panic.+{{code "/doc/progs/interface2.go" `/START f4/` `/STOP/`}}
If you run this code, it will panic with the cryptic message
@@ -492,10 +427,7 @@ The CanSet
method of Value
reports the
settability of a Value
; in our case,
var x float64 = 3.4 - v := reflect.ValueOf(x) - fmt.Println("settability of v:", v.CanSet())+{{code "/doc/progs/interface2.go" `/START f5/` `/STOP/`}}
prints @@ -518,9 +450,7 @@ determined by whether the reflection object holds the original item. When we say
-var x float64 = 3.4 - v := reflect.ValueOf(x)+{{code "/doc/progs/interface2.go" `/START f6/` `/STOP/`}}
we pass a copy of x
to
@@ -530,8 +460,7 @@ argument to reflect.ValueOf
is a copy of
statement
v.SetFloat(7.1)+{{code "/doc/progs/interface2.go" `/START f6b/` `/STOP/`}}
were allowed to succeed, it would not update x
, even
@@ -577,11 +506,7 @@ and then create a reflection value that points to it, called
p
.
var x float64 = 3.4 - p := reflect.ValueOf(&x) // Note: take the address of x. - fmt.Println("type of p:", p.Type()) - fmt.Println("settability of p:", p.CanSet())+{{code "/doc/progs/interface2.go" `/START f7/` `/STOP/`}}
The output so far is
@@ -601,9 +526,7 @@ and save the result in a reflection Value
called
v
:
v := p.Elem() - fmt.Println("settability of v:", v.CanSet())+{{code "/doc/progs/interface2.go" `/START f7b/` `/STOP/`}}
Now v
is a settable reflection object, as the output
@@ -620,10 +543,7 @@ and since it represents x
, we are finally able to use
x
:
v.SetFloat(7.1) - fmt.Println(v.Interface()) - fmt.Println(x)+{{code "/doc/progs/interface2.go" `/START f7c/` `/STOP/`}}
The output, as expected, is
@@ -664,22 +584,7 @@ but the fields themselves are regular reflect.Value
objects.
type T struct { - A int - B string - } - t := T{23, "skidoo"} - s := reflect.ValueOf(&t).Elem() - typeOfT := s.Type() - for i := 0; i < s.NumField(); i++ { - f := s.Field(i) - fmt.Printf("%d: %s %s = %v\n", i, - typeOfT.Field(i).Name, f.Type(), f.Interface()) - } - s.Field(0).SetInt(77) - s.Field(1).SetString("Sunset Strip") - fmt.Println("t is now", t)+{{code "/doc/progs/interface2.go" `/START f8/` `/STOP/`}}
The output of this program is
@@ -702,10 +607,7 @@ Because s
contains a settable reflection object, we
can modify the fields of the structure.
s.Field(0).SetInt(77) - s.Field(1).SetString("Sunset Strip") - fmt.Println("t is now", t)+{{code "/doc/progs/interface2.go" `/START f8b/` `/STOP/`}}
And here's the result: @@ -749,4 +651,4 @@ sending and receiving on channels, allocating memory, using slices and maps, calling methods and functions — but this post is long enough. We'll cover some of those topics in a later article. -
\ No newline at end of file + diff --git a/doc/articles/laws_of_reflection.tmpl b/doc/articles/laws_of_reflection.tmpl deleted file mode 100644 index 7db5d6d3b..000000000 --- a/doc/articles/laws_of_reflection.tmpl +++ /dev/null @@ -1,654 +0,0 @@ - -{{donotedit}} - --Reflection in computing is the -ability of a program to examine its own structure, particularly -through types; it's a form of metaprogramming. It's also a great -source of confusion. -
- --In this article we attempt to clarify things by explaining how -reflection works in Go. Each language's reflection model is -different (and many languages don't support it at all), but -this article is about Go, so for the rest of this article the word -"reflection" should be taken to mean "reflection in Go". -
- -Types and interfaces
- --Because reflection builds on the type system, let's start with a -refresher about types in Go. -
- -
-Go is statically typed. Every variable has a static type, that is,
-exactly one type known and fixed at compile time: int
,
-float32
, *MyType
, []byte
,
-and so on. If we declare
-
-then i
has type int
and j
-has type MyInt
. The variables i
and
-j
have distinct static types and, although they have
-the same underlying type, they cannot be assigned to one another
-without a conversion.
-
-One important category of type is interface types, which represent
-fixed sets of methods. An interface variable can store any concrete
-(non-interface) value as long as that value implements the
-interface's methods. A well-known pair of examples is
-io.Reader
and io.Writer
, the types
-Reader
and Writer
from the io package:
-
-Any type that implements a Read
(or
-Write
) method with this signature is said to implement
-io.Reader
(or io.Writer
). For the
-purposes of this discussion, that means that a variable of type
-io.Reader
can hold any value whose type has a
-Read
method:
-
-It's important to be clear that whatever concrete value
-r
may hold, r
's type is always
-io.Reader
: Go is statically typed and the static type
-of r
is io.Reader
.
-An extremely important example of an interface type is the empty -interface: -
- --interface{} -- -
-It represents the empty set of methods and is satisfied by any -value at all, since any value has zero or more methods. -
- --Some people say that Go's interfaces are dynamically typed, but -that is misleading. They are statically typed: a variable of -interface type always has the same static type, and even though at -run time the value stored in the interface variable may change -type, that value will always satisfy the interface. -
- --We need to be precise about all this because reflection and -interfaces are closely related. -
- -The representation of an interface
- --Russ Cox has written a -detailed blog post about the representation of interface values -in Go. It's not necessary to repeat the full story here, but a -simplified summary is in order. -
- --A variable of interface type stores a pair: the concrete value -assigned to the variable, and that value's type descriptor. -To be more precise, the value is the underlying concrete data item -that implements the interface and the type describes the full type -of that item. For instance, after -
- -{{code "progs/interface.go" `/func typeAssertions/` `/STOP/`}} - -
-r
contains, schematically, the (value, type) pair,
-(tty
, *os.File
). Notice that the type
-*os.File
implements methods other than
-Read
; even though the interface value provides access
-only to the Read
method, the value inside carries all
-the type information about that value. That's why we can do things
-like this:
-
-The expression in this assignment is a type assertion; what it
-asserts is that the item inside r
also implements
-io.Writer
, and so we can assign it to w
.
-After the assignment, w
will contain the pair
-(tty
, *os.File
). That's the same pair as
-was held in r
. The static type of the interface
-determines what methods may be invoked with an interface variable,
-even though the concrete value inside may have a larger set of
-methods.
-
-Continuing, we can do this: -
- -{{code "progs/interface.go" `/var empty interface{}/` `/STOP/`}} - -
-and our empty interface value e
will again contain
-that same pair, (tty
, *os.File
). That's
-handy: an empty interface can hold any value and contains all the
-information we could ever need about that value.
-
-(We don't need a type assertion here because it's known statically
-that w
satisfies the empty interface. In the example
-where we moved a value from a Reader
to a
-Writer
, we needed to be explicit and use a type
-assertion because Writer
's methods are not a
-subset of Reader
's.)
-
-One important detail is that the pair inside an interface always -has the form (value, concrete type) and cannot have the form -(value, interface type). Interfaces do not hold interface -values. -
- --Now we're ready to reflect. -
- -The first law of reflection
- -1. Reflection goes from interface value to reflection object.
- -
-At the basic level, reflection is just a mechanism to examine the
-type and value pair stored inside an interface variable. To get
-started, there are two types we need to know about in
-package reflect:
-Typeand
-Value. Those two types
-give access to the contents of an interface variable, and two
-simple functions, called reflect.TypeOf
and
-reflect.ValueOf
, retrieve reflect.Type
-and reflect.Value
pieces out of an interface value.
-(Also, from the reflect.Value
it's easy to get
-to the reflect.Type
, but let's keep the
-Value
and Type
concepts separate for
-now.)
-
-Let's start with TypeOf
:
-
-This program prints -
- --type: float64 -- -
-You might be wondering where the interface is here, since the
-program looks like it's passing the float64
-variable x
, not an interface value, to
-reflect.TypeOf
. But it's there; as godoc reports, the
-signature of reflect.TypeOf
includes an empty
-interface:
-
-// TypeOf returns the reflection Type of the value in the interface{}. -func TypeOf(i interface{}) Type -- -
-When we call reflect.TypeOf(x)
, x
is
-first stored in an empty interface, which is then passed as the
-argument; reflect.TypeOf
unpacks that empty interface
-to recover the type information.
-
-The reflect.ValueOf
function, of course, recovers the
-value (from here on we'll elide the boilerplate and focus just on
-the executable code):
-
-prints -
- --value: <float64 Value> -- -
-Both reflect.Type
and reflect.Value
have
-lots of methods to let us examine and manipulate them. One
-important example is that Value
has a
-Type
method that returns the Type
of a
-reflect.Value
. Another is that both Type
-and Value
have a Kind
method that returns
-a constant indicating what sort of item is stored:
-Uint
, Float64
, Slice
, and so
-on. Also methods on Value
with names like
-Int
and Float
let us grab values (as
-int64
and float64
) stored inside:
-
-prints -
- --type: float64 -kind is float64: true -value: 3.4 -- -
-There are also methods like SetInt
and
-SetFloat
but to use them we need to understand
-settability, the subject of the third law of reflection, discussed
-below.
-
-The reflection library has a couple of properties worth singling
-out. First, to keep the API simple, the "getter" and "setter"
-methods of Value
operate on the largest type that can
-hold the value: int64
for all the signed integers, for
-instance. That is, the Int
method of
-Value
returns an int64
and the
-SetInt
value takes an int64
; it may be
-necessary to convert to the actual type involved:
-
-The second property is that the Kind
of a reflection
-object describes the underlying type, not the static type. If a
-reflection object contains a value of a user-defined integer type,
-as in
-
-the Kind
of v
is still
-reflect.Int
, even though the static type of
-x
is MyInt
, not int
. In
-other words, the Kind
cannot discriminate an int from
-a MyInt
even though the Type
can.
-
The second law of reflection
- -2. Reflection goes from reflection object to interface -value.
- --Like physical reflection, reflection in Go generates its own -inverse. -
- -
-Given a reflect.Value
we can recover an interface
-value using the Interface
method; in effect the method
-packs the type and value information back into an interface
-representation and returns the result:
-
-// Interface returns v's value as an interface{}. -func (v Value) Interface() interface{} -- -
-As a consequence we can say -
- -{{code "progs/interface2.go" `/START f3b/` `/START/`}} - -
-to print the float64
value represented by the
-reflection object v
.
-
-We can do even better, though. The arguments to
-fmt.Println
, fmt.Printf
and so on are all
-passed as empty interface values, which are then unpacked by the
-fmt
package internally just as we have been doing in
-the previous examples. Therefore all it takes to print the contents
-of a reflect.Value
correctly is to pass the result of
-the Interface
method to the formatted print
-routine:
-
-(Why not fmt.Println(v)
? Because v
is a
-reflect.Value
; we want the concrete value it holds.)
-Since our value is a float64
, we can even use a
-floating-point format if we want:
-
-and get in this case -
- --3.4e+00 -- -
-Again, there's no need to type-assert the result of
-v.Interface()
to float64
; the empty
-interface value has the concrete value's type information inside
-and Printf
will recover it.
-
-In short, the Interface
method is the inverse of the
-ValueOf
function, except that its result is always of
-static type interface{}
.
-
-Reiterating: Reflection goes from interface values to reflection -objects and back again. -
- -The third law of reflection
- -3. To modify a reflection object, the value must be settable.
- --The third law is the most subtle and confusing, but it's easy -enough to understand if we start from first principles. -
- --Here is some code that does not work, but is worth studying. -
- -{{code "progs/interface2.go" `/START f4/` `/STOP/`}} - --If you run this code, it will panic with the cryptic message -
- --panic: reflect.Value.SetFloat using unaddressable value -- -
-The problem is not that the value 7.1
is not
-addressable; it's that v
is not settable. Settability
-is a property of a reflection Value
, and not all
-reflection Values
have it.
-
-The CanSet
method of Value
reports the
-settability of a Value
; in our case,
-
-prints -
- --settability of v: false -- -
-It is an error to call a Set
method on an non-settable
-Value
. But what is settability?
-
-Settability is a bit like addressability, but stricter. It's the -property that a reflection object can modify the actual storage -that was used to create the reflection object. Settability is -determined by whether the reflection object holds the original -item. When we say -
- -{{code "progs/interface2.go" `/START f6/` `/START/`}} - -
-we pass a copy of x
to
-reflect.ValueOf
, so the interface value created as the
-argument to reflect.ValueOf
is a copy of
-x
, not x
itself. Thus, if the
-statement
-
-were allowed to succeed, it would not update x
, even
-though v
looks like it was created from
-x
. Instead, it would update the copy of x
-stored inside the reflection value and x
itself would
-be unaffected. That would be confusing and useless, so it is
-illegal, and settability is the property used to avoid this
-issue.
-
-If this seems bizarre, it's not. It's actually a familiar situation
-in unusual garb. Think of passing x
to a
-function:
-
-f(x) -- -
-We would not expect f
to be able to modify
-x
because we passed a copy of x
's value,
-not x
itself. If we want f
to modify
-x
directly we must pass our function the address of
-x
(that is, a pointer to x
):
-f(&x)
-
-This is straightforward and familiar, and reflection works the same
-way. If we want to modify x
by reflection, we must
-give the reflection library a pointer to the value we want to
-modify.
-
-Let's do that. First we initialize x
as usual
-and then create a reflection value that points to it, called
-p
.
-
-The output so far is -
- --type of p: *float64 -settability of p: false -- -
-The reflection object p
isn't settable, but it's not
-p
we want to set, it's (in effect) *p
. To
-get to what p
points to, we call the Elem
-method of Value
, which indirects through the pointer,
-and save the result in a reflection Value
called
-v
:
-
-Now v
is a settable reflection object, as the output
-demonstrates,
-
-settability of v: true -- -
-and since it represents x
, we are finally able to use
-v.SetFloat
to modify the value of
-x
:
-
-The output, as expected, is -
- --7.1 -7.1 -- -
-Reflection can be hard to understand but it's doing exactly what
-the language does, albeit through reflection Types
and
-Values
that can disguise what's going on. Just keep in
-mind that reflection Values need the address of something in order
-to modify what they represent.
-
Structs
- -
-In our previous example v
wasn't a pointer itself, it
-was just derived from one. A common way for this situation to arise
-is when using reflection to modify the fields of a structure. As
-long as we have the address of the structure, we can modify its
-fields.
-
-Here's a simple example that analyzes a struct value,
-t
. We create the reflection object with the address of
-the struct because we'll want to modify it later. Then we set
-typeOfT
to its type and iterate over the fields using
-straightforward method calls (see
-package reflect for details).
-Note that we extract the names of the fields from the struct type,
-but the fields themselves are regular reflect.Value
-objects.
-
-The output of this program is -
- --0: A int = 23 -1: B string = skidoo -- -
-There's one more point about settability introduced in
-passing here: the field names of T
are upper case
-(exported) because only exported fields of a struct are
-settable.
-
-Because s
contains a settable reflection object, we
-can modify the fields of the structure.
-
-And here's the result: -
- --t is now {77 Sunset Strip} -- -
-If we modified the program so that s
was created from
-t
, not &t
, the calls to
-SetInt
and SetString
would fail as the
-fields of t
would not be settable.
-
Conclusion
- --Here again are the laws of reflection: -
- --Once you understand these laws reflection in Go becomes much easier -to use, although it remains subtle. It's a powerful tool that -should be used with care and avoided unless strictly -necessary. -
- --There's plenty more to reflection that we haven't covered — -sending and receiving on channels, allocating memory, using slices -and maps, calling methods and functions — but this post is -long enough. We'll cover some of those topics in a later -article. -
\ No newline at end of file diff --git a/doc/articles/slices_usage_and_internals.html b/doc/articles/slices_usage_and_internals.html index c10dfe0ca..810b0a41f 100644 --- a/doc/articles/slices_usage_and_internals.html +++ b/doc/articles/slices_usage_and_internals.html @@ -1,11 +1,7 @@ - -Go's slice type provides a convenient and efficient means of working with @@ -326,20 +322,7 @@ appends byte elements to a slice of bytes, growing the slice if necessary, and returns the updated slice value:
-func AppendByte(slice []byte, data ...byte) []byte { - m := len(slice) - n := m + len(data) - if n > cap(slice) { // if necessary, reallocate - // allocate double what's needed, for future growth. - newSlice := make([]byte, (n+1)*2) - copy(newSlice, slice) - slice = newSlice - } - slice = slice[0:n] - copy(slice[m:n], data) - return slice -}+{{code "/doc/progs/slices.go" `/AppendByte/` `/STOP/`}}
One could use AppendByte
like this:
@@ -398,18 +381,7 @@ Since the zero value of a slice (nil
) acts like a zero-length
slice, you can declare a slice variable and then append to it in a loop:
// Filter returns a new slice holding only -// the elements of s that satisfy f() -func Filter(s []int, fn func(int) bool) []int { - var p []int // == nil - for _, i := range s { - if fn(i) { - p = append(p, i) - } - } - return p -}+{{code "/doc/progs/slices.go" `/Filter/` `/STOP/`}}
A possible "gotcha" @@ -428,13 +400,7 @@ searches it for the first group of consecutive numeric digits, returning them as a new slice.
-var digitRegexp = regexp.MustCompile("[0-9]+") - -func FindDigits(filename string) []byte { - b, _ := ioutil.ReadFile(filename) - return digitRegexp.Find(b) -}+{{code "/doc/progs/slices.go" `/digit/` `/STOP/`}}
This code behaves as advertised, but the returned []byte
points
@@ -449,14 +415,7 @@ To fix this problem one can copy the interesting data to a new slice before
returning it:
func CopyDigits(filename string) []byte { - b, _ := ioutil.ReadFile(filename) - b = digitRegexp.Find(b) - c := make([]byte, len(b)) - copy(c, b) - return c -}+{{code "/doc/progs/slices.go" `/CopyDigits/` `/STOP/`}}
A more concise version of this function could be constructed by using diff --git a/doc/articles/slices_usage_and_internals.tmpl b/doc/articles/slices_usage_and_internals.tmpl deleted file mode 100644 index d2f8fb7f5..000000000 --- a/doc/articles/slices_usage_and_internals.tmpl +++ /dev/null @@ -1,438 +0,0 @@ - -{{donotedit}} - -
-Go's slice type provides a convenient and efficient means of working with -sequences of typed data. Slices are analogous to arrays in other languages, but -have some unusual properties. This article will look at what slices are and how -they are used. -
- --Arrays -
- --The slice type is an abstraction built on top of Go's array type, and so to -understand slices we must first understand arrays. -
- -
-An array type definition specifies a length and an element type. For example,
-the type [4]int
represents an array of four integers. An array's
-size is fixed; its length is part of its type ([4]int
and
-[5]int
are distinct, incompatible types). Arrays can be indexed in
-the usual way, so the expression s[n]
accesses the nth
-element:
-
-var a [4]int -a[0] = 1 -i := a[0] -// i == 1 -- -
-Arrays do not need to be initialized explicitly; the zero value of an array is -a ready-to-use array whose elements are themselves zeroed: -
- --// a[2] == 0, the zero value of the int type -- -
-The in-memory representation of [4]int
is just four integer values laid out sequentially:
-
-
-
-Go's arrays are values. An array variable denotes the entire array; it is not a -pointer to the first array element (as would be the case in C). This means -that when you assign or pass around an array value you will make a copy of its -contents. (To avoid the copy you could pass a pointer to the array, but -then that's a pointer to an array, not an array.) One way to think about arrays -is as a sort of struct but with indexed rather than named fields: a fixed-size -composite value. -
- --An array literal can be specified like so: -
- --b := [2]string{"Penn", "Teller"} -- -
-Or, you can have the compiler count the array elements for you: -
- --b := [...]string{"Penn", "Teller"} -- -
-In both cases, the type of b
is [2]string
.
-
-Slices -
- --Arrays have their place, but they're a bit inflexible, so you don't see them -too often in Go code. Slices, though, are everywhere. They build on arrays to -provide great power and convenience. -
- -
-The type specification for a slice is []T
, where T
is
-the type of the elements of the slice. Unlike an array type, a slice type has
-no specified length.
-
-A slice literal is declared just like an array literal, except you leave out -the element count: -
- --letters := []string{"a", "b", "c", "d"} -- -
-A slice can be created with the built-in function called make
,
-which has the signature,
-
-func make([]T, len, cap) []T -- -
-where T stands for the element type of the slice to be created. The
-make
function takes a type, a length, and an optional capacity.
-When called, make
allocates an array and returns a slice that
-refers to that array.
-
-var s []byte -s = make([]byte, 5, 5) -// s == []byte{0, 0, 0, 0, 0} -- -
-When the capacity argument is omitted, it defaults to the specified length. -Here's a more succinct version of the same code: -
- --s := make([]byte, 5) -- -
-The length and capacity of a slice can be inspected using the built-in
-len
and cap
functions.
-
-len(s) == 5 -cap(s) == 5 -- -
-The next two sections discuss the relationship between length and capacity. -
- -
-The zero value of a slice is nil
. The len
and
-cap
functions will both return 0 for a nil slice.
-
-A slice can also be formed by "slicing" an existing slice or array. Slicing is
-done by specifying a half-open range with two indices separated by a colon. For
-example, the expression b[1:4]
creates a slice including elements
-1 through 3 of b
(the indices of the resulting slice will be 0
-through 2).
-
-b := []byte{'g', 'o', 'l', 'a', 'n', 'g'} -// b[1:4] == []byte{'o', 'l', 'a'}, sharing the same storage as b -- -
-The start and end indices of a slice expression are optional; they default to zero and the slice's length respectively: -
- --// b[:2] == []byte{'g', 'o'} -// b[2:] == []byte{'l', 'a', 'n', 'g'} -// b[:] == b -- -
-This is also the syntax to create a slice given an array: -
- --x := [3]string{"Лайка", "Белка", "Стрелка"} -s := x[:] // a slice referencing the storage of x -- -
-Slice internals -
- --A slice is a descriptor of an array segment. It consists of a pointer to the -array, the length of the segment, and its capacity (the maximum length of the -segment). -
- -
-
-
-Our variable s
, created earlier by make([]byte, 5)
,
-is structured like this:
-
-
-
-The length is the number of elements referred to by the slice. The capacity is -the number of elements in the underlying array (beginning at the element -referred to by the slice pointer). The distinction between length and capacity -will be made clear as we walk through the next few examples. -
- -
-As we slice s
, observe the changes in the slice data structure and
-their relation to the underlying array:
-
-s = s[2:4] -- -
-
-
-Slicing does not copy the slice's data. It creates a new slice value that -points to the original array. This makes slice operations as efficient as -manipulating array indices. Therefore, modifying the elements (not the -slice itself) of a re-slice modifies the elements of the original slice: -
- --d := []byte{'r', 'o', 'a', 'd'} -e := d[2:] -// e == []byte{'a', 'd'} -e[1] == 'm' -// e == []byte{'a', 'm'} -// d == []byte{'r', 'o', 'a', 'm'} -- -
-Earlier we sliced s
to a length shorter than its capacity. We can
-grow s to its capacity by slicing it again:
-
-s = s[:cap(s)] -- -
-
-
-A slice cannot be grown beyond its capacity. Attempting to do so will cause a -runtime panic, just as when indexing outside the bounds of a slice or array. -Similarly, slices cannot be re-sliced below zero to access earlier elements in -the array. -
- --Growing slices (the copy and append functions) -
- -
-To increase the capacity of a slice one must create a new, larger slice and
-copy the contents of the original slice into it. This technique is how dynamic
-array implementations from other languages work behind the scenes. The next
-example doubles the capacity of s
by making a new slice,
-t
, copying the contents of s
into t
, and
-then assigning the slice value t
to s
:
-
-t := make([]byte, len(s), (cap(s)+1)*2) // +1 in case cap(s) == 0 -for i := range s { - t[i] = s[i] -} -s = t -- -
-The looping piece of this common operation is made easier by the built-in copy -function. As the name suggests, copy copies data from a source slice to a -destination slice. It returns the number of elements copied. -
- --func copy(dst, src []T) int -- -
-The copy
function supports copying between slices of different
-lengths (it will copy only up to the smaller number of elements). In addition,
-copy
can handle source and destination slices that share the same
-underlying array, handling overlapping slices correctly.
-
-Using copy
, we can simplify the code snippet above:
-
-t := make([]byte, len(s), (cap(s)+1)*2) -copy(t, s) -s = t -- -
-A common operation is to append data to the end of a slice. This function -appends byte elements to a slice of bytes, growing the slice if necessary, and -returns the updated slice value: -
- -{{code "progs/slices.go" `/AppendByte/` `/STOP/`}} - -
-One could use AppendByte
like this:
-
-p := []byte{2, 3, 5} -p = AppendByte(p, 7, 11, 13) -// p == []byte{2, 3, 5, 7, 11, 13} -- -
-Functions like AppendByte
are useful because they offer complete
-control over the way the slice is grown. Depending on the characteristics of
-the program, it may be desirable to allocate in smaller or larger chunks, or to
-put a ceiling on the size of a reallocation.
-
-But most programs don't need complete control, so Go provides a built-in
-append
function that's good for most purposes; it has the
-signature
-
-func append(s []T, x ...T) []T -- -
-The append
function appends the elements x
to the end
-of the slice s
, and grows the slice if a greater capacity is
-needed.
-
-a := make([]int, 1) -// a == []int{0} -a = append(a, 1, 2, 3) -// a == []int{0, 1, 2, 3} -- -
-To append one slice to another, use ...
to expand the second
-argument to a list of arguments.
-
-a := []string{"John", "Paul"} -b := []string{"George", "Ringo", "Pete"} -a = append(a, b...) // equivalent to "append(a, b[0], b[1], b[2])" -// a == []string{"John", "Paul", "George", "Ringo", "Pete"} -- -
-Since the zero value of a slice (nil
) acts like a zero-length
-slice, you can declare a slice variable and then append to it in a loop:
-
-A possible "gotcha" -
- --As mentioned earlier, re-slicing a slice doesn't make a copy of the underlying -array. The full array will be kept in memory until it is no longer referenced. -Occasionally this can cause the program to hold all the data in memory when -only a small piece of it is needed. -
- -
-For example, this FindDigits
function loads a file into memory and
-searches it for the first group of consecutive numeric digits, returning them
-as a new slice.
-
-This code behaves as advertised, but the returned []byte
points
-into an array containing the entire file. Since the slice references the
-original array, as long as the slice is kept around the garbage collector can't
-release the array; the few useful bytes of the file keep the entire contents in
-memory.
-
-To fix this problem one can copy the interesting data to a new slice before -returning it: -
- -{{code "progs/slices.go" `/CopyDigits/` `/STOP/`}} - -
-A more concise version of this function could be constructed by using
-append
. This is left as an exercise for the reader.
-
-Further Reading -
- --Effective Go contains an -in-depth treatment of slices -and arrays, -and the Go language specification -defines slices and their -associated -helper -functions. -
diff --git a/doc/articles/wiki/test.bash b/doc/articles/wiki/test.bash new file mode 100755 index 000000000..5c2cb60dc --- /dev/null +++ b/doc/articles/wiki/test.bash @@ -0,0 +1,30 @@ +#!/usr/bin/env bash +# Copyright 2010 The Go Authors. All rights reserved. +# Use of this source code is governed by a BSD-style +# license that can be found in the LICENSE file. + +set -e +wiki_pid= +cleanup() { + kill $wiki_pid + rm -f test_*.out Test.txt final-test.bin final-test.go +} +trap cleanup 0 INT + +go build -o get.bin get.go +addr=$(./get.bin -addr) +sed s/:8080/$addr/ < final.go > final-test.go +go build -o final-test.bin final-test.go +(./final-test.bin) & +wiki_pid=$! + +sleep 1 + +./get.bin http://$addr/edit/Test > test_edit.out +diff -u test_edit.out test_edit.good +./get.bin -post=body=some%20content http://$addr/save/Test +diff -u Test.txt test_Test.txt.good +./get.bin http://$addr/view/Test > test_view.out +diff -u test_view.out test_view.good + +echo PASS diff --git a/doc/articles/wiki/test.sh b/doc/articles/wiki/test.sh deleted file mode 100755 index 58b218a78..000000000 --- a/doc/articles/wiki/test.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env bash - -set -e -wiki_pid= -cleanup() { - kill $wiki_pid - rm -f test_*.out Test.txt final-test.bin final-test.go -} -trap cleanup 0 INT - -make get.bin -addr=$(./get.bin -addr) -sed s/:8080/$addr/ < final.go > final-test.go -make final-test.bin -(./final-test.bin) & -wiki_pid=$! - -sleep 1 - -./get.bin http://$addr/edit/Test > test_edit.out -diff -u test_edit.out test_edit.good -./get.bin -post=body=some%20content http://$addr/save/Test -diff -u Test.txt test_Test.txt.good -./get.bin http://$addr/view/Test > test_view.out -diff -u test_view.out test_view.good - -echo PASS -- cgit v1.2.3