
The maximum value for an int type in Go

How does one specify the maximum value representable for an unsigned integer type?

I would like to know how to initialize min in the loop below that iteratively computes min and max lengths from some structs.

var minLen uint = ???
var maxLen uint = 0
for _, thing := range sliceOfThings {
  if minLen > thing.n { minLen = thing.n }
  if maxLen < thing.n { maxLen = thing.n }
}
if minLen > maxLen {
  // If there are no values, clamp min at 0 so that min <= max.
  minLen = 0
}

so that the first time through the loop, minLen >= thing.n holds for any possible value of thing.n.


https://groups.google.com/group/golang-nuts/msg/71c307e4d73024ce?pli=1

The germane part:

Since integer types use two's complement arithmetic, you can infer the min/max constant values for int and uint. For example,

const MaxUint = ^uint(0) 
const MinUint = 0 
const MaxInt = int(MaxUint >> 1) 
const MinInt = -MaxInt - 1

As per @CarelZA's comment:

uint8  : 0 to 255 
uint16 : 0 to 65535 
uint32 : 0 to 4294967295 
uint64 : 0 to 18446744073709551615 
int8   : -128 to 127 
int16  : -32768 to 32767 
int32  : -2147483648 to 2147483647 
int64  : -9223372036854775808 to 9223372036854775807
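
Applied to the question's loop, a minimal sketch (reusing the sliceOfThings slice and the n field from the question; the concrete values are made up) that seeds minLen with MaxUint:

package main

import "fmt"

const MaxUint = ^uint(0) // all bits set: the largest value a uint can hold

type thing struct{ n uint }

func main() {
    sliceOfThings := []thing{{n: 3}, {n: 7}, {n: 5}}

    var minLen uint = MaxUint // any real value is <= MaxUint, so the first comparison replaces it
    var maxLen uint = 0
    for _, t := range sliceOfThings {
        if minLen > t.n {
            minLen = t.n
        }
        if maxLen < t.n {
            maxLen = t.n
        }
    }
    fmt.Println(minLen, maxLen) // 3 7
}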


See https://golang.org/ref/spec#Numeric_types for the physical type limits.

The max values are defined in the math package, so in your case: math.MaxUint32

Watch out, as there is no overflow error: incrementing past the max value wraps around.
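
For instance, a small sketch (not from the original answer) showing the wraparound:

package main

import (
    "fmt"
    "math"
)

func main() {
    var x uint32 = math.MaxUint32
    x++            // no runtime error: the value silently wraps around
    fmt.Println(x) // 0

    var y int8 = math.MaxInt8
    y++            // wraps to the minimum value
    fmt.Println(y) // -128
}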


I would use the math package to get the maximum and minimum values for integers:

package main

import (
    "fmt"
    "math"
)

func main() {
    // integer max
    fmt.Printf("max int64   = %+v\n", math.MaxInt64)
    fmt.Printf("max int32   = %+v\n", math.MaxInt32)
    fmt.Printf("max int16   = %+v\n", math.MaxInt16)

    // integer min
    fmt.Printf("min int64   = %+v\n", math.MinInt64)
    fmt.Printf("min int32   = %+v\n", math.MinInt32)

    fmt.Printf("max float64 = %+v\n", math.MaxFloat64)
    fmt.Printf("max float32 = %+v\n", math.MaxFloat32)

    // etc.; you can see more in the `math` package
}

Output:

max int64   = 9223372036854775807
max int32   = 2147483647
max int16   = 32767
min int64   = -9223372036854775808
min int32   = -2147483648
max float64 = 1.7976931348623157e+308
max float32 = 3.4028234663852886e+38


Note: this answer is superseded as of Go 1.17, which included e8eb1d8; i.e. the math package now includes the constants math.MaxUint, math.MaxInt, and math.MinInt.

Quick summary:

import "math/bits"
const (
    MaxUint uint = (1 << bits.UintSize) - 1
    MaxInt int = (1 << bits.UintSize) / 2 - 1
    MinInt int = (1 << bits.UintSize) / -2
)

Background:

As I presume you know, the uint type is the same size as either uint32 or uint64, depending on the platform you're on. Usually, one would use the unsized version of these only when there is no risk of coming close to the maximum value, as the version without a size specification can use the "native" type, depending on platform, which tends to be faster.

Note that it tends to be "faster" because using a non-native type sometimes requires additional math and bounds-checking to be performed by the processor, in order to emulate the larger or smaller integer. With that in mind, be aware that the performance of the processor (or compiler's optimised code) is almost always going to be better than adding your own bounds-checking code, so if there is any risk of it coming into play, it may make sense to simply use the fixed-size version, and let the optimised emulation handle any fallout from that.

With that having been said, there are still some situations where it is useful to know what you're working with.

The package "math/bits" contains the size of uint, in bits. To determine the maximum value, shift 1 left by that many bits and subtract 1, i.e.: (1 << bits.UintSize) - 1

Note that when calculating the maximum value of uint, you'll generally need to assign it explicitly to a uint (or larger) variable; otherwise compilation will fail, as the compiler defaults to treating that calculation as a signed int, where it obviously would not fit, so:

const MaxUint uint = (1 << bits.UintSize) - 1

That's the direct answer to your question, but there are also a couple of related calculations you may be interested in.

According to the spec, uint and int are always the same size.

uint either 32 or 64 bits

int same size as uint

So we can also use this constant to determine the maximum value of int, by taking that same answer, dividing by 2, and then subtracting 1, i.e.: (1 << bits.UintSize) / 2 - 1

And the minimum value of int, by shifting 1 by that many bits and dividing the result by -2, i.e.: (1 << bits.UintSize) / -2

In summary:

MaxUint: (1 << bits.UintSize) - 1

MaxInt: (1 << bits.UintSize) / 2 - 1

MinInt: (1 << bits.UintSize) / -2

Full example (should be the same as below):

package main

import "fmt"
import "math"
import "math/bits"

func main() {
    var mi32 int64 = math.MinInt32
    var mi64 int64 = math.MinInt64
    
    var i32 uint64 = math.MaxInt32
    var ui32 uint64 = math.MaxUint32
    var i64 uint64 = math.MaxInt64
    var ui64 uint64 = math.MaxUint64
    var ui uint64 = (1 << bits.UintSize) - 1
    var i uint64 = (1 << bits.UintSize) / 2 - 1
    var mi int64 = (1 << bits.UintSize) / -2
    
    fmt.Printf(" MinInt32: %d\n", mi32)
    fmt.Printf(" MaxInt32:  %d\n", i32)
    fmt.Printf("MaxUint32:  %d\n", ui32)
    fmt.Printf(" MinInt64: %d\n", mi64)
    fmt.Printf(" MaxInt64:  %d\n", i64)
    fmt.Printf("MaxUint64:  %d\n", ui64)
    fmt.Printf("  MaxUint:  %d\n", ui)
    fmt.Printf("   MinInt: %d\n", mi)
    fmt.Printf("   MaxInt:  %d\n", i)
}


I originally used the code taken from the discussion thread that @nmichaels used in his answer. I now use a slightly different calculation. I've included some comments in case anyone else has the same query as @Arijoon.

const (
    MinUint uint = 0                 // binary: all zeroes

    // Perform a bitwise NOT to change every bit from 0 to 1
    MaxUint      = ^MinUint          // binary: all ones

    // Shift the binary number to the right (i.e. divide by two)
    // to change the high bit to 0
    MaxInt       = int(MaxUint >> 1) // binary: all ones except high bit

    // Perform another bitwise NOT to change the high bit to 1 and
    // all other bits to 0
    MinInt       = ^MaxInt           // binary: all zeroes except high bit
)

The last two steps work because of how positive and negative numbers are represented in two's complement arithmetic. The Go language specification section on Numeric types refers the reader to the relevant Wikipedia article. I haven't read that, but I did learn about two's complement from the book Code by Charles Petzold, which is a very accessible intro to the fundamentals of computers and coding.

I put the code above (minus most of the comments) into a little integer math package.
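
As a small illustration of those bit patterns (an 8-bit sketch I added, not part of the package above):

package main

import "fmt"

func main() {
    var max int8 = 127  // 01111111: all ones except the high bit
    var min int8 = ^max // 10000000: bitwise NOT flips every bit, giving -128

    fmt.Printf("%08b = %d\n", uint8(max), max) // 01111111 = 127
    fmt.Printf("%08b = %d\n", uint8(min), min) // 10000000 = -128
}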


From math lib: https://github.com/golang/go/blob/master/src/math/const.go#L39

package main

import (
    "fmt"
    "math"
)

func main() {
    fmt.Printf("max int64: %d\n", math.MaxInt64)
}


Go 1.17 now defines the MaxUint, MaxInt and MinInt constants in the math package.

package main

import "fmt"
import "math"

const maxUint = uint(math.MaxUint)

func main() {
    fmt.Println("Integer range on your system")

    // fmt.Println("MaxUint:", math.MaxUint) // ERROR: constant 18446744073709551615 overflows int
    fmt.Println("MaxUint:", maxUint)

    fmt.Println("MinInt:", math.MinInt)
    fmt.Println("MaxInt:", math.MaxInt)
}
  • Test the code above: https://play.golang.org/p/5R2iPasn6OZ

  • From the Go 1.17 Release Notes: https://golang.org/doc/go1.17#math

The math package now defines three more constants: MaxUint, MaxInt and MinInt.
For 32-bit systems their values are 2^32 - 1, 2^31 - 1 and -2^31, respectively.
For 64-bit systems their values are 2^64 - 1, 2^63 - 1 and -2^63, respectively.

  • Commit: https://github.com/golang/go/commit/e8eb1d82
  • Documentation: https://pkg.go.dev/math#pkg-constants
const (
    MaxInt    = 1<<(intSize-1) - 1   // New
    MinInt    = -1 << (intSize - 1)  // New
    MaxInt8   = 1<<7 - 1
    MinInt8   = -1 << 7
    MaxInt16  = 1<<15 - 1
    MinInt16  = -1 << 15
    MaxInt32  = 1<<31 - 1
    MinInt32  = -1 << 31
    MaxInt64  = 1<<63 - 1
    MinInt64  = -1 << 63
    MaxUint   = 1<<intSize - 1       // New
    MaxUint8  = 1<<8 - 1
    MaxUint16 = 1<<16 - 1
    MaxUint32 = 1<<32 - 1
    MaxUint64 = 1<<64 - 1
)

See also the Go source code: https://github.com/golang/go/blob/master/src/math/const.go#L39


Use the constants defined in the math package:

const (
    MaxInt8   = 1<<7 - 1
    MinInt8   = -1 << 7
    MaxInt16  = 1<<15 - 1
    MinInt16  = -1 << 15
    MaxInt32  = 1<<31 - 1
    MinInt32  = -1 << 31
    MaxInt64  = 1<<63 - 1
    MinInt64  = -1 << 63
    MaxUint8  = 1<<8 - 1
    MaxUint16 = 1<<16 - 1
    MaxUint32 = 1<<32 - 1
    MaxUint64 = 1<<64 - 1
)
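
For example, a short sketch (with made-up values) that seeds a running minimum with one of these constants:

package main

import (
    "fmt"
    "math"
)

func main() {
    values := []int32{42, 7, 99}

    min := int32(math.MaxInt32) // any real value will be smaller
    for _, v := range values {
        if v < min {
            min = v
        }
    }
    fmt.Println(min) // 7
}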


One way to solve this problem is to get the starting points from the values themselves:

var minLen, maxLen uint
if len(sliceOfThings) > 0 {
  minLen = sliceOfThings[0].n
  maxLen = sliceOfThings[0].n
  for _, thing := range sliceOfThings[1:] {
    if minLen > thing.n { minLen = thing.n }
    if maxLen < thing.n { maxLen = thing.n }
  }
}


Go 1.17 (August 2021) helps here, with commit e8eb1d8, as noted by Go101:

Before Go 1.17, we can use the following trick to define MaxInt:

const MaxInt = int(^uint(0) >> 1)

Since Go 1.17, we can directly use math.MaxInt instead.

That fixes issue 28538 reported by Silentd00m, reviewed with CL 247058.

Since we have int8 to int64 min max and uint8 to uint64 max constants, we should probably have some for the word size types too.

The tests illustrate how this works:

    if v := int(MaxInt); v+1 != MinInt {
        t.Errorf("MaxInt should wrap around to MinInt: %d", v+1)
    }
    if v := int8(MaxInt8); v+1 != MinInt8 {
        t.Errorf("MaxInt8 should wrap around to MinInt8: %d", v+1)
    }
    if v := int16(MaxInt16); v+1 != MinInt16 {
        t.Errorf("MaxInt16 should wrap around to MinInt16: %d", v+1)
    }
    if v := int32(MaxInt32); v+1 != MinInt32 {
        t.Errorf("MaxInt32 should wrap around to MinInt32: %d", v+1)
    }
    if v := int64(MaxInt64); v+1 != MinInt64 {
        t.Errorf("MaxInt64 should wrap around to MinInt64: %d", v+1)
    }


The way I always remember it is: take the number of bits (int8 is 8 bits, int is 32 or 64 bits depending on the platform), divide by 8, and you get the number of bytes (int8 is one byte, a 32-bit int is four bytes).

Each byte is 0xFF (except for a signed integer, where the most significant byte is 0x7F). Here is the result:

package main

func main() {
   {
      var n int8 = 0x7F
      println(n) // 127
   }
   {
      var n uint8 = 0xFF
      println(n) // 255
   }
   {
      var n int = 0x7FFF_FFFF // the max only if int is 32 bits wide
      println(n)              // 2147483647
   }
   {
      var n uint = 0xFFFF_FFFF // the max only if uint is 32 bits wide
      println(n)               // 4294967295
   }
}


A lightweight package contains them (as well as other integer type limits and some widely used integer functions):

import (
    "fmt"
    "<Full URL>/go-imath/ix"
    "<Full URL>/go-imath/ux"
)
...
fmt.Println(ix.Minimal) // Output: -2147483648 (32-bit) or -9223372036854775808 (64-bit)
fmt.Println(ix.Maximal) // Output: 2147483647 or 9223372036854775807
fmt.Println(ux.Minimal) // Output: 0
fmt.Println(ux.Maximal) // Output: 4294967295 or 18446744073709551615