# swift: issue in converting string to double

Here is some simple code in an Xcode 7.3.1 playground:

```swift
var str = "8.7"
print(Double(str))
```

The output is surprising:
`Optional(8.6999999999999993)`

Also, `Float(str)` gives `8.69999981`.

Any thoughts on the reason for this? References would be appreciated.

Also, how should I then convert “8.7” to 8.7 as Double (or Float)?

Edit

In Swift:

```swift
(str as NSString).doubleValue // returns 8.7
```

Now, that is OK. But my question still does not have a complete answer: we have found an alternative, but why can we not rely on `Double("8.7")`? Please give some deeper insight into this.

Edit 2

```swift
("6.9" as NSString).doubleValue // prints 6.9000000000000004
```

So, the question opens up again.

### 3 Answers

There are two different issues here. First – as already mentioned in
the comments – a binary floating point number cannot represent the
number `8.7` precisely. Swift uses the IEEE 754 standard for representing
single- and double-precision floating point numbers, and if you assign

```swift
let x = 8.7
```

then the closest representable number is stored in `x`, and that is

```
8.699999999999999289457264239899814128875732421875
```

See also the well-known Q&A “Is floating point math broken?”.
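To make the stored value visible, one can print it with far more fractional digits than the default string conversion uses. A small sketch (requires Foundation for `String(format:)`; the trailing digits come from rounding the exact binary value):

```swift
import Foundation

let x = 8.7
// Printing with 30 fractional digits exposes the nearest
// representable binary value, which is not exactly 8.7:
print(String(format: "%.30f", x))
// → approximately 8.699999999999999289457264239900
```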

The second issue is: Why is the number sometimes printed as “8.7”
and sometimes as “8.6999999999999993”?

```swift
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)

let x = 8.7
print(x) // 8.7
```

Is `Double("8.7")` different from `8.7`? Is one more precise than
the other?

To answer these questions, we need to know how the `print()`
function works:

• If an argument conforms to `CustomStringConvertible`, the print function calls its `description` property and prints the result
to the standard output.
• Otherwise, if an argument conforms to `CustomDebugStringConvertible`,
the print function calls its `debugDescription` property and prints
the result to the standard output.
• Otherwise, some other mechanism is used. (Not important for our
purposes here.)

The `Double` type conforms to `CustomStringConvertible`, therefore

```swift
let x = 8.7
print(x) // 8.7
```

produces the same output as

```swift
let x = 8.7
print(x.description) // 8.7
```

But what happens in

```swift
let str = "8.7"
print(Double(str)) // Optional(8.6999999999999993)
```

`Double(str)` is an optional, and `struct Optional` does not
conform to `CustomStringConvertible`, but to
`CustomDebugStringConvertible`. Therefore the print function calls
the `debugDescription` property of `Optional`, which in turn
calls the `debugDescription` of the underlying `Double`.
Therefore – apart from being an optional – the number output is
the same as in

```swift
let x = 8.7
print(x.debugDescription) // 8.6999999999999993
```

But what is the difference between `description` and `debugDescription`
for floating point values? From the Swift source code one can see
that both ultimately call the `swift_floatingPointToString`
function in Stubs.cpp, with the `Debug` parameter set to `false` and `true`, respectively.
This controls the precision of the number to string conversion:

```cpp
int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
  Precision = std::numeric_limits<T>::max_digits10;
}
```

For the meaning of those constants, see http://en.cppreference.com/w/cpp/types/numeric_limits:

• `digits10` – number of decimal digits that can be represented without change,
• `max_digits10` – number of decimal digits necessary to differentiate all values of this type.

So `description` creates a string with less decimal digits. That
string can be converted to a `Double` and back to a string giving
the same result.
`debugDescription` creates a string with more decimal digits, so that
any two different floating point values will produce a different output.
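The round-trip property can be checked directly. A small sketch (the exact digit strings printed depend on the Swift version, but the equalities hold):

```swift
let x = 8.7

// `description` uses just enough digits that converting the
// string back yields the identical Double:
assert(Double(x.description)! == x)

// `debugDescription` uses at least as many digits,
// so it round-trips as well:
assert(Double(x.debugDescription)! == x)
```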

Summary:

• Most decimal numbers cannot be represented exactly as a binary
floating point value.
• The `description` and `debugDescription` methods of the floating
point types use a different precision for the conversion to a
string. As a consequence,
• printing an optional floating point value uses a different precision for the conversion than printing a non-optional value.

Therefore in your case, you probably want to unwrap the optional
before printing it:

```swift
let str = "8.7"
if let d = Double(str) {
    print(d) // 8.7
}
```

For better control, use `NSNumberFormatter` or formatted
printing with the `%.<precision>f` format.
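For example, `String(format:)` from Foundation pins the number of fractional digits regardless of what `description` or `debugDescription` would produce:

```swift
import Foundation

let str = "8.7"
if let d = Double(str) {
    // Always exactly one fractional digit:
    print(String(format: "%.1f", d)) // 8.7
}
```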

Another option is to use `(NS)DecimalNumber` instead of `Double`
(e.g. for currency amounts); see e.g. Round Issue in swift.

I would use:

```swift
let doubleValue = NSNumberFormatter().numberFromString(str)?.doubleValue
```

This may also be useful:

```swift
print((str as NSString).doubleValue)
```

Note that `doubleValue` is an `NSString` property, so the cast is required, and it returns `0.0` for strings that cannot be parsed; prefer the failable `Double(str)` initializer when you need to detect invalid input.