Pyspark and UDF types problem - 2020-11-02 12:20:05

Hello! Here is a quick note about something that might not be obvious: be careful with the return types of UDFs in PySpark.

    from pyspark.sql.functions import udf
    from pyspark.sql.types import IntegerType, FloatType

    def very_fun(idk):
        return 22

    def floating_fun(idk):
        return 22.0

    df = sqlContext.createDataFrame(
        [
            (1, 'foo'),
            (2, 'bar'),
        ],
        ['id', 'txt']
    )

    funfun_int = udf(very_fun, IntegerType())
    funfun_float = udf(very_fun, FloatType())
    floatingfun_int = udf(floating_fun, IntegerType())
    floatingfun_float = udf(floating_fun, FloatType())

    df = df.withColumn('funfun_int', funfun_int(df['id']))
    df = df.withColumn('funfun_float', funfun_float(df['id']))
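Applying the two remaining UDFs and showing the result makes the gotcha visible. What follows is my own sketch of that continuation, not the original post: the .show() call and the commented values reflect how PySpark handles a UDF whose Python return value does not match the declared return type (it quietly produces null instead of casting or raising).

    df = df.withColumn('floatingfun_int', floatingfun_int(df['id']))
    df = df.withColumn('floatingfun_float', floatingfun_float(df['id']))
    df.show()

    # Roughly what to expect for each combination:
    #   funfun_int        -> 22    (int returned, IntegerType declared: fine)
    #   funfun_float      -> null  (int returned, FloatType declared: silently null)
    #   floatingfun_int   -> null  (float returned, IntegerType declared: silently null)
    #   floatingfun_float -> 22.0  (float returned, FloatType declared: fine)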

Leaflet on Jupyter - 2017-05-19 08:10:27

To do: Improve text. Give examples. Explain stuff, dude…! You need this package: https://github.com/ellisonbg/ipyleaflet
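A minimal usage sketch so future-me does not have to look it up again (this assumes ipyleaflet is installed with pip and, for the classic notebook of that era, its nbextension enabled; the coordinates and zoom level are arbitrary examples):

    from ipyleaflet import Map, Marker

    m = Map(center=(52.2, 0.12), zoom=10)       # center is (latitude, longitude)
    m.add_layer(Marker(location=(52.2, 0.12)))  # drop a marker at the same point
    m  # evaluating the Map as the last expression in a cell renders it inline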

Jupyter Customization: NBSExtensions and themes - 2017-05-19 08:07:49

To do: Intro, write the article better.

https://github.com/ipython-contrib/jupyter_contrib_nbextensions

Installing nbextensions (full command sequence below):

    sudo pip install jupyter_contrib_nbextensions

Color customization: create the file ~/.jupyter/custom/custom.css. For a ready-made dark theme, try: https://github.com/powerpak/jupyter-dark-theme
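Back to the install: the pip install alone is usually not enough. Going from memory of the project's README (so double-check there), the extension files also have to be copied into Jupyter and each extension enabled by name; roughly like this, where codefolding/main is just an example extension:

    sudo pip install jupyter_contrib_nbextensions
    jupyter contrib nbextension install --user
    jupyter nbextension enable codefolding/main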

SparkR gapply mess - 2017-05-12 08:56:31

Hello. Do not assume anything. Never. Ever. Especially with SparkR (Apache Spark 2.1.0). When using the gapply function, you might want to return the key to tag the results, with a function like this:

    countRows <- function(key, values) {
        df <- data.frame(key=key, nvalues=nrow(values))
        return(df)
    }

    count <- gapplyCollect(data, "keyAttribute", countRows)

SURPRISE. You can't. You should get this error:

asMSX bugfix 1: Ifs on .megarom - 2016-09-03 12:49:45

Hello! Finally, some improvements to asMSX. At the start of this project, JamQue told me he had issues using ifs when the ".megarom" directive is active. The fix can be seen here on GitHub. The problem was that, when generating a byte (for example, for an LD instruction), the assembler checks whether it is allowed to emit it, that is, whether the condition established at the current conditional level allows it. Original: The first if will only affect the if(type!