Tagged: Machine Learning

  • Javier Gramajo 6:13 am on October 27, 2010 Permalink | Reply
    Tags: Machine Learning

    Pregunta 91 IA2010 

    Sparse and large-scale learning with heterogeneous data: lay out all the main concepts presented and summarize your ideas, but do not hesitate to extend them if you want.

     
    • Honard Bravo 7:59 pm on October 30, 2010 Permalink

      A current and interesting challenge for machine learning research is the increasing amount of data available for learning and the diversity of information sources.

      The video presents data beyond classical vectorial formats, such as the linked structure of the World Wide Web and the text, images, and sounds on web pages.

      The heterogeneous data contained in a web page, such as content (text, images, structured data, and sounds), relationships (links), and users (log data, behavior), produces classification problems with heterogeneous information sources.

      This heterogeneity can be in terms of structure (relational databases, flat files, etc.) or content (different ontological commitments, which means different assumptions about the objects that exist in the world, the properties or attributes of those objects, the possible values of the attributes, and their intended meaning).

      In the classical approach, a learning algorithm has two components: an information extraction component that formulates and sends a statistical query to a data source, and a hypothesis generation component that uses the resulting statistic to modify a partially constructed hypothesis (further invoking the information extraction component as needed).

      The talk presents two approaches that address sparse, large-scale learning with heterogeneous data, and shows some applications.

      The objective is to create new algorithms that extract the important information from these sources and use it for machine learning.
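      The two-component decomposition described above can be sketched in code. This is only an illustrative toy (the query format, the data layout, and the constant-rule hypothesis are invented for the example), not the talk's actual algorithm:

```python
# Hypothetical sketch of the classical two-component learner: an
# information-extraction component answers statistical queries against a
# data source, and a hypothesis-generation component refines a hypothesis
# from the returned statistics. All names and structures are illustrative.

def extract_statistic(data, query):
    """Information extraction: answer a statistical query over the source."""
    label = query["label"]
    matching = [x for x, y in data if y == label]
    return len(matching) / len(data)  # e.g. a class frequency

def generate_hypothesis(data, labels):
    """Hypothesis generation: build a majority-class rule from statistics."""
    stats = {lab: extract_statistic(data, {"label": lab}) for lab in labels}
    majority = max(stats, key=stats.get)   # invokes extraction as needed
    return lambda x: majority              # trivial hypothesis: constant rule

data = [((1.0,), "spam"), ((0.2,), "ham"), ((0.9,), "spam")]
h = generate_hypothesis(data, ["spam", "ham"])
print(h((0.5,)))  # prints the majority class, "spam"
```

      The point of the decomposition is that only the extraction component has to know how each heterogeneous source is stored and queried.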


      200611078,200611122,200611509

    • Alex Sánchez 11:20 pm on October 30, 2010 Permalink

      Heterogeneous data are used, for example, to have different descriptions, knowledge, and perspectives of the same matter.
      Kernel-based learning is about mapping the data from a linear perspective into other perspectives, such as a higher-dimensional one, enriching its representation before finally passing it to a Support Vector Machine (SVM).
      The idea of using kernel methods with heterogeneous data is to mix the information in kernel matrices to provide a further understanding of the problem. This also works for describing patterns in strings.
      Diffusion kernels establish similarities between the vertices of a graph.
      SDP (semidefinite programming) optimizes convex cost functions over matrices.
      The key to combining multiple kernels is to maximize the margin. This has several uses, such as predicting membrane proteins.
      The main components of PCA analysis are multivariate data analysis and the search for low-dimensional patterns.
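      The kernel-combination idea can be sketched with a toy example. The data, the two kernels, and the fixed weights below are illustrative assumptions; real multiple kernel learning would learn the weights by maximizing the SVM margin, for example via SDP:

```python
import numpy as np

# Illustrative sketch (not the talk's exact method): combining two kernel
# matrices, e.g. from heterogeneous views of the same objects, into one
# kernel, as in multiple kernel learning. The weights mu are fixed here
# for illustration; MKL would learn them by margin maximization.

def linear_kernel(X):
    return X @ X.T

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
K1, K2 = linear_kernel(X), rbf_kernel(X)

mu = [0.5, 0.5]                      # assumed kernel weights
K = mu[0] * K1 + mu[1] * K2          # convex combination of valid kernels

# A convex combination of PSD matrices is PSD: all eigenvalues >= 0,
# so K is itself a valid kernel and can be fed to a kernel SVM.
print(np.all(np.linalg.eigvalsh(K) >= -1e-9))  # True
```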

      Carlos Escobar 200217819
      Ferdy Martinez 200212489
      Alex Sanchez 200117120

  • Javier Gramajo 6:05 am on October 27, 2010 Permalink | Reply
    Tags: Machine Learning

    Pregunta 90 IA2010 

    Optimization for Machine Learning: make a strong discussion of this video.

     
    • Elder Prado 200611078 - Emilio Mendez 200611112 - Honard Bravo 200611509 10:05 pm on October 29, 2010 Permalink

      Elder Prado 200611078 – Emilio Mendez 200611112 – Honard Bravo 200611509
      The talk is about optimization algorithms for managing data so that it is correct, accurate, and available at the time of need.
      But today's applications look for simple solutions, which makes ensuring the correctness of the data, and thus finding exact solutions, more difficult.
      With all that, the primary objective is to reduce and optimize risk, because risk minimization is the fundamental basis of learning algorithms.
      Today's applications have very different needs and requirements from one another, which makes each situation different and requires specialized approaches.
      Even so, some approaches to these algorithms can produce abstract, generic algorithms.
      There is a key concept in the development of these algorithms: duality. It is used primarily to formulate solutions from different perspectives in order to find the best one.
      The video talks primarily about two algorithms for the objective functions.
      Tips:
      Build two or three sequences of iterates.
      Evaluate the gradient at each iteration.
      You can reuse information saved from previous iterations.
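      These tips can be sketched as gradient descent with momentum, where the momentum term is one way of reusing information from previous iterations. The objective, step size, and momentum weight below are illustrative choices, not the ones from the video:

```python
# A minimal sketch of the iteration pattern above: maintain a sequence
# of iterates, evaluate the gradient at each step, and reuse information
# from previous iterations (here, a momentum term). The objective
# f(x) = (x - 3)^2 is just an illustrative convex function.

def grad(x):
    return 2.0 * (x - 3.0)   # gradient of (x - 3)^2

x, velocity = 0.0, 0.0
lr, beta = 0.1, 0.5          # step size and momentum weight (assumed values)
for _ in range(100):
    velocity = beta * velocity - lr * grad(x)   # reuses the last iteration
    x = x + velocity

print(round(x, 4))  # converges to the minimizer x* = 3
```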

    • Alex Sánchez 11:21 pm on October 30, 2010 Permalink

      To optimize in machine learning, we need to build a model that predicts the data, quantify it with a loss function, generalize beyond the data seen, and regularize by penalizing complex models.
      Loss functions can come from exponential-family density estimation, for example for binary classification. These considerations also hold for linear models.
      One optimization strategy is to transform the data from linear to multidimensional representations.
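      The "model + loss + regularization" recipe can be illustrated with an L2-regularized logistic loss for binary classification. The data and the regularization weight below are made up for the example:

```python
import math

# Sketch of the loss-plus-regularization recipe: an L2-regularized
# logistic loss for binary labels y in {-1, +1}, with a single scalar
# weight w. The data and lam are illustrative assumptions.

def objective(w, data, lam=0.1):
    loss = sum(math.log(1 + math.exp(-y * (w * x))) for x, y in data)
    return loss / len(data) + lam * w * w   # empirical risk + penalty

data = [(1.0, 1), (2.0, 1), (-1.0, -1)]
# A weight that separates the data should beat the zero weight here,
# even after paying the complexity penalty:
print(objective(0.0, data) > objective(1.0, data))  # True
```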

      Carlos Escobar 200217819
      Ferdy Martinez 200212489
      Alex Sanchez 200117120

  • Javier Gramajo 6:03 am on October 27, 2010 Permalink | Reply
    Tags: Machine Learning

    Pregunta 89 IA2010 

    The Next Generation of Neural Networks: make an extended discussion of this video.

     
    • Alex Sánchez 12:17 am on October 31, 2010 Permalink

      One of the premises of neural networks is that the brain is much better than today’s computers. The first generation of neural networks used many hand-coded features and tried to learn to recognize objects.
      The second generation propagates errors backwards, and poor propagation has negative consequences. You can build blocks of stochastic neurons. There are algorithms and neural-network models, such as those trained with Gibbs sampling, that are designed to learn; they also exist for comparing images.
      There are also neural-network models for digit recognition with backpropagation. They are also used to find binary codes for documents.
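      The first-generation networks mentioned above can be illustrated with the classic perceptron, trained here on a toy AND problem; the data and hyperparameters are illustrative:

```python
# An illustrative first-generation network: the perceptron, which learns
# a linear decision rule over hand-coded features. The toy data is an
# AND gate; the learning rate and epoch count are assumed values.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = y - predict(w, b, x)      # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(samples)
print([predict(w, b, x) for x, _ in samples])  # learns AND: [0, 0, 0, 1]
```

      The perceptron's limitation to linearly separable problems is part of what motivated the later generations discussed in the video.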

      Carlos Escobar 200217819
      Ferdy Martinez 200212489
      Alex Sanchez 200117120

    • Roberto 12:41 am on November 6, 2010 Permalink

      Neurons in the perceptual system represent features of the sensory input. The brain learns to extract many layers of features: features in one layer represent combinations of simpler features in the layer below. There was a neat learning algorithm for adjusting the weights.

      The energy of a joint configuration of the visible and hidden units determines the probability that the network will choose that configuration. By manipulating the energies of joint configurations, we can manipulate the probabilities that the model assigns to visible vectors.

      To generate data:
      1. Get an equilibrium sample from the top-level RBM by performing alternating Gibbs sampling.
      2. Perform a top-down pass to get states for all the other layers.
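      Step 1, alternating Gibbs sampling in an RBM, can be sketched in a few lines. The weights here are random rather than trained, so this only illustrates the sampling mechanics, not a useful generative model:

```python
import numpy as np

# Sketch of alternating Gibbs sampling in a small RBM. The sizes and
# weights are arbitrary illustrative values, not a trained top-level RBM.

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))   # visible-hidden weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

v = sample(np.full(n_visible, 0.5))             # random initial visible state
for _ in range(100):                            # alternate h | v, then v | h
    h = sample(sigmoid(v @ W))
    v = sample(sigmoid(h @ W.T))

print(v.shape, set(v.tolist()) <= {0.0, 1.0})   # a binary visible sample
```

      In a deep belief net this equilibrium sample from the top-level RBM would then be pushed down through the lower layers in a single top-down pass, as in step 2.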

      References:
      The Next Generation of Neural Networks

      the guilty spark team
      200614790 Aura Luz Cifuentes Reyes
      200611109 Elder Manuel Mutzus Arévalo
      200413417 Otto Roberto Ockrassa Morales

  • Javier Gramajo 4:53 am on October 27, 2010 Permalink | Reply
    Tags: Machine Learning

    Pregunta 87 IA2010 

    Decision Trees: remark on the most important concepts presented in these videos and explain your ideas in terms of linked concepts.

    Use these videos:

     
    • Elder Prado 200611078 - Emilio Mendez 200611112 - Honard Bravo 200611509 8:35 pm on October 29, 2010 Permalink

      Elder Prado 200611078 – Emilio Mendez 200611112 – Honard Bravo 200611509

      Components of a Tree
      Decisions: represented by squares.
      Sequences of events: represented by circles.
      Consequences: the results of the events.

      The Analysis Process Matters
      In the analysis process, you must consider each event that can happen in the decision tree in order to estimate its consequences.

      Process
      While developing the decision tree, drafts should be made so that the tree can be reshaped.

      Estimating Probabilities
      To estimate the probabilities, you must be sure that the tree is correct and that all events are included.

      Estimating Cost
      To estimate costs, once all the possible consequences of the events are defined, proceed to calculate the cost of those consequences.
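      Once probabilities and costs are estimated, the tree is folded back: chance nodes take probability-weighted averages and decision nodes pick the best branch. A minimal sketch, with made-up payoffs and probabilities:

```python
# Illustrative fold-back of a tiny decision tree: chance nodes (circles)
# average their outcome values by probability, decision nodes (squares)
# choose the branch with the best expected value. All numbers are made up.

def expected_value(node):
    kind = node["type"]
    if kind == "outcome":                       # consequence: a payoff
        return node["value"]
    if kind == "chance":                        # event node (circle)
        return sum(p * expected_value(child)
                   for p, child in node["branches"])
    # decision node (square): pick the branch with the best expected value
    return max(expected_value(child) for child in node["options"])

tree = {"type": "decision", "options": [
    {"type": "chance", "branches": [
        (0.6, {"type": "outcome", "value": 100}),
        (0.4, {"type": "outcome", "value": -50}),
    ]},
    {"type": "outcome", "value": 30},           # the safe option
]}
print(expected_value(tree))  # max(0.6*100 + 0.4*(-50), 30) = 40.0
```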

    • Alex Sánchez 11:45 pm on October 30, 2010 Permalink

      Among the components of a decision tree are the decision node, the arcs, and the options. There is a sequence of events, which includes the chance (circular) nodes, the arcs with their event probabilities, and finally the consequences, which include the profits or costs.
      Event analysis reports whether a decision process is correct; it checks whether the analysis is wrong and revises the decision labels accordingly.
      By considering the probabilities and finding data, we can estimate the impact of the processes involved.
      When estimating the costs, some of the cases involved should be reviewed and the maximum difference set.
      Among the myths of the analysis are that all options have been represented, that all the consequences have been listed, and that all the data is valid.

      Carlos Escobar 200217819
      Ferdy Martinez 200212489
      Alex Sanchez 200117120

    • Roberto 12:31 am on November 6, 2010 Permalink

      A tree has three components: decisions, sequences of events, and consequences. The tree is built from these, i.e., each of them forms a layer of the tree, and everything is related in sequence. We can say that a decision triggers a series of events, which end in certain consequences. All of this helps us make decisions about which system we can use.

      References:
      Decision tree

      the guilty spark team
      200614790 Aura Luz Cifuentes Reyes
      200611109 Elder Manuel Mutzus Arévalo
      200413417 Otto Roberto Ockrassa Morales

  • Javier Gramajo 3:52 am on October 27, 2010 Permalink | Reply
    Tags: Machine Learning

    Pregunta 86 IA2010 

    Remark on the most important concepts of this presentation and explain your ideas from your own point of view (in Spanish).

    Use this video:

     
    • Honard Bravo 8:47 pm on October 30, 2010 Permalink

      The important concepts of this presentation:
      • Techniques used for data modeling:
      – Clustering: grouping and discovering multiple groups in the data.
      – Infinite mixture models
      – Bayesian mixture models
      – Bayesian nonparametric models: a new approach for creating models in machine learning.
      • Document modeling
      • Language modeling

      Comment:
      Bayesian nonparametric models have gained considerable attention in machine learning because of their flexibility and because their complexity grows with the amount of data.

      The speaker talks about Dirichlet processes and infinite mixture models; these mixture models are the basis for Bayesian nonparametric models. He also presents the hierarchical Dirichlet process, which can be used for document modeling and language modeling (as shown in the presentation).

      Modeling documents with this kind of process makes it possible to describe documents succinctly and to learn the complexity of the model automatically.
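      The flexibility described above can be illustrated with the Chinese restaurant process, the partition prior underlying Dirichlet process mixture models: the number of clusters is not fixed in advance but grows with the data. The concentration parameter alpha and the simulation are illustrative:

```python
import random

# Sketch of the Chinese restaurant process, the clustering prior behind
# Dirichlet process mixture models. Each new customer joins an existing
# table with probability proportional to its size, or opens a new table
# with probability proportional to alpha (an assumed concentration value).

def crp(n_customers, alpha=1.0, seed=0):
    rng = random.Random(seed)
    tables = []                                  # sizes of existing clusters
    for i in range(n_customers):
        weights = tables + [alpha]               # last slot = open a new table
        choice = rng.choices(range(len(weights)), weights=weights)[0]
        if choice == len(tables):
            tables.append(1)
        else:
            tables[choice] += 1
    return tables

sizes = crp(100)
print(len(sizes), sum(sizes))   # cluster count is learned; sizes sum to 100
```

      The model complexity (the number of tables) is a result of the sampling, not an input, which is exactly the property the comment above highlights.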


      200611078,200611122,200611509

    • Alex Sánchez 12:48 am on October 31, 2010 Permalink

      I think clustering is a very interesting technique for machine learning and data mining; it is important because it helps organize the data in a certain way so it can be processed and queried. The use of a probabilistic event generator is important, since probabilities and statistics have to be considered in clustering. Bayesian models matter because conditional probability statistics are one of the most important qualities of this type of network. Infinite mixture models give us a fairly general solution to the problems this kind of tool tries to solve. Mixing these algorithms with Dirichlet models strengthens the algorithm.

      Carlos Escobar 200217819
      Ferdy Martinez 200212489
      Alex Sanchez 200117120

    • Roberto 12:18 am on November 6, 2010 Permalink

      Machine learning uses several tools to create behaviors and induce knowledge; for this it relies on statistics, since it is based on data analysis. Using clustering, the data can be grouped in multiple ways, where each group uses a mixture model and can overlap with multiple groups. Process structures can also be used; the structures learn automatically.

      Statistical language models are used to estimate models from data. We humans carry out learning automatically; it is such a simple process for us that we do not even notice how it happens and everything it involves. From birth until death, human beings go through different processes, among them learning, through which we acquire knowledge.

      References:
      Document topic modeling
      Pregunta 86 IA2010

      the guilty spark team
      200614790 Aura Luz Cifuentes Reyes
      200611109 Elder Manuel Mutzus Arévalo
      200413417 Otto Roberto Ockrassa Morales

  • Javier Gramajo 3:44 am on October 27, 2010 Permalink | Reply
    Tags: Machine Learning

    Pregunta 85 IA2010 

    Highlight the most important ideas about this video.

    Use this video:

     
    • Elder Prado 200611078 - Emilio Mendez 200611112 - Honard Bravo 200611509 8:36 pm on October 29, 2010 Permalink

      Elder Prado 200611078 – Emilio Mendez 200611112 – Honard Bravo 200611509
      Machine learning is everywhere.
      Machine learning applications give meaning to data, a meaning that is easier to understand; an example would be data mining.
      These capabilities let us better understand certain things that were very difficult to understand.
      It is crucial for the future and for developing economies and cultures.
      It improves the environment in every way and provides assistance in important areas such as medicine and health.

    • Alex Sánchez 1:09 am on October 31, 2010 Permalink

      I think computer science is now effectively involved in all, or almost all, areas of human daily life.
      It is really being used for many things, such as graphic design and facial recognition, among others.
      The next revolution in information technology will be to deploy smarter technology to help the industrial and social sectors.

      Carlos Escobar 200217819
      Ferdy Martinez 200212489
      Alex Sanchez 200117120

    • Roberto 10:09 pm on November 5, 2010 Permalink

      Machine learning

      Getting kids interested in and focused on science, and having all cultures seek their place in technology, will be crucial for the competitiveness and future of the economy.
      The impact of information technology will open up robotics and artificial intelligence, and with them what they mean for business.
      The revolution in biology concerns the biological workings of microbes and how they map onto computer systems. This will open a new world of benefits for companies working in the economy, creating a new future.

      References:
      Alberta Ingenuity Centre for Machine Learning

      the guilty spark team
      200614790 Aura Luz Cifuentes Reyes
      200611109 Elder Manuel Mutzus Arévalo
      200413417 Otto Roberto Ockrassa Morales
