Embodiments disclose an artificial intelligence chip and a convolutional neural network computation method applied to the artificial intelligence chip, the chip comprising a processor, at least one parallel computing unit, and a pooling computation unit. The method comprises: dividing a convolution task into convolution subtasks and corresponding pooling subtasks; executing the convolution subtasks on different parallel computing units, with the convolution, batch normalization, and non-linear computing operations performed within the same parallel computing unit; sending the execution result of each convolution subtask from its parallel computing unit to the pooling computation unit, which executes the corresponding pooling subtask; and merging the results produced by the pooling computation unit from pooling the outputs of the respective convolution subtasks, thereby obtaining the execution result of the convolution task. This arrangement reduces data transport, so that operations of the convolutional neural network can be accomplished with lower power consumption and in less time on an edge device.
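The following is a minimal sketch, not the disclosed implementation, that illustrates the dataflow described above under assumed simplifications: the convolution task is split along output channels (one slice per hypothetical parallel computing unit), the non-linear operation is taken to be ReLU, and the pooling subtask is 2×2 max pooling. All function names, shapes, and parameters are illustrative assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid 2-D convolution: x is (C_in, H, W), w is (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    h_out, w_out = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((c_out, h_out, w_out))
    for o in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                y[o, i, j] = np.sum(x[:, i:i+k, j:j+k] * w[o])
    return y

def conv_bn_relu_subtask(x, w_slice, gamma, beta, mean, var, eps=1e-5):
    """Work kept inside one parallel computing unit: convolution -> batch norm -> ReLU."""
    y = conv2d(x, w_slice)
    y = gamma[:, None, None] * (y - mean[:, None, None]) / np.sqrt(var[:, None, None] + eps) \
        + beta[:, None, None]
    return np.maximum(y, 0.0)  # non-linear computing operation (assumed ReLU)

def max_pool2x2(y):
    """Work done by the pooling computation unit on one subtask's output."""
    c, h, w = y.shape
    return y[:, :h // 2 * 2, :w // 2 * 2].reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

# Hypothetical sizes: 8 input channels, 16 output channels, two parallel computing units.
x = np.random.randn(8, 16, 16)
w = np.random.randn(16, 8, 3, 3)
gamma, beta = np.ones(16), np.zeros(16)
mean, var = np.zeros(16), np.ones(16)

partial_results = []
for lo, hi in [(0, 8), (8, 16)]:  # one convolution subtask per parallel computing unit
    y = conv_bn_relu_subtask(x, w[lo:hi], gamma[lo:hi], beta[lo:hi], mean[lo:hi], var[lo:hi])
    partial_results.append(max_pool2x2(y))  # corresponding pooling subtask

# Merge the pooled partial results to obtain the execution result of the convolution task.
result = np.concatenate(partial_results, axis=0)
```

Because each subtask's intermediate feature map stays within its parallel computing unit until only the pooled output is handed over and merged, the sketch reflects the stated goal of reducing data transport between units.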